This appears to be a connectivity issue to/from the Internet or other EC2 regions, confined to a single availability zone in us-east-1. The intra-AZ networks within us-east-1 remained available during the event. One of the AZs we use was affected: no external traffic flows to it. I noticed this because an auto-scaling group was trying to bring up instances inside the affected zone (our us-east-1a) and was unable to contact a server outside of AWS.
I'm definitely seeing issues in multiple AZs. It seems to be partly firewall-related, however: I've seen cases where it's hard to get an initial SYN through, but once a TCP connection is established it stays established.
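The SYN-vs-established distinction above is easy to probe from a still-working host. A minimal sketch (host and port are placeholders, not anything from this thread): a fresh connection attempt requires a new SYN/SYN-ACK handshake to succeed, which is exactly the step that was intermittently failing, while a socket opened before the event would keep flowing.

```python
import socket

def can_establish(host, port, timeout=3.0):
    """Attempt a brand-new TCP connection. Success requires a fresh
    SYN / SYN-ACK handshake; an already-established socket is not
    exercised by this check and may keep working even when this fails."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Running this in a loop against an affected endpoint would show the flapping: intermittent `False` results for new handshakes even while long-lived connections stay up.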
I suspect that status 0 indicates that they are investigating a problem with the server, and it switches to status 1 once the problem has been confirmed.
This is also a good example of poor icon design...they aren't self-explanatory, and so they should not be used.
Non-snarky answer: They want to post that they are investigating an issue, but do not want to comment on the scope of the problem when it is still not fully understood.
We (ScraperWiki) can still access some of our US East servers. From those, we can daisy-chain SSH into the ones that are offline. Those servers can't see the outside world, but they are working fine and can see other EC2 instances.
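That daisy-chain setup can be captured in SSH client config so the extra hop is automatic. A sketch only, with placeholder hostnames and user (nothing here comes from ScraperWiki's actual setup); `ProxyCommand` with `ssh -W` forwards the connection through the still-reachable instance:

```
# ~/.ssh/config -- hop through a reachable instance to stranded ones
# (hostnames and user are placeholders)
Host reachable-gw
    HostName ec2-reachable.example.com
    User ubuntu

Host stranded-*
    ProxyCommand ssh -W %h:%p reachable-gw
    User ubuntu
```

With that in place, `ssh stranded-web1` transparently tunnels via `reachable-gw`, which is useful when the target has lost external connectivity but intra-EC2 routing still works.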
I've recently moved some of our servers over to Digital Ocean, but I'm still using AWS for DNS, since Route 53's weighted DNS with health checks works as a basic load balancer for our needs. Health checks that point at individual servers at Digital Ocean were showing a status of 0.91 (1 being up and 0 being down). The alarms attached to the health checks kept flipping from "alarm" to "ok", causing tons of alerts. As of about 15 minutes ago, all of my checks started holding steady at a status of 1 (ok). Good stuff :)
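One plausible reading of that 0.91 (an assumption on my part, not documented behavior from this thread): each health-check sample reports 0 or 1, and the graphed status is an average over the period, so intermittent failures surface as a fraction. A toy illustration:

```python
# Hypothetical illustration: averaging a 0/1 per-sample health metric.
# If 91 of 100 samples in a period were healthy, the averaged "status"
# reads 0.91 -- enough to flap an alarm thresholded at fully healthy.
samples = [1] * 91 + [0] * 9   # made-up sample counts
average = sum(samples) / len(samples)
print(average)  # 0.91
```

That would also explain the alarm flapping: as the healthy fraction oscillates around the threshold, the alarm state toggles between "alarm" and "ok" on each evaluation.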
ELBs are also having problems. One of mine is reporting all instances out of service (transient error), then all instances in service, intermittently. But the ELB is never reachable (even when it reports all instances healthy and up). All instances behind this one are reachable, up and running. US-East-1.
Some of our other instances are reachable but some are not, same as others have been reporting.
I got into one of our machines that was showing the connectivity issues from another one that was still reachable. It had no external connectivity (curl www.google.com failed). Just two minutes ago it started resolving again.
It looks ok now for us (appfluence.com), but even when it was down, our website was still up, only the sync services went offline. And even then, they were accessible from the web server...
Specific availability zones in a region are mapped per-account, so your east-1c might be my east-1a:
"To ensure that resources are distributed across the Availability Zones for a region, we independently map Availability Zones to identifiers for each account. For example, your Availability Zone us-east-1a might not be the same location as us-east-1a for another account. Note that there's no way for you to coordinate Availability Zones between accounts."
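The per-account mapping quoted above can be pictured as a stable, account-specific permutation of zone labels. Purely illustrative code (AWS's real mapping is internal and not derived from the account ID like this); it just shows how "your us-east-1a" can be "my us-east-1c" while each account's view stays consistent:

```python
import hashlib
import random

def account_az_labels(account_id, zones=("a", "b", "c", "d")):
    """Illustrative only: derive a stable per-account permutation of
    zone letters. Returns a dict mapping the label this account sees
    (e.g. 'us-east-1a') to a notional physical zone index."""
    seed = int(hashlib.sha256(account_id.encode()).hexdigest(), 16)
    letters = list(zones)
    random.Random(seed).shuffle(letters)
    # letters[i] is the label this account sees for physical zone i
    return {f"us-east-1{letters[i]}": i for i in range(len(zones))}
```

The same account always gets the same mapping (it's a deterministic shuffle), but two accounts generally see the physical zones under different letters, which is why outage reports naming "us-east-1a" can't be compared across accounts.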
I wonder how that works with new zones. I remember us-east-1e being added separately to the original four. Presumably, that one's the same for all accounts that'd already signed up at the time.