Facebook Outage Affects Sites That Used Social Network’s Log-In System

Facebook, Instagram and other high-profile sites using Facebook’s log-in system suffered outages lasting around 45 minutes to an hour on Monday evening, Pacific Time.

Dynatrace analysis showed that over 7,500 websites were impacted by the Facebook outage because they were using Facebook as a third-party service.
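The outage illustrates the risk of depending on a third-party service for a critical path like log-in. One common mitigation, sketched below with entirely hypothetical names (`login_with_fallback`, the stand-in verifier), is to wrap the external call so an outage degrades the session rather than taking the whole page down:

```python
def login_with_fallback(verify, token, fallback="guest"):
    """Call a third-party token verifier; if the provider is down
    (timeout or connection failure), degrade to a fallback session
    instead of failing the entire page load."""
    try:
        return verify(token)
    except (TimeoutError, ConnectionError):
        # Provider outage: return a degraded session rather than an error.
        return fallback

def provider_down(token):
    # Stand-in for a third-party verifier that is currently unreachable.
    raise ConnectionError("auth provider unreachable")

print(login_with_fallback(provider_down, "abc123"))  # prints "guest"
```

This is only a sketch of the general pattern (timeouts plus graceful degradation), not a description of how any of the affected sites actually handled the outage.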

Excess Logic continues reposting interesting articles about high-tech, startups and new technologies to draw attention to the recycling of used computer, lab, test, data center, R&D and electronic equipment. Please don’t dispose of used equipment in a dumpster — recycle used electronics with Excess Logic for free.

A group called Lizard Squad claimed responsibility for launching a Distributed Denial of Service (DDoS) attack that took down or slowed many sites using the Facebook authentication system. Facebook denied these claims. The company blamed a configuration change for the outage.

“[It] was not the result of a third-party attack but instead occurred after we introduced a change that affected our configuration systems,” a Facebook spokeswoman told The Wall Street Journal.

Read how to recycle used data center equipment in San Jose, Santa Clara, Sunnyvale

Examples of online services that suffered from the Facebook outage included dating app Tinder and AOL’s instant messenger AIM. Facebook-owned Instagram was also down; Facebook migrated Instagram’s infrastructure from Amazon Web Services to its own data centers after acquiring the company.

Twitter was extremely active during the outage, with commentary from Lizard Squad itself.

Lizard Squad should be familiar to many following a cyberattack on Sony’s PlayStation videogame servers in December. The group also claimed responsibility for hacking the Malaysia Airlines website.

A Facebook spokesperson said, “We moved quickly to fix the problem, and both services are back to 100 percent for everyone.”

Facebook Vice President of Engineering Jay Parikh tweeted, then deleted, a post with NSFW language about how hard it is to serve more than 1.4 billion people. Parikh’s team spends a lot of time and resources on resiliency architecture and testing. One such test involved shutting down an entire data center to see if services would stay up. They did.

For a look at how Facebook manages so many users, check out a recent DCK article on Web caching at Facebook.

Author JASON VERGE
