November 9, 2012 – Social media giant Facebook has detailed and open-sourced Corona, its custom big data cluster scheduling system, built as an alternative to the scheduling functions of MapReduce.

For its big data scaling and scheduling, Facebook used MapReduce from Apache Hadoop as a foundation that “served us well for several years,” according to a Facebook post from its engineers. However, the engineers eventually began to hit a wall: scalability limitations emerged, particularly around the MapReduce task tracker features and the increasingly varied nature of the workloads being scheduled.

In early 2011, Facebook began development and testing of Corona, a cluster manager that tracks nodes in the cluster and the amount of free resources. According to Facebook engineers:

A dedicated job tracker is created for each job, and can run either in the same process as the client (for small jobs) or as a separate process in the cluster (for large jobs). One major difference from our previous Hadoop MapReduce implementation is that Corona uses push-based, rather than pull-based, scheduling. After the cluster manager receives resource requests from the job tracker, it pushes the resource grants back to the job tracker. Also, once the job tracker gets resource grants, it creates tasks and then pushes these tasks to the task trackers for running. There is no periodic heartbeat involved in this scheduling, so the scheduling latency is minimized.
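The flow described above — a per-job job tracker requesting resources, a cluster manager pushing grants back, and tasks being pushed directly to task trackers with no heartbeat polling — can be sketched roughly as follows. This is a minimal illustrative sketch, not Corona's actual code; all class and method names here are hypothetical.

```python
# Hypothetical sketch of push-based scheduling in the style Corona describes.
# None of these classes correspond to Corona's real implementation.

class TaskTracker:
    """Runs tasks that are pushed to it; it never polls for work."""
    def __init__(self, name):
        self.name = name
        self.running = []

    def run_task(self, task):
        # Tasks arrive via push from a job tracker.
        self.running.append(task)

class ClusterManager:
    """Tracks nodes and free slots; pushes resource grants to job trackers."""
    def __init__(self):
        self.free_slots = {}  # tracker -> number of free slots

    def register(self, tracker, slots):
        self.free_slots[tracker] = slots

    def request_resources(self, job_tracker, n):
        # On receiving a request, immediately push grants back —
        # no periodic heartbeat is involved, so latency stays low.
        grants = []
        for tracker, free in self.free_slots.items():
            while free > 0 and len(grants) < n:
                grants.append(tracker)
                free -= 1
            self.free_slots[tracker] = free
        job_tracker.receive_grants(grants)

class JobTracker:
    """One per job; asks for resources, then pushes tasks to task trackers."""
    def __init__(self, job_name, tasks):
        self.job_name = job_name
        self.pending = list(tasks)

    def receive_grants(self, grants):
        # As soon as grants arrive, create tasks and push them out.
        for tracker in grants:
            if not self.pending:
                break
            tracker.run_task(self.pending.pop(0))

cm = ClusterManager()
tt1, tt2 = TaskTracker("node-1"), TaskTracker("node-2")
cm.register(tt1, 2)
cm.register(tt2, 1)

jt = JobTracker("word-count", ["map-0", "map-1", "map-2"])
cm.request_resources(jt, 3)
print(tt1.running, tt2.running)  # all three tasks placed with no polling
```

The key contrast with pull-based Hadoop scheduling is visible here: no component waits for a heartbeat interval before learning about free slots, so the scheduling round-trip is a single request/grant/push exchange.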

Facebook began a three-month, three-phase change-over of workloads from MapReduce onto Corona in the spring of this year. Facebook’s engineers have reported improvements in cluster utilization, job latency, scheduling fairness and the time needed to refill free slots, and they continue to work on online upgrades to the cluster manager and trackers, as well as on how tasks can be scheduled.

In addition, Facebook has open-sourced the version of Corona it’s presently running and has invited users to take a deeper look via a repository on GitHub. Facebook’s engineers are interested in Corona’s capabilities for scheduling beyond MapReduce jobs and want to expand its use to other applications such as Peregrine.

Facebook runs custom-built infrastructure for data warehousing and analytics for approximately 1,000 technical and non-technical daily users across the company. Site engineers estimate that more than one petabyte of new data arrives every day in the Facebook warehouse, which has expanded by 2,500 times in the past four years. In terms of scalability, Facebook engineers write that their largest cluster has more than 100 petabytes of data and their team processes more than 60,000 Hive queries per day.

Corona is not alone in Facebook’s recent moves in the big data and open source areas. Project Prism was formally launched in August to unify its tech teams’ views of massive data loads spread across various data centers, and Facebook received more big name support in May for its Open Compute Project to standardize data center infrastructure and hardware.
