How GoDaddy built a data mesh to decentralize data ownership using AWS Lake Formation

This is a guest post co-written with Ankit Jhalaria from GoDaddy.

GoDaddy is empowering everyday entrepreneurs by providing all the help and tools to succeed online. With more than 20 million customers worldwide, GoDaddy is the place people come to name their idea, build a professional website, attract customers, and manage their work.

GoDaddy is a data-driven company, and getting meaningful insights from data helps them drive business decisions to delight their customers. In 2018, GoDaddy began a large infrastructure revamp and partnered with AWS to innovate faster than ever before to meet the needs of its customer growth around the world. As part of this revamp, the GoDaddy Data Platform team wanted to set the company up for long-term success by creating a well-defined data strategy and setting goals to decentralize the ownership and processing of data.

In this post, we discuss how GoDaddy uses AWS Lake Formation to simplify security management and data governance at scale, and enable data as a service (DaaS) supporting organization-wide data accessibility with cross-account data sharing using a data mesh architecture.

The challenge

In the vast ocean of data, deriving useful insights is an art. Prior to the AWS partnership, GoDaddy had a shared Hadoop cluster on premises that various teams used to create and share datasets with other analysts for collaboration. As the teams grew, copies of data started to grow in the Hadoop Distributed File System (HDFS). Several teams started to build tooling to manage this challenge independently, duplicating efforts. Managing permissions on these data assets became harder. Making data discoverable across a growing number of data catalogs and systems had started to become a big challenge. Although the cost of storage these days is relatively inexpensive, having several copies of the same data asset available makes it harder for analysts to use the data efficiently and reliably. Business analysts need robust pipelines on the key datasets they rely on to make business decisions.

Solution overview

In GoDaddy’s data mesh hub and spoke model, a central data catalog contains information about all the data products that exist in the company. In AWS terminology, this is the AWS Glue Data Catalog. The data platform team provides APIs, SDKs, and Airflow Operators as components that different teams use to interact with the catalog. Activities such as updating the metastore to reflect a new partition for a given data product, and occasionally running MSCK repair operations, are all handled in the central governance account, and Lake Formation is used to secure access to the Data Catalog.

The data platform team introduced a layer of data governance that ensures best practices for building data products are followed throughout the company. We provide the tooling to support data engineers and business analysts while leaving the domain experts to run their data pipelines. With this approach, we have well-curated data products that are intuitive and easy to understand for our business analysts.

A data product refers to an entity that powers insights for analytical purposes. In simple terms, this could refer to an actual dataset pointing to a location in Amazon Simple Storage Service (Amazon S3). Data producers are responsible for processing the data and creating new snapshots or partitions depending on the business needs. In some cases, data is refreshed every 24 hours; in other cases, every hour. Data consumers come to the data mesh to consume data, and permissions are managed in the central governance account through Lake Formation. Lake Formation uses AWS Resource Access Manager (AWS RAM) to send resource shares to different consumer accounts so they can access the data from the central governance account. We go into detail about this functionality later in the post.

The following diagram illustrates the solution architecture.

Solution architecture illustrated

Defining metadata with the central schema repository

Data is only useful if end-users can derive meaningful insights from it; otherwise, it’s just noise. As part of onboarding with the data platform, a data producer registers their schema with the data platform along with relevant metadata. This is reviewed by the data governance team, which ensures best practices for creating datasets are followed. We have automated some of the most common data governance review items. This is also where producers define a contract about reliable data deliveries, often referred to as a Service Level Objective (SLO). After a contract is in place, the data platform team’s background processes monitor for deliveries and send out alerts when data producers fail to meet their contract or SLO.

When managing permissions with Lake Formation, you register the Amazon S3 locations of the various S3 buckets. Lake Formation uses AWS RAM to share the named resource.
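As a minimal sketch of that registration step (the bucket and prefix below are hypothetical, and the helper is illustrative rather than GoDaddy’s actual tooling), the Lake Formation `register_resource` API takes the ARN of the S3 location:

```python
def build_register_resource_request(bucket, prefix):
    """Build a payload for Lake Formation's register_resource API (illustrative)."""
    return {
        # ARN of the S3 location to place under Lake Formation management
        "ResourceArn": "arn:aws:s3:::{}/{}".format(bucket, prefix),
        # Let Lake Formation access the location via its service-linked role
        "UseServiceLinkedRole": True,
    }

# Hypothetical bucket and prefix; in the central governance account you would run:
# boto3.client("lakeformation").register_resource(**request)
request = build_register_resource_request("example-data-products", "orders")
```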

When managing resources with AWS RAM, the central governance account creates AWS RAM shares. The data platform provides a custom AWS Service Catalog product to accept AWS RAM shares in consumer accounts.
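A sketch of the acceptance logic such a product might run in a consumer account (the helper and the account ID are hypothetical; the invitation fields follow the AWS RAM API response shape):

```python
def pending_invitation_arns(invitations, central_account_id):
    """Filter RAM share invitations down to pending ones from the governance account."""
    return [
        inv["resourceShareInvitationArn"]
        for inv in invitations
        if inv["senderAccountId"] == central_account_id and inv["status"] == "PENDING"
    ]

# A Lambda in the consumer account could then accept each share (illustrative):
# ram = boto3.client("ram")
# resp = ram.get_resource_share_invitations()
# for arn in pending_invitation_arns(resp["resourceShareInvitations"], "111122223333"):
#     ram.accept_resource_share_invitation(resourceShareInvitationArn=arn)
```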

Having consistent schemas with meaningful names and descriptions makes the discovery of datasets easy. Every data producer who is a domain expert is responsible for creating well-defined schemas that business users use to generate insights and make key business decisions. Data producers register their schemas along with additional metadata with the data lake repository. Metadata includes information about the team responsible for the dataset, such as their SLO contract, description, and contact information. This information gets checked into a Git repository, where automation kicks in and validates the request to make sure it conforms to standards and best practices. We use AWS CloudFormation templates to provision resources. The following code is a sample of what the registration metadata looks like.

Sample code of what the registration metadata looks like
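The original sample is shown as an image; a hypothetical sketch of such registration metadata, expressed here as a Python dict, might look like the following (every field name and value is illustrative, not GoDaddy’s actual schema):

```python
# Hypothetical registration metadata; all field names and values are illustrative
registration = {
    "data_product": "orders_daily",
    "s3_location": "s3://example-data-products/orders/",
    "owner_team": "commerce-analytics",
    "contact": "#commerce-analytics-alerts",  # Slack channel for SLO breach alerts
    "description": "Daily snapshot of completed orders",
    "slo": {
        # Cron expression the platform turns into an EventBridge rule
        "delivery_schedule": "cron(0 6 * * ? *)",  # expected by 06:00 UTC daily
        "grace_period_minutes": 60,
    },
    "schema": [
        {"name": "order_id", "type": "string", "description": "Unique order identifier"},
        {"name": "order_total", "type": "decimal(10,2)", "description": "Order value in USD"},
    ],
}
```

In practice this would live as a file in the Git repository, where the validation automation checks it against the platform’s standards before provisioning.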

As part of the registration process, automation steps run in the background to take care of the following on behalf of the data producer:

  • Register the producer’s Amazon S3 location of the data with Lake Formation – This allows us to use Lake Formation for fine-grained access control on the table in the AWS Glue Data Catalog that refers to this location, as well as on the underlying data.
  • Create the underlying AWS Glue database and table – Based on the schema specified by the data producer along with the metadata, we create the underlying AWS Glue database and table in the central governance account. As part of this, we also use AWS Glue table properties to store additional metadata to use later for analysis.
  • Define the SLO contract – Any business-critical dataset needs to have a well-defined SLO contract. As part of dataset registration, the data producer defines a contract with a cron expression that the data platform uses to create an event rule in Amazon EventBridge. This rule triggers an AWS Lambda function to watch for deliveries of the data and sends an alert to the data producer’s Slack channel if they breach the contract.
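The SLO-watch step above could be wired up roughly as follows (the rule name, function name, and helper are hypothetical); the producer’s cron expression becomes an EventBridge schedule whose target is the watcher Lambda:

```python
def build_slo_watch_rule(product_name, cron_expression, watcher_lambda_arn):
    """Build put_rule/put_targets payloads for an EventBridge SLO-watch rule (illustrative)."""
    rule_name = "slo-watch-{}".format(product_name)
    put_rule = {
        "Name": rule_name,
        "ScheduleExpression": cron_expression,  # e.g. "cron(0 6 * * ? *)"
        "State": "ENABLED",
    }
    put_targets = {
        "Rule": rule_name,
        "Targets": [{"Id": "slo-watcher", "Arn": watcher_lambda_arn}],
    }
    return put_rule, put_targets

# In the central governance account (illustrative):
# events = boto3.client("events")
# rule, targets = build_slo_watch_rule("orders_daily", "cron(0 6 * * ? *)", lambda_arn)
# events.put_rule(**rule)
# events.put_targets(**targets)
```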

Consuming data from the data mesh catalog

When a data consumer belonging to a given line of business (LOB) identifies the data product that they’re interested in, they submit a request to the central governance team containing the AWS account ID that they use to query the data. The data platform provides a portal to discover datasets across the company. After the request is approved, automation runs to create an AWS RAM share with the consumer account covering the AWS Glue database and tables mapped to the data product registered in the AWS Glue Data Catalog of the central governance account.
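Under the hood, a cross-account share like this can be expressed as a Lake Formation grant to the consumer account principal, which Lake Formation materializes as an AWS RAM share. A minimal sketch, where the account ID, database, and table names are hypothetical:

```python
def build_cross_account_grant(consumer_account_id, database, table):
    """Build a Lake Formation grant_permissions payload for a consumer account (illustrative)."""
    return {
        # Granting to an external account principal triggers the AWS RAM share
        "Principal": {"DataLakePrincipalIdentifier": consumer_account_id},
        "Resource": {"Table": {"DatabaseName": database, "Name": table}},
        "Permissions": ["SELECT", "DESCRIBE"],
    }

# In the central governance account (illustrative):
# boto3.client("lakeformation").grant_permissions(**grant)
grant = build_cross_account_grant("444455556666", "orders_db", "orders_daily")
```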

The following screenshot shows an example of a resource share.

Example of a resource share

The consumer data lake admin needs to accept the AWS RAM share and create a resource link in Lake Formation to start querying the shared dataset within their account. We automated this process by building an AWS Service Catalog product that runs in the consumer’s account as a Lambda function that accepts shares on behalf of consumers.
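Creating the resource link itself can be sketched as a Glue `create_database` call whose `TargetDatabase` points back at the shared database in the central governance account (the names and account ID below are hypothetical):

```python
def build_resource_link(local_name, central_account_id, shared_database):
    """Build a Glue create_database payload that defines a resource link (illustrative)."""
    return {
        "DatabaseInput": {
            "Name": local_name,
            # Pointing at the central catalog makes this database a resource link
            "TargetDatabase": {
                "CatalogId": central_account_id,
                "DatabaseName": shared_database,
            },
        }
    }

# In the consumer account (illustrative):
# boto3.client("glue").create_database(**link)
link = build_resource_link("orders_db_link", "111122223333", "orders_db")
```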

When the resource-linked datasets are available in the consumer account, the consumer data lake admin provides grants to the IAM users and roles mapping to data consumers within the account. These consumers (application or user personas) can then query the datasets using the AWS analytics services of their choice, such as Amazon Athena and Amazon EMR, based on the access privileges granted by the consumer data lake admin.

Day-to-day operations and metrics

Managing permissions with Lake Formation is one part of the overall ecosystem. After permissions have been granted, data producers create new snapshots of the data at a certain cadence, which can vary from every 15 minutes to a day. Data producers are integrated with the data platform APIs, which inform the platform about any new refreshes of the data. The data platform automatically writes a 0-byte _SUCCESS file for every dataset that gets refreshed, and notifies the subscribed consumer account via an Amazon Simple Notification Service (Amazon SNS) topic in the central governance account. Consumers use this as a signal to trigger their data pipelines and processes to start processing the newer version of the data using an event-driven approach.
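A sketch of that refresh signal (the bucket, topic ARN, key layout, and message shape are all hypothetical): the platform writes an empty `_SUCCESS` marker next to the new snapshot and publishes a notification for subscribed consumers:

```python
import json

def build_refresh_signal(dataset_prefix, snapshot, topic_arn):
    """Build the _SUCCESS marker key and SNS publish payload for a refresh (illustrative)."""
    marker_key = "{}/{}/_SUCCESS".format(dataset_prefix, snapshot)
    publish = {
        "TopicArn": topic_arn,
        # Consumers parse this message to decide which pipeline to trigger
        "Message": json.dumps({"dataset": dataset_prefix, "snapshot": snapshot}),
    }
    return marker_key, publish

# In the central governance account (illustrative):
# s3.put_object(Bucket="example-data-products", Key=marker_key, Body=b"")
# sns.publish(**publish)
marker_key, publish = build_refresh_signal(
    "orders", "dt=2022-10-01", "arn:aws:sns:us-east-1:111122223333:data-refresh"
)
```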

There are over 2,000 data products built on the GoDaddy data mesh on AWS. Every day, there are millions of updates to the AWS Glue metastore in the central data governance account. There are hundreds of data producers producing data every hour in a wide array of S3 buckets, and thousands of data consumers consuming data across a wide array of tools, including Athena, Amazon EMR, and Tableau, from different AWS accounts.

Business outcomes

With the move to AWS, GoDaddy’s Data Platform team laid the foundations to build a modern data platform that has increased our velocity of building data products and delighting our customers. The data platform has successfully transitioned from a monolithic platform to a model where ownership of data has been decentralized. We accelerated data platform adoption to over 10 lines of business and over 300 teams globally, and are successfully managing multiple petabytes of data spread across hundreds of accounts to help our business derive insights faster.


Conclusion

GoDaddy’s hub and spoke data mesh architecture built using Lake Formation simplifies security management and data governance at scale, and delivers data as a service supporting company-wide data accessibility. Our data mesh manages multiple petabytes of data across hundreds of accounts, enabling decentralized ownership of well-defined datasets with automation in place, which helps the business discover data assets quicker and derive business insights faster.

This post illustrates the use of Lake Formation to build a data mesh architecture that enables a DaaS model for a modernized enterprise data platform. For more information, see Design a data mesh architecture using AWS Lake Formation and AWS Glue.

About the Authors

Ankit Jhalaria is the Director of Engineering on the Data Platform at GoDaddy. He has over 10 years of experience working in big data technologies. Outside of work, Ankit loves hiking, playing board games, building IoT projects, and contributing to open-source projects.

Harsh Vardhan is an AWS Solutions Architect, specializing in Analytics. He has over 6 years of experience working in the field of big data and data science. He is passionate about helping customers adopt best practices and discover insights from their data.

Kyle Tedeschi is a Principal Solutions Architect at AWS. He enjoys helping customers innovate, transform, and become leaders in their respective domains. Outside of work, Kyle is an avid snowboarder, car enthusiast, and traveler.
