Today, we're happy to announce new AWS Glue connectors for Azure Blob Storage and Azure Data Lake Storage that let you move data bi-directionally between Azure Blob Storage, Azure Data Lake Storage, and Amazon Simple Storage Service (Amazon S3).
We've seen a demand to design applications that enable data to be portable across cloud environments and give you the ability to derive insights from multiple data sources. One of the data sources you can now quickly integrate with is Azure Blob Storage, a managed service for storing both unstructured and structured data, and Azure Data Lake Storage, a data lake for analytics workloads. With these connectors, you can bring the data from Azure Blob Storage and Azure Data Lake Storage separately to Amazon S3.
In this post, we use Azure Blob Storage as an example and demonstrate how the new connector works, introduce the connector's functions, and provide you with the key steps to set it up. We provide you with prerequisites, share how to subscribe to this connector in AWS Marketplace, and describe how to create and run AWS Glue for Apache Spark jobs with it. Regarding the Azure Data Lake Storage Gen2 connector, we highlight any major differences in this post.
AWS Glue is a serverless data integration service that makes it simple to discover, prepare, and combine data for analytics, machine learning, and application development. AWS Glue natively integrates with various data stores such as MySQL, PostgreSQL, MongoDB, and Apache Kafka, along with AWS data stores such as Amazon S3, Amazon Redshift, Amazon Relational Database Service (Amazon RDS), and Amazon DynamoDB. AWS Glue Marketplace connectors allow you to discover and integrate additional data sources, such as software as a service (SaaS) applications and your custom data sources. With just a few clicks, you can search for and select connectors from AWS Marketplace and begin your data preparation workflow in minutes.
How the connectors work
In this section, we discuss how the new connectors work.
Azure Blob Storage connector
This connector relies on the Spark DataSource API and calls Hadoop's FileSystem interface. The latter has implemented libraries for reading and writing various distributed or traditional storage systems. This connector also includes the hadoop-azure module, which lets you run Apache Hadoop or Apache Spark jobs directly with data in Azure Blob Storage. AWS Glue loads the library from the Amazon Elastic Container Registry (Amazon ECR) repository during initialization (as a connector), reads the connection credentials using AWS Secrets Manager, and reads data source configurations from input parameters. When AWS Glue has internet access, the Spark job in AWS Glue can read from and write to Azure Blob Storage.
We support two methods for authentication: the authentication key for Shared Key and shared access signature (SAS) tokens.
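As a rough illustration only (the connector reads the credentials from the Secrets Manager secret, so you don't set these yourself), the bundled hadoop-azure module expresses the two methods through Hadoop configuration properties along the following lines; all angle-bracket values are placeholders:

```python
# Illustrative only: how the bundled hadoop-azure module expresses the two
# authentication methods. The Glue connector sets these for you from the
# Secrets Manager secret; all angle-bracket values are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
hadoop_conf = spark.sparkContext._jsc.hadoopConfiguration()

# Option 1: Shared Key (storage account access key)
hadoop_conf.set(
    "fs.azure.account.key.<storage-account>.blob.core.windows.net", "<account-key>"
)

# Option 2: shared access signature (SAS) token scoped to a container
hadoop_conf.set(
    "fs.azure.sas.<container>.<storage-account>.blob.core.windows.net", "<sas-token>"
)

# With either option configured, Spark can read the wasbs:// path directly.
df = spark.read.csv("wasbs://<container>@<storage-account>.blob.core.windows.net/input_data/")
```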
Azure Data Lake Storage Gen2 connector
The usage of the Azure Data Lake Storage Gen2 connector is much the same as that of the Azure Blob Storage connector. The Azure Data Lake Storage Gen2 connector uses the same library as the Azure Blob Storage connector, and relies on the Spark DataSource API, Hadoop's FileSystem interface, and the Azure Blob Storage connector for Hadoop.
As of this writing, we only support the Shared Key authentication method.
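As a similar illustration (again, the connector handles this from the secret), the Shared Key property for Azure Data Lake Storage Gen2 targets the dfs endpoint and the abfss:// scheme; the values below are placeholders:

```python
# Illustrative only: Shared Key configuration for Azure Data Lake Storage Gen2
# (ABFS). All angle-bracket values are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
spark.sparkContext._jsc.hadoopConfiguration().set(
    "fs.azure.account.key.<storage-account>.dfs.core.windows.net", "<account-key>"
)
df = spark.read.csv("abfss://<container>@<storage-account>.dfs.core.windows.net/input_data/")
```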
Solution overview
The following architecture diagram shows how AWS Glue connects to Azure Blob Storage for data ingestion.
In the following sections, we show you how to create a new secret for Azure Blob Storage in Secrets Manager, subscribe to the AWS Glue connector, and move data from Azure Blob Storage to Amazon S3.
Prerequisites
You need the following prerequisites:
- A storage account in Microsoft Azure and your data path in Azure Blob Storage. Prepare the storage account credentials in advance. For instructions, refer to Create a storage account shared key.
- A Secrets Manager secret to store a Shared Key secret, using one of the supported authentication methods.
- An AWS Identity and Access Management (IAM) role for the AWS Glue job with the following policies:
  - AWSGlueServiceRole, which allows the AWS Glue service role access to related services.
  - AmazonEC2ContainerRegistryReadOnly, which provides read-only access to Amazon ECR repositories. This policy is for using AWS Marketplace's connector libraries.
  - A Secrets Manager policy, which provides read access to the secret in Secrets Manager.
  - An S3 bucket policy for the S3 bucket where you need to load ETL (extract, transform, and load) data from Azure Blob Storage.
Create a new secret for Azure Blob Storage in Secrets Manager
Complete the following steps to create a secret in Secrets Manager to store the Azure Blob Storage connection strings using the Shared Key authentication method (a boto3 equivalent is sketched after these steps):
- On the Secrets Manager console, choose Secrets in the navigation pane.
- Choose Store a new secret.
- For Secret type, select Other type of secret.
- Replace the values for accountName, accountKey, and container with your own values.
- Leave the rest of the options at their defaults.
- Choose Next.
- Provide a name for the secret, such as azureblobstorage_credentials.
- Follow the rest of the steps to store the secret.
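If you prefer to script this step, the following is a minimal boto3 sketch that stores the same three fields; the account name, account key, and container values are placeholders:

```python
import json

import boto3

# Store the Azure Blob Storage Shared Key credentials as a single secret.
# Replace the placeholder values with your own storage account details.
secretsmanager = boto3.client("secretsmanager")
secretsmanager.create_secret(
    Name="azureblobstorage_credentials",
    SecretString=json.dumps(
        {
            "accountName": "<your-storage-account-name>",
            "accountKey": "<your-storage-account-key>",
            "container": "<your-container-name>",
        }
    ),
)
```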
Subscribe to the AWS Glue connector for Azure Blob Storage
To subscribe to the connector, complete the following steps:
- Navigate to the Azure Blob Storage Connector for AWS Glue in AWS Marketplace.
- On the product page for the connector, use the tabs to view information about the connector, then choose Continue to Subscribe.
- Review the pricing terms and the seller's End User License Agreement, then choose Accept Terms.
- Continue to the next step by choosing Continue to Configuration.
- On the Configure this software page, choose the fulfillment option and the version of the connector to use.
We have provided two options for the Azure Blob Storage Connector: AWS Glue 3.0 and AWS Glue 4.0. In this example, we focus on AWS Glue 4.0.
- Choose Continue to Launch.
- On the Launch this software page, choose Usage instructions to review the usage instructions provided by AWS.
- When you're ready to continue, choose Activate the Glue connector from AWS Glue Studio.
The console displays the Create marketplace connection page in AWS Glue Studio.
Move data from Azure Blob Storage to Amazon S3
To move your data to Amazon S3, you must configure the custom connection and then set up an AWS Glue job.
Create a custom connection in AWS Glue
An AWS Glue connection stores connection information for a particular data store, including login credentials, URI strings, virtual private cloud (VPC) information, and more. Complete the following steps to create your connection:
- On the AWS Glue console, choose Connectors in the navigation pane.
- Choose Create connection.
- For Connector, choose Azure Blob Storage Connector for AWS Glue.
- For Name, enter a name for the connection (for example, AzureBlobStorageConnection).
- Enter an optional description.
- For AWS secret, enter the secret you created (azureblobstorage_credentials).
- Choose Create connection and activate connector.
The connector and connection information is now visible on the Connectors page.
Create an AWS Glue job and configure connection options
Complete the following steps:
- On the AWS Glue console, choose Connectors in the navigation pane.
- Choose the connection you created (AzureBlobStorageConnection).
- Choose Create job.
- For Name, enter Azure Blob Storage Connector for AWS Glue. This name should be unique among all the nodes for this job.
- For Connection, choose the connection you created (AzureBlobStorageConnection).
- For Key, enter path, and for Value, enter your Azure Blob Storage URI. For example, when we created our new secret, we already set a container value for Azure Blob Storage, so here we enter the file path /input_data/.
- Enter another key-value pair. For Key, enter fileFormat. For Value, enter csv, because our sample data is in this format.
- Optionally, if the CSV file contains a header line, enter another key-value pair. For Key, enter header. For Value, enter true.
- To preview your data, choose the Data preview tab, then choose Start data preview session and choose the IAM role defined in the prerequisites.
- Choose Confirm and wait for the results to display.
- Select S3 as the target location.
- Choose Browse S3 to see the S3 buckets that you have access to, and choose one as the target destination for the data output.
- For the other options, use the default values.
- On the Job details tab, for IAM Role, choose the IAM role defined in the prerequisites.
- For Glue version, choose your AWS Glue version.
- Continue to create your ETL job. For instructions, refer to Creating ETL jobs with AWS Glue Studio.
- Choose Run to run your job.
When the job is complete, you can navigate to the Run details page on the AWS Glue console and check the logs in Amazon CloudWatch.
The data is ingested into Amazon S3, as shown in the following screenshot. We are now able to import data from Azure Blob Storage to Amazon S3.
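For reference, the following is a rough script-mode sketch of the same job, assuming the connection name (AzureBlobStorageConnection) and the option keys used above; it is not the exact script that AWS Glue Studio generates, and the target S3 path is a placeholder:

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glueContext = GlueContext(SparkContext.getOrCreate())
job = Job(glueContext)
job.init(args["JOB_NAME"], args)

# Read from Azure Blob Storage through the Marketplace connector connection.
source = glueContext.create_dynamic_frame.from_options(
    connection_type="marketplace.spark",
    connection_options={
        "connectionName": "AzureBlobStorageConnection",
        "path": "/input_data/",
        "fileFormat": "csv",
        "header": "true",
    },
    transformation_ctx="source",
)

# Write the data to the target S3 location as CSV.
glueContext.write_dynamic_frame.from_options(
    frame=source,
    connection_type="s3",
    connection_options={"path": "s3://<your-target-bucket>/output_data/"},
    format="csv",
    transformation_ctx="sink",
)

job.commit()
```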
Scaling considerations
In this example, we use the default AWS Glue capacity, 10 DPUs (Data Processing Units). A DPU is a standardized unit of processing capacity that consists of 4 vCPUs of compute capacity and 16 GB of memory. To scale your AWS Glue job, you can increase the number of DPUs and also take advantage of Auto Scaling. With Auto Scaling enabled, AWS Glue automatically adds and removes workers from the cluster depending on the workload. After you choose the maximum number of workers, AWS Glue adapts the size of resources to the workload.
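For example, the following boto3 sketch turns on Auto Scaling for the job created earlier and caps it at 10 workers; the job name is a placeholder, and this assumes an AWS Glue 3.0 or later job:

```python
import boto3

glue = boto3.client("glue")
job_name = "<your-glue-job-name>"  # placeholder for the job created above

# Carry over the existing job definition, enable Auto Scaling, and cap the job
# at 10 G.1X workers (1 DPU each). Glue then scales the workers with the workload.
job = glue.get_job(JobName=job_name)["Job"]
default_args = job.get("DefaultArguments", {})
default_args["--enable-auto-scaling"] = "true"

glue.update_job(
    JobName=job_name,
    JobUpdate={
        "Role": job["Role"],
        "Command": job["Command"],
        "DefaultArguments": default_args,
        "GlueVersion": job.get("GlueVersion", "4.0"),
        "WorkerType": "G.1X",
        "NumberOfWorkers": 10,
    },
)
```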
Clean up
To clean up your resources, complete the following steps:
- Remove the AWS Glue job and the secret in Secrets Manager. The following is a minimal boto3 sketch, assuming the secret name used in this post and a placeholder job name:
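```python
import boto3

# Delete the AWS Glue job created above ("<your-glue-job-name>" is a placeholder).
boto3.client("glue").delete_job(JobName="<your-glue-job-name>")

# Delete the secret that stores the Azure Blob Storage credentials.
boto3.client("secretsmanager").delete_secret(
    SecretId="azureblobstorage_credentials",
    ForceDeleteWithoutRecovery=True,
)
```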
- If you are not going to use this connector, you can cancel the subscription to the Azure Blob Storage connector:
  - On the AWS Marketplace console, go to the Manage subscriptions page.
  - Select the subscription for the product that you want to cancel.
  - On the Actions menu, choose Cancel subscription.
  - Read the information provided and select the acknowledgement check box.
  - Choose Yes, cancel subscription.
- Delete the data in the S3 bucket that you used in the previous steps.
Conclusion
In this post, we showed how to use AWS Glue and the new connector to ingest data from Azure Blob Storage to Amazon S3. This connector provides access to Azure Blob Storage, facilitating cloud ETL processes for operational reporting, backup and disaster recovery, data governance, and more.
We welcome any feedback or questions in the comments section.
Appendix
When you need SAS token authentication for Azure Data Lake Storage Gen2, you can use the Azure SAS Token Provider for Hadoop. To do this, add the JAR file to your S3 bucket and configure your AWS Glue job to set the S3 location in the job parameter --extra-jars (in AWS Glue Studio, Dependent JARs path). Then save the SAS token in Secrets Manager and set the value of spark.hadoop.fs.azure.sas.fixed.token.<azure storage account>.dfs.core.windows.net in SparkConf using script mode at runtime. Learn more in the README.
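The following is a minimal script-mode sketch of that setup, assuming the SAS token is stored in a Secrets Manager secret named azure_adls_sas_token (a hypothetical name), the JAR has already been supplied through --extra-jars, and <storage-account> is a placeholder for your Azure storage account name:

```python
import boto3
from awsglue.context import GlueContext
from pyspark.conf import SparkConf
from pyspark.context import SparkContext

# Read the SAS token from Secrets Manager.
# "azure_adls_sas_token" is a hypothetical secret name; use your own.
sas_token = boto3.client("secretsmanager").get_secret_value(
    SecretId="azure_adls_sas_token"
)["SecretString"]

# "<storage-account>" is a placeholder for your Azure storage account name.
conf = SparkConf()
conf.set(
    "spark.hadoop.fs.azure.sas.fixed.token.<storage-account>.dfs.core.windows.net",
    sas_token,
)

# Build the Glue context from the customized SparkConf before reading any data.
sc = SparkContext(conf=conf)
glueContext = GlueContext(sc)
```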
About the authors
Qiushuang Feng is a Solutions Architect at AWS, responsible for Enterprise customers' technical architecture design, consulting, and design optimization on AWS Cloud services. Before joining AWS, Qiushuang worked at IT companies such as IBM and Oracle, and accumulated rich practical experience in development and analytics.
Noritaka Sekiyama is a Principal Big Data Architect on the AWS Glue team. He is passionate about architecting fast-growing data environments, diving deep into distributed big data software like Apache Spark, building reusable software artifacts for data lakes, and sharing knowledge in AWS Big Data blog posts.
Shengjie Luo is a Big Data Architect on the Amazon Cloud Technology professional services team. They are responsible for solution consulting, architecture, and delivery of AWS-based data warehouses and data lakes. They are skilled in serverless computing, data migration, cloud data integration, data warehouse planning, and data service architecture design and implementation.
Greg Huang is a Senior Solutions Architect at AWS with expertise in technical architecture design and consulting for the China G1000 team. He is dedicated to deploying and using enterprise-level applications on AWS Cloud services. He has nearly 20 years of experience in large-scale enterprise application development and implementation, having worked in the cloud computing field for many years, and has extensive experience helping various types of enterprises migrate to the cloud. Prior to joining AWS, he worked for well-known IT enterprises such as Baidu and Oracle.
Maciej Torbus is a Principal Customer Solutions Manager within Strategic Accounts at Amazon Web Services. With extensive experience in large-scale migrations, he focuses on helping customers move their applications and systems to highly reliable and scalable architectures in AWS. Outside of work, he enjoys sailing, traveling, and restoring vintage mechanical watches.