Thursday, February 29, 2024

Introducing Terraform support for Amazon OpenSearch Ingestion

Today, we're launching Terraform support for Amazon OpenSearch Ingestion. Terraform is an infrastructure as code (IaC) tool that helps you build, deploy, and manage cloud resources efficiently. OpenSearch Ingestion is a fully managed, serverless data collector that delivers real-time log, metric, and trace data to Amazon OpenSearch Service domains and Amazon OpenSearch Serverless collections. In this post, we explain how you can use Terraform to deploy OpenSearch Ingestion pipelines. As an example, we use an HTTP source as input and an Amazon OpenSearch Service domain (index) as output.

Solution overview

The steps in this post deploy a publicly accessible OpenSearch Ingestion pipeline with Terraform, along with the other supporting resources that the pipeline needs to ingest data into Amazon OpenSearch. We have implemented the Tutorial: Ingesting data into a domain using Amazon OpenSearch Ingestion, using Terraform.

We create the following resources with Terraform:

The pipeline that you create exposes an HTTP source as input and an Amazon OpenSearch sink to save batches of events.


Prerequisites

To follow the steps in this post, you need the following:

  • An active AWS account.
  • Terraform installed on your local machine. For more information, see Install Terraform.
  • The necessary IAM permissions required to create the AWS resources using Terraform.
  • awscurl for sending HTTPS requests through the command line with AWS Sigv4 authentication. For instructions on installing this tool, see the GitHub repo.

Create a directory

In Terraform, infrastructure is managed as code, called a project. A Terraform project contains various Terraform configuration files, such as main.tf, provider.tf, variables.tf, and output.tf. Let's create a directory on the server or machine that we can use to connect to AWS services using the AWS Command Line Interface (AWS CLI):

mkdir osis-pipeline-terraform-example

Change to the directory:

cd osis-pipeline-terraform-example

Create the Terraform configuration

Create a file to define the AWS resources.

Enter the following configuration in main.tf and save your file:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.36"
    }
  }

  required_version = ">= 1.2.0"
}

provider "aws" {
  region = "eu-central-1"
}

data "aws_region" "current" {}
data "aws_caller_identity" "current" {}
locals {
  account_id = data.aws_caller_identity.current.account_id
}

output "ingest_endpoint_url" {
  value = tolist(aws_osis_pipeline.example.ingest_endpoint_urls)[0]
}

resource "aws_iam_role" "example" {
  name = "exampleosisrole"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Sid    = ""
        Principal = {
          Service = "osis-pipelines.amazonaws.com"
        }
      },
    ]
  })
}
resource "aws_opensearch_domain" "test" {
  domain_name    = "osi-example-domain"
  engine_version = "OpenSearch_2.7"
  cluster_config {
    instance_type = "r5.large.search"
  }
  encrypt_at_rest {
    enabled = true
  }
  domain_endpoint_options {
    enforce_https       = true
    tls_security_policy = "Policy-Min-TLS-1-2-2019-07"
  }
  node_to_node_encryption {
    enabled = true
  }
  ebs_options {
    ebs_enabled = true
    volume_size = 10
  }
  access_policies = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "${aws_iam_role.example.arn}"
      },
      "Action": "es:*"
    }
  ]
}
EOF
}



resource "aws_iam_policy" "example" {
  name        = "osis_role_policy"
  description = "Policy for OSIS pipeline role"
  policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Action   = ["es:DescribeDomain"]
        Effect   = "Allow"
        Resource = "arn:aws:es:${data.aws_region.current.name}:${local.account_id}:domain/*"
      },
      {
        Action   = ["es:ESHttp*"]
        Effect   = "Allow"
        Resource = "arn:aws:es:${data.aws_region.current.name}:${local.account_id}:domain/osi-test-domain/*"
      }
    ]
  })
}

resource "aws_iam_role_policy_attachment" "example" {
  role       = aws_iam_role.example.name
  policy_arn = aws_iam_policy.example.arn
}

resource "aws_cloudwatch_log_group" "example" {
  name              = "/aws/vendedlogs/OpenSearchIngestion/example-pipeline"
  retention_in_days = 365
  tags = {
    Name = "AWS Blog OSIS Pipeline Example"
  }
}

resource "aws_osis_pipeline" "example" {
  pipeline_name               = "example-pipeline"
  pipeline_configuration_body = <<-EOT
            version: "2"
            example-pipeline:
              source:
                http:
                  path: "/test_ingestion_path"
              processor:
                - date:
                    from_time_received: true
                    destination: "@timestamp"
              sink:
                - opensearch:
                    hosts: ["https://${aws_opensearch_domain.test.endpoint}"]
                    index: "application_logs"
                    aws:
                      sts_role_arn: "${aws_iam_role.example.arn}"
                      region: "${data.aws_region.current.name}"
        EOT
  max_units                   = 1
  min_units                   = 1
  log_publishing_options {
    is_logging_enabled = true
    cloudwatch_log_destination {
      log_group = aws_cloudwatch_log_group.example.name
    }
  }
  tags = {
    Name = "AWS Blog OSIS Pipeline Example"
  }
}
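The date processor in the pipeline body stamps each incoming event with the time it was received (from_time_received: true) before the opensearch sink writes it to the application_logs index. As a rough illustration of that behavior (plain Python, not Data Prepper code; the event fields are hypothetical):

```python
from datetime import datetime, timezone

def date_processor(event, received_at=None):
    """Stand-in for the pipeline's date processor: copy the time the
    event arrived at the HTTP source into the '@timestamp' field."""
    received_at = received_at or datetime.now(timezone.utc)
    event["@timestamp"] = received_at.isoformat()
    return event

# Hypothetical event and arrival time, for illustration only.
event = date_processor({"status": "404"},
                       received_at=datetime(2024, 2, 29, tzinfo=timezone.utc))
print(event["@timestamp"])  # 2024-02-29T00:00:00+00:00
```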

Create the resources

Initialize the directory:

terraform init

Review the plan to see what resources will be created:

terraform plan

Apply the configuration and answer yes to run the plan:

terraform apply

The process might take around 7–10 minutes to complete.

Test the pipeline

After you create the resources, you should see the ingest_endpoint_url output displayed. Copy this value and export it in your environment variable:

export OSIS_PIPELINE_ENDPOINT_URL=<Replace with value copied>

Send a sample log with awscurl. Replace the profile with your appropriate AWS profile for credentials:

awscurl --service osis --region eu-central-1 -X POST -H "Content-Type: application/json" -d '[{"time":"2014-08-11T11:40:13+00:00","remote_addr":"","status":"404","request":"GET http://www.k2proxy.com//hello.html HTTP/1.1","http_user_agent":"Mozilla/4.0 (compatible; WOW64; SLCC2;)"}]' https://$OSIS_PIPELINE_ENDPOINT_URL/test_ingestion_path

You should receive a 200 OK as a response.
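The body that awscurl posts is a JSON array of event objects, so batching more events into one request just means serializing a longer list. A stdlib-only sketch (the build_payload helper is hypothetical, not part of awscurl or OpenSearch Ingestion):

```python
import json

def build_payload(events):
    """Serialize a batch of log events into the JSON array body that the
    pipeline's HTTP source accepts at /test_ingestion_path."""
    return json.dumps(events)

sample_events = [
    {
        "time": "2014-08-11T11:40:13+00:00",
        "remote_addr": "",
        "status": "404",
        "request": "GET http://www.k2proxy.com//hello.html HTTP/1.1",
        "http_user_agent": "Mozilla/4.0 (compatible; WOW64; SLCC2;)",
    },
]

body = build_payload(sample_events)
print(json.loads(body)[0]["status"])  # 404
```

The resulting string is exactly what the -d flag carries in the awscurl command above.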

To verify that the data was ingested into the OpenSearch Ingestion pipeline and stored in OpenSearch, navigate to the OpenSearch domain and get its endpoint. Replace <OPENSEARCH ENDPOINT URL> in the snippet below and run it.

awscurl --service es --region eu-central-1 -X GET https://<OPENSEARCH ENDPOINT URL>/application_logs/_search | json_pp 

You should see output similar to the following:
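The _search call returns the standard OpenSearch response envelope; a short sketch of pulling the hit count and indices out of it (the response body here is a trimmed, hypothetical example, not actual output from the pipeline):

```python
import json

# Hypothetical _search response, trimmed to the envelope fields we need.
response_text = json.dumps({
    "hits": {
        "total": {"value": 1, "relation": "eq"},
        "hits": [
            {
                "_index": "application_logs",
                "_source": {"status": "404", "remote_addr": ""},
            }
        ],
    }
})

response = json.loads(response_text)
hit_count = response["hits"]["total"]["value"]
indices = {hit["_index"] for hit in response["hits"]["hits"]}
print(hit_count, sorted(indices))  # 1 ['application_logs']
```

A non-zero hits.total.value confirms the sample log made it through the pipeline into the application_logs index.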

Clean up

To destroy the resources you created, run the following command and answer yes when prompted:

terraform destroy

The process might take around 30–35 minutes to complete.


Conclusion

In this post, we showed how you can use Terraform to deploy OpenSearch Ingestion pipelines. AWS offers various resources for you to quickly start building pipelines using OpenSearch Ingestion and use Terraform to deploy them. You can use various built-in pipeline integrations to quickly ingest data from Amazon DynamoDB, Amazon Managed Streaming for Apache Kafka (Amazon MSK), Amazon Security Lake, Fluent Bit, and many more. The OpenSearch Ingestion blueprints allow you to build data pipelines with minimal configuration changes and manage them with ease using Terraform. To learn more, check out the Terraform documentation for Amazon OpenSearch Ingestion.

About the Authors

Rahul Sharma is a Technical Account Manager at Amazon Web Services. He is passionate about the data technologies that help leverage data as a strategic asset and is based out of New York City, New York.

Farhan Angullia is a Cloud Application Architect at AWS Professional Services, based in Singapore. He primarily focuses on modern applications with microservice software patterns, and advocates for implementing robust CI/CD practices to optimize the software delivery lifecycle for customers. He enjoys contributing to the open source Terraform ecosystem in his spare time.

Arjun Nambiar is a Product Manager with Amazon OpenSearch Service. He focuses on ingestion technologies that enable ingesting data from a wide variety of sources into Amazon OpenSearch Service at scale. Arjun is interested in large-scale distributed systems and cloud-native technologies and is based out of Seattle, Washington.

Muthu Pitchaimani is a Search Specialist with Amazon OpenSearch Service. He builds large-scale search applications and solutions. Muthu is interested in the topics of networking and security, and is based out of Austin, Texas.
