Automated at Scale: How Avahi Built a Dual-Modality Video Content Moderation Pipeline for Amara Social

Client

Amara Social

Location

Nashua, New Hampshire

Industry

Social Media Management Platform

Services & Tech

Amazon S3 AWS Lambda Amazon Transcribe Amazon Bedrock (Nova Pro) Amazon Rekognition Amazon DynamoDB Amazon API Gateway Amazon CloudWatch AWS IAM

Project Overview

Amara Social is a social media management platform that helps brands and creators automate content publishing, track engagement, and gain performance insights across channels. As user-generated content volume grew rapidly, Amara Social faced a critical gap: no automated way to detect and remove unsafe or policy-violating video content. Avahi designed and delivered a fully serverless, AWS-native content moderation pipeline that analyzes both the audio and visual dimensions of every uploaded video simultaneously. The result was a scalable, accurate moderation system that achieved zero false positives during testing, giving Amara Social a clear, evidence-based path from manual review to automated moderation at scale.

About The Customer

Amara Social is a social media management platform based in Nashua, New Hampshire, that empowers brands and content creators to scale their social presence through automated content publishing, engagement tracking, and cross-platform performance analytics. As Amara Social’s user base and content volume grew, so did the complexity of keeping their platform safe, compliant, and trustworthy for the creators and audiences who depend on it.

The Problem

As Amara Social’s platform scaled, one critical gap became impossible to ignore: the absence of any automated process for detecting and removing harmful, unsafe, or policy-violating content uploaded by users. Manual review was the only moderation mechanism in place, and manual review cannot keep pace with the velocity of user-generated content at scale.

The risks of inaction were significant and multidimensional. Harmful content, ranging from explicit visuals to threatening or discriminatory language in audio, could reach audiences before any human reviewer had a chance to act. This created real exposure across three dimensions: user trust, community guideline compliance, and regulatory risk. Advertisers and platform partners increasingly require demonstrable content safety standards, and a single high-profile moderation failure can cause lasting reputational damage.

The challenge was further complicated by the dual nature of video content. Visual and audio dimensions require entirely different analysis approaches, and no single tool addresses both. Building a solution that could operate across both modalities, at scale, in real time, without generating false positives that would incorrectly penalize legitimate creators, was a non-trivial engineering problem that required both AI expertise and careful architectural design.

Why AWS

AWS offered the only cloud ecosystem with mature, production-ready AI services spanning all three required capabilities: audio transcription, large language model inference, and computer vision, each available as a managed service that could be composed into a single automated pipeline. Rather than building or fine-tuning custom models, Amara Social could leverage AWS-native services that are continuously improved and maintained by Amazon, reducing long-term operational burden.

The serverless nature of AWS Lambda, combined with S3 event-driven triggers and the scalability of Amazon Transcribe, Bedrock, and Rekognition, meant the architecture could scale elastically with content volume, which is critical for a platform where upload activity is unpredictable. Amazon DynamoDB and CloudWatch rounded out the stack with structured result storage and built-in observability, providing a fully integrated, cost-transparent solution without the overhead of managing infrastructure.

Why Amara Social Chose Avahi

Avahi is a premier-tier AWS partner with deep expertise in designing and delivering cloud-native AI pipelines for real-world production requirements. Amara Social needed more than a vendor who could stand up AWS services; they needed a partner who could architect a multi-service AI solution, engineer nuanced prompt logic for a large language model, and deliver a system accurate enough to serve as the foundation for a full production deployment.

Avahi brought a structured, risk-aware approach to the engagement. Before writing a line of code, Avahi conducted a model assessment to identify Amazon Bedrock Nova Pro as the best-fit LLM for transcript moderation against Amara Social’s six target content categories. Avahi also built a formal fallback strategy into the statement of work: if Bedrock results proved inconsistent, Amazon Comprehend would be substituted. This level of delivery risk engineering directly protected Amara Social’s investment and ensured the project would succeed regardless of LLM behavior.

Solution

Avahi built a fully serverless, event-driven video content moderation pipeline on AWS that analyzes both the visual and audio dimensions of user-generated video content automatically and simultaneously. The workflow is triggered the moment Amara Social uploads a video to a designated Amazon S3 bucket: an S3 event notification fires instantly, invoking the AWS Lambda function that orchestrates the entire pipeline from that point forward.
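In outline, that trigger wiring can be sketched as a small Lambda handler. This is a minimal illustration, not Amara Social’s actual code: the bucket layout, job-naming scheme, and `mp4` media format here are assumptions.

```python
import json
import urllib.parse


def parse_s3_event(event: dict) -> tuple:
    """Pull the bucket and object key out of an S3 ObjectCreated event.

    S3 URL-encodes object keys in event payloads, so decode before use.
    """
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = urllib.parse.unquote_plus(record["object"]["key"])
    return bucket, key


def lambda_handler(event, context):
    """Orchestrator entry point: fires once per uploaded video."""
    import boto3  # AWS SDK, bundled in the Lambda Python runtime

    bucket, key = parse_s3_event(event)
    job_name = key.replace("/", "-")  # Transcribe job names must be unique
    boto3.client("transcribe").start_transcription_job(
        TranscriptionJobName=job_name,
        Media={"MediaFileUri": f"s3://{bucket}/{key}"},
        MediaFormat="mp4",        # assumed upload format
        LanguageCode="en-US",     # pipeline produces English transcripts
    )
    return {"statusCode": 200, "body": json.dumps({"job": job_name})}
```

From here the same handler fans out to the audio and visual branches described below.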

For audio moderation, Lambda invokes Amazon Transcribe to convert the video’s audio track into a structured English-language transcript. That transcript is then passed to Amazon Bedrock (Nova Pro) with a custom-engineered prompt template designed specifically for Amara Social’s six harmful content categories: profanity and offensive language, hate speech, discriminatory remarks, death and violence threats, self-harm expressions, and vague descriptive threats. Bedrock returns a structured moderation result that includes flagged phrases, their timestamps, content categories, confidence scores, and plain-language explanations, enabling reviewers to understand not just what was flagged, but why.
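A sketch of how that transcript step might be wired through Bedrock’s Converse API follows. The prompt wording and the `amazon.nova-pro-v1:0` model ID are illustrative assumptions; the production prompt template is more elaborate than this.

```python
import json

# The six target categories from Amara Social's moderation policy.
CATEGORIES = [
    "profanity and offensive language",
    "hate speech",
    "discriminatory remarks",
    "death and violence threats",
    "self-harm expressions",
    "vague descriptive threats",
]


def build_moderation_prompt(transcript: str) -> str:
    """Assemble a structured moderation prompt that asks for JSON output."""
    bullets = "\n".join(f"- {c}" for c in CATEGORIES)
    return (
        "You are a content moderation reviewer. Check the transcript below "
        f"against these categories:\n{bullets}\n\n"
        "Respond with JSON only, as a list of flags, each with: phrase, "
        "timestamp, category, confidence, explanation.\n\n"
        f"Transcript:\n{transcript}"
    )


def moderate_transcript(transcript: str,
                        model_id: str = "amazon.nova-pro-v1:0") -> dict:
    """Send the transcript to Nova Pro and parse its structured verdict."""
    import boto3  # AWS SDK, bundled in the Lambda Python runtime

    response = boto3.client("bedrock-runtime").converse(
        modelId=model_id,
        messages=[{"role": "user",
                   "content": [{"text": build_moderation_prompt(transcript)}]}],
        inferenceConfig={"temperature": 0.0},  # deterministic for moderation
    )
    return json.loads(response["output"]["message"]["content"][0]["text"])
```

Keeping the category list in one constant means the prompt and any downstream aggregation logic cannot drift out of sync.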

The custom Bedrock prompt template was the most strategically significant technical decision in the project. Traditional NLP classification models cannot reliably detect contextually nuanced harmful content; vague threats and self-harm language that avoid explicit keywords are routinely missed by rule-based systems. By engineering a structured prompt for a generative LLM, Avahi delivered a text moderation capability that operates at the level of contextual understanding, with no model training, no labeled datasets, and no additional infrastructure, all within the project timeline at inference-only cost.

For visual moderation, Lambda extracts video frames at defined intervals and sends each frame to Amazon Rekognition for computer vision analysis. Rekognition evaluates each frame for nudity and sexual content, weapons and violence, and offensive symbols, returning frame-level flags with confidence scores.
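The visual branch can be sketched as three small steps: pick sampling offsets, extract a frame at each, and send it to Rekognition. The 5-second interval, the use of ffmpeg (e.g. via a Lambda layer), and the 60% confidence floor are illustrative assumptions.

```python
def frame_offsets(duration_s: float, interval_s: float = 5.0) -> list:
    """Timestamps (seconds) at which frames are sampled from the video."""
    offsets, t = [], 0.0
    while t < duration_s:
        offsets.append(t)
        t += interval_s
    return offsets


def extract_frame(video_path: str, offset_s: float, out_path: str) -> None:
    """Grab one JPEG frame at the given offset with ffmpeg (assumed to be
    available to the Lambda function, e.g. via a layer)."""
    import subprocess

    subprocess.run(
        ["ffmpeg", "-ss", str(offset_s), "-i", video_path,
         "-frames:v", "1", "-q:v", "2", "-y", out_path],
        check=True,
    )


def moderate_frame(image_path: str, min_confidence: float = 60.0) -> list:
    """Run one frame through Rekognition's image moderation model."""
    import boto3  # AWS SDK, bundled in the Lambda Python runtime

    with open(image_path, "rb") as f:
        result = boto3.client("rekognition").detect_moderation_labels(
            Image={"Bytes": f.read()}, MinConfidence=min_confidence
        )
    return [{"label": lab["Name"], "confidence": lab["Confidence"]}
            for lab in result["ModerationLabels"]]
```

A shorter interval raises coverage but multiplies Rekognition cost linearly, so the sampling rate is the main cost/recall dial in this branch.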

All moderation results (audio flags, visual frame flags, and aggregated category-level findings) are written to Amazon DynamoDB as a structured JSON record per video, with raw transcripts and frame images stored in S3. Amazon CloudWatch captures logs, metrics, and token consumption data throughout the pipeline, providing full operational visibility and cost transparency. The complete per-video moderation report is accessible programmatically via a REST API endpoint built on Amazon API Gateway, enabling Amara Social to integrate moderation results directly into their platform workflows.
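The aggregation and storage step might look roughly like the sketch below. The record shape, the `video_id` partition key, and the `moderation-results` table name are assumptions for illustration.

```python
def build_moderation_record(video_id: str, audio_flags: list,
                            frame_flags: list) -> dict:
    """Aggregate audio and visual findings into one per-video record."""
    categories = sorted({f["category"] for f in audio_flags}
                        | {f["label"] for f in frame_flags})
    return {
        "video_id": video_id,          # partition key
        "audio_flags": audio_flags,    # phrases, timestamps, explanations
        "frame_flags": frame_flags,    # Rekognition labels per sampled frame
        "flagged_categories": categories,
        "is_flagged": bool(categories),
    }


def save_record(record: dict, table_name: str = "moderation-results") -> None:
    """Persist the record. Note: DynamoDB rejects Python floats, so any
    confidence scores must be converted to Decimal before this call."""
    import boto3  # AWS SDK, bundled in the Lambda Python runtime

    boto3.resource("dynamodb").Table(table_name).put_item(Item=record)
```

A top-level `is_flagged` boolean lets the API Gateway endpoint answer the common "is this video safe?" query without the client parsing the full flag lists.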

Key Deliverables

  • Fully functional audio transcription pipeline via Amazon Transcribe with results stored in Amazon DynamoDB
  • Transcript moderation via Amazon Bedrock Nova Pro with a custom prompt template targeting six harmful content categories
  • Visual frame sampling and analysis via Amazon Rekognition covering nudity, weapons, violence, and offensive symbols
  • AWS Lambda orchestration connecting all three AI services in a single automated workflow
  • Error handling for missing transcripts, failed Rekognition calls, and unsupported video formats
  • Cost estimation and monitoring via Amazon CloudWatch metrics
  • Structured moderation results stored in DynamoDB (encrypted at rest) with raw transcripts and frame images in S3
  • Public demo REST API endpoint (Amazon API Gateway) for live pipeline testing
  • Final handover report including architecture diagram, input/output documentation, cost results, and pipeline performance summary
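One of the deliverables above, handling of unsupported video formats, can be sketched as a validation gate at the front of the pipeline. The accepted extensions listed here are illustrative assumptions, not the project’s actual format matrix.

```python
SUPPORTED_EXTENSIONS = {"mp4", "mov", "webm"}  # assumed accepted formats


class UnsupportedFormatError(Exception):
    """Raised before any AI service is invoked, so rejections are cheap."""


def validate_format(key: str) -> str:
    """Return the lowercased extension of an S3 object key, or raise."""
    ext = key.rsplit(".", 1)[-1].lower()
    if "." not in key or ext not in SUPPORTED_EXTENSIONS:
        raise UnsupportedFormatError(f"unsupported video format: {key}")
    return ext
```

Failing fast here keeps malformed uploads from consuming Transcribe, Bedrock, or Rekognition spend.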

Project Impact

Avahi delivered a production-grade, dual-modality content moderation pipeline that gives Amara Social the automated, scalable foundation needed to protect their platform as content volume grows. The pipeline covers 10 harmful content categories across both audio and visual dimensions, processing every uploaded video automatically without any manual intervention required to initiate analysis.

Most critically, testing validated the accuracy of the Bedrock-powered transcript moderation approach with a result that directly addresses the highest-stakes risk in content moderation:

  • Zero false positives recorded during testing – no legitimate creator content was incorrectly flagged
  • 10 harmful content categories detected across audio and visual modalities in a single automated pipeline
  • Three AWS AI services (Transcribe, Bedrock, Rekognition) unified under Lambda orchestration into one serverless workflow
  • Inference-only cost model for LLM-based transcript moderation – no model training or labeled dataset investment required
  • Formal fallback strategy built into delivery – mitigated LLM execution risk and protected the client’s investment

Ready to Transform Your Business with AI?

Book Your Free Ignition AI Workshop

Let’s explore your high-impact AI opportunities together in a complimentary half-day session

View Our Case Studies

See how we’ve delivered measurable results for businesses like yours