Posted: 2 days ago | Platform: LinkedIn


Work Mode: On-site

Job Type: Full Time

Job Description

About the Company

Company Profile: ProductSquads was founded with a bold mission: to engineer capital efficiency through autonomous AI agents, exceptional engineering, and real-time decision intelligence. We’re building an AI-native platform that redefines how software teams deliver value—whether through code written by humans, agents, or both. Our stack combines agentic AI systems, ML pipelines, and high-performance engineering workflows. This is your chance to build not just models, but systems that think, decide, and act. We’re developing AI fabric tools, domain-intelligent agents, and real-time decision systems to power the next generation of product delivery.


Job Summary:


We are seeking a skilled and detail-oriented Data Engineer to join our growing team. The ideal candidate will be responsible for designing, building, and maintaining scalable data pipelines and architectures. Experience with big data technologies, cloud platforms, and ETL processes is essential, as you will play a key role in driving data-driven decision-making across the organization.


Responsibilities


  • Design, develop, and maintain scalable ETL/ELT pipelines on AWS cloud infrastructure.
  • Build and manage data lakes and data warehouses using AWS services (e.g., S3, DMS, Glue, Athena, Redshift, EMR, Lambda).
  • Implement data quality checks, monitoring, logging, and alerting for pipelines.
  • Optimize data workflows for performance, scalability, and cost-efficiency.
  • Collaborate with cross-functional teams including Data Science, Operations, and Product teams to integrate and deliver data-driven solutions.
  • Ensure compliance with data governance, security, and privacy standards.
  • Write clear, maintainable documentation and contribute to data engineering best practices.
  • Carry out POCs on new and emerging technologies to meet business requirements.
  • Use AWS Bedrock to integrate LLMs into data workflows and applications.
  • Engineer and optimize LLM prompts for tasks such as data classification, summarization, enrichment, and semantic search.
  • Create API services that allow applications to access data, using Python and AWS technologies (Lambda, API Gateway, etc.).
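To give candidates a concrete picture of the API-service responsibility above, here is a minimal sketch of a Python Lambda handler behind API Gateway. The `fetch_records` helper and its sample data are illustrative placeholders, not part of our stack; in production it might run an Athena query via boto3 instead.

```python
import json

def fetch_records(dataset: str) -> list:
    # Hypothetical stand-in for a real data lookup; a production version
    # might call boto3.client("athena") and poll for query results.
    sample = {"orders": [{"id": 1, "total": 42.0}]}
    return sample.get(dataset, [])

def lambda_handler(event, context=None):
    # API Gateway passes URL query parameters in this field of the event.
    params = (event or {}).get("queryStringParameters") or {}
    dataset = params.get("dataset", "")
    records = fetch_records(dataset)
    if not records:
        return {"statusCode": 404,
                "body": json.dumps({"error": "unknown dataset"})}
    return {"statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"dataset": dataset, "records": records})}
```

Invoked locally, `lambda_handler({"queryStringParameters": {"dataset": "orders"}})` returns a 200 response whose body carries the matching records, and an unknown dataset yields a 404.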



Qualifications


  • A minimum of five years of hands-on experience with AWS cloud technologies and data engineering.



Required Skills


  • Strong expertise in AWS data services such as S3, Glue, Redshift, Athena, Lambda, and Kinesis.
  • Proficient in Python, SQL, and distributed data processing frameworks (e.g., PySpark).
  • Experience integrating LLMs via APIs (e.g., OpenAI, AWS Bedrock, etc.).
  • Experience with CI/CD pipelines, Infrastructure as Code (e.g., Terraform), and version control (e.g., Git).
  • Understanding of data privacy, security, and compliance in AI systems.
  • Excellent problem-solving skills, communication, and collaboration abilities.
  • Familiarity with AI-related technologies, such as foundational LLMs and associated frameworks and tools (e.g., LangChain, Hugging Face, Bedrock).
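As an illustration of the prompt-engineering skills listed above, a deterministic prompt builder for an LLM data-classification task might look like this. The template wording and field layout are invented for illustration, not a tested production template.

```python
def build_classification_prompt(record: dict, labels: list) -> str:
    # Render the record's fields in a stable, sorted order so the same
    # input always produces the same prompt.
    fields = "\n".join(f"- {k}: {v}" for k, v in sorted(record.items()))
    return (
        "Classify the following record into exactly one of these labels: "
        + ", ".join(labels) + ".\n"
        + fields + "\n"
        "Answer with the label only."
    )
```

The resulting string would be sent to an LLM endpoint (e.g., via the Bedrock or OpenAI APIs) and the single-label reply parsed back into the pipeline.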



