14 Job openings at Aptus Data Labs
Head of Data Management

India

18 years

Not disclosed

Remote

Full Time

🔹 Job Title: Head Data Platform
📍 Location: Remote (100%)
📅 Experience: 12–18+ Years
📄 Employment Type: Full-Time

🚀 About the Role
Aptus Data Labs is looking for an exceptional Data Platform Lead to architect and lead the development of our enterprise-grade, cloud-native data platform. You'll own the data foundation that fuels our enterprise AI strategy, partnering closely with the AI Lead and reporting directly to the CIO. If you're a hands-on leader who thrives on building scalable data lake/lakehouse platforms, enabling high-performance analytics, and driving governance in AWS environments, this role is for you.

🎯 Key Responsibilities

🏗️ Data Platform Architecture & Engineering
- Lead the design and implementation of robust, scalable data lake and lakehouse solutions using AWS services (Redshift, Glue, S3, Lake Formation, IAM).
- Architect and optimize secure, reusable ETL/ELT pipelines using Glue, Python, Databricks, and other modern data tools.
- Oversee seamless integration of the data platform with enterprise applications, upstream sources, and AI/ML pipelines.
- Enable platform readiness for diverse data types: structured, semi-structured, and unstructured.

🔐 Data Governance & Security
- Define and enforce data governance practices, including data lineage, classification, quality, access policies, and stewardship.
- Ensure platform security and regulatory compliance (e.g., GxP, HIPAA) with audit-ready controls, in collaboration with InfoSec.
- Manage IAM roles, encryption, logging, and monitoring for platform reliability and integrity.

🤝 Leadership & Collaboration
- Work directly with the CIO and AI Lead to align data platform initiatives with business and AI strategy.
- Mentor and guide data engineers and platform developers; foster best practices in design, DevOps, and reusability.
- Act as the go-to technical leader for platform architecture across business, analytics, and IT teams.

⚙️ Innovation & Optimization
- Evaluate and integrate emerging data tools (Databricks, OpenSearch, AWS DataZone, etc.) to improve performance, cost-efficiency, and scalability.
- Drive CI/CD enablement using GitHub Actions, Jenkins, or AWS-native pipelines for faster, more stable deployments.
- Create observability frameworks to monitor data pipeline health, detect anomalies, and ensure SLA adherence.

✅ Required Qualifications
- Bachelor's or Master's in Computer Science, Engineering, or a related field.
- 12–18 years of experience in data platform, data engineering, or architecture leadership roles.
- Hands-on expertise in AWS services: Redshift, Glue, Lake Formation, S3, IAM, CloudWatch, Kinesis.
- Strong experience in Python, SQL, and orchestration tools (Airflow, Step Functions).
- Familiarity with Databricks, Spark, and distributed computing.
- Proven track record in data governance, metadata management, and regulatory alignment.
- Experience with Git-based CI/CD and containerized environments.

🌟 Preferred Skills
- Background in regulated domains such as Pharmaceuticals or Life Sciences (GxP, HIPAA).
- Exposure to enterprise integration platforms like Boomi.
- Experience with BI tool integration (Power BI, QuickSight).
- Strong communication and stakeholder-alignment capabilities.

🔗 Apply now and become the architect of data innovation at Aptus Data Labs. Help us empower global enterprises through the intersection of data, AI, and cloud.
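Much of the lake/lakehouse architecture work described above comes down to concrete conventions such as partition layout. As an illustrative sketch only (the bucket, dataset, and file names here are invented, not from the posting), a Hive-style date-partitioned S3 key builder of the kind Glue and Athena can discover as table partitions:

```python
from datetime import date

def s3_partition_key(bucket: str, dataset: str, d: date, file_name: str) -> str:
    """Build a Hive-style partitioned S3 key (year=/month=/day=), a common
    layout that AWS Glue crawlers and Athena recognize as table partitions."""
    return (
        f"s3://{bucket}/{dataset}/"
        f"year={d.year:04d}/month={d.month:02d}/day={d.day:02d}/{file_name}"
    )

key = s3_partition_key("analytics-lake", "orders", date(2025, 7, 1), "part-0001.parquet")
print(key)  # s3://analytics-lake/orders/year=2025/month=07/day=01/part-0001.parquet
```

Partitioning on query-filter columns like date is what makes the "high-performance queries" requirement achievable: engines can prune whole partitions instead of scanning the full dataset.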

AI Fullstack Developer

Hubli, Mangaluru, Mysuru, Bengaluru, Belgaum

4 - 9 years

INR 6.0 - 11.0 Lacs P.A.

Work from Office

Full Time

About the Role:
We are seeking a highly skilled AI Full-Stack Developer to join our team. The ideal candidate will be responsible for designing, developing, and deploying AI-driven applications. This role requires expertise in both front-end and back-end technologies, as well as experience integrating AI/ML models into scalable web applications.

Key Responsibilities:
AI/ML Development:
- Develop and integrate AI/ML models into web applications.
- Optimize model performance and deployment.
- Work with frameworks like TensorFlow, PyTorch, or OpenAI APIs.
Front-End Development:
- Build responsive and interactive UI/UX using React.js, Angular, or Vue.js.
- Optimize applications for speed and scalability.
Back-End Development:
- Develop robust RESTful or GraphQL APIs.
- Work with Node.js and Python (Django/Flask/FastAPI).
- Manage databases (SQL/NoSQL) and cloud services.
DevOps & Deployment:
- Deploy AI models and applications on cloud platforms (AWS, GCP, Azure).
- Work with Docker, Kubernetes, and CI/CD pipelines.
Collaboration & Innovation:
- Work closely with data scientists, designers, and product teams.
- Stay updated on AI advancements and industry best practices.

Requirements:
Education & Experience:
- Bachelor's/Master's degree in Computer Science, AI, or a related field.
- 4+ years of experience in full-stack development with AI integration.
Technical Skills:
- Proficiency in Python, JavaScript/TypeScript, and relevant frameworks.
- Experience with AI/ML frameworks (TensorFlow, PyTorch, OpenAI API).
- Strong knowledge of databases (PostgreSQL, MongoDB, Firebase).
- Experience with cloud computing (AWS Lambda, Google Cloud Functions).
- Proficiency in Docker, Kubernetes, and CI/CD pipelines.
Soft Skills:
- Strong problem-solving abilities and analytical thinking.
- Excellent communication and teamwork skills.
Must Have:
- Experience with LLMs (Large Language Models) and NLP.
- Familiarity with MLOps practices.
- Knowledge of Web3 or blockchain technologies.

Benefits: Why Join Us?
- Work with cutting-edge technologies and a talented team.
- Competitive salary and benefits package.
- Flexible work environment with growth opportunities.
- Access to professional development and learning resources.
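The core of the role above is putting an AI/ML model behind a web API. As a minimal, framework-free sketch (the model call is a stub; a real route body in Flask or FastAPI would have this same JSON-in/JSON-out shape), hedged as an illustration rather than a prescribed design:

```python
import json
from typing import Callable

def make_predict_handler(model: Callable[[str], str]) -> Callable[[str], str]:
    """Wrap a model callable as a JSON-in/JSON-out handler, the shape a web
    framework route body would take. Malformed requests become error payloads
    rather than unhandled exceptions."""
    def handler(request_body: str) -> str:
        try:
            payload = json.loads(request_body)
            text = payload["text"]
        except (json.JSONDecodeError, KeyError):
            return json.dumps({"error": "expected JSON body with a 'text' field"})
        return json.dumps({"prediction": model(text)})
    return handler

# Stub standing in for a TensorFlow/PyTorch/OpenAI model call.
def toy_sentiment(text: str) -> str:
    return "positive" if "good" in text.lower() else "neutral"

handler = make_predict_handler(toy_sentiment)
print(handler('{"text": "A good result"}'))  # {"prediction": "positive"}
```

Keeping the model behind a plain callable like this is also what makes swapping TensorFlow for an OpenAI API call (or a mock in tests) a one-line change.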

AI Software Engineer

Hubli, Mangaluru, Mysuru, Bengaluru, Belgaum

2 - 3 years

INR 4.0 - 5.0 Lacs P.A.

Work from Office

Full Time

AI Software Engineer
Location: Bangalore
Job Type: Full Time
Experience Level: Senior

About the Role:
We are seeking a talented AI Software Engineer to join our team and help develop innovative AI-driven solutions. You will be responsible for designing, implementing, and optimizing AI models, integrating them into software applications, and working closely with data scientists and engineers to build scalable AI-powered products.

Key Responsibilities:
- Design, develop, and deploy AI and machine learning models for real-world applications.
- Work with large datasets to train and fine-tune models, ensuring high accuracy and efficiency.
- Build and optimize deep learning architectures using frameworks like TensorFlow, PyTorch, or Scikit-learn.
- Develop and maintain AI-driven software applications, integrating models into production environments.
- Collaborate with cross-functional teams, including data scientists, software engineers, and product managers.
- Optimize AI models for performance, scalability, and deployment on cloud platforms (AWS, Azure, GCP).
- Implement MLOps best practices for model versioning, deployment, and monitoring.
- Stay up to date with the latest AI research, tools, and techniques to continuously improve AI capabilities.

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, AI, Machine Learning, or a related field.
- Strong programming skills in Python and Java.
- Experience with AI/ML frameworks such as TensorFlow, PyTorch, Scikit-learn, or Keras.
- Knowledge of NLP, Computer Vision, or Reinforcement Learning is a plus.
- Hands-on experience with cloud computing (AWS, GCP, Azure) for AI model deployment.
- Familiarity with data structures, algorithms, and software development best practices.
- Experience with containerization (Docker, Kubernetes) and APIs for model integration.
- Strong problem-solving skills and the ability to work in a fast-paced environment.
- Experience with LLMs (Large Language Models), Generative AI, or AI-powered automation.
- Understanding of AI ethics, bias mitigation, and responsible AI practices.
- Knowledge of big data technologies like Hadoop, Spark, or Kafka.

Benefits:
- Work with cutting-edge technologies and a talented team.
- Competitive salary and benefits package.
- Flexible work environment with growth opportunities.
- Access to professional development and learning resources.
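The "train and fine-tune models" responsibility above always reduces to the same optimization loop: forward pass, loss, gradients, parameter update. A dependency-free sketch of that loop, fitting a toy linear model by gradient descent (the data and learning rate are invented for illustration; TensorFlow/PyTorch training runs follow the identical structure at scale):

```python
def train_linear(xs, ys, lr=0.05, epochs=500):
    """Fit y = w*x + b by gradient descent on mean squared error:
    the same forward-pass / loss / gradient / update cycle that
    deep learning framework training loops execute."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradients of MSE with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

w, b = train_linear([0, 1, 2, 3], [1, 3, 5, 7])  # data generated from y = 2x + 1
print(round(w, 2), round(b, 2))  # approximately 2.0 1.0
```

Fine-tuning a pretrained model differs only in where the parameters start, not in the loop itself.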

Data Architect

Hubli, Mangaluru, Mysuru, Bengaluru, Belgaum

6 - 9 years

INR 8.0 - 11.0 Lacs P.A.

Work from Office

Full Time

Data Architect
Location: Bangalore
Job Type: Full Time
Experience Level: Mid-Senior

About the Role
We are looking for an experienced Data Architect to design and implement scalable, high-performance data architectures. You will be responsible for defining data models, optimizing data flows, and ensuring data governance across the organization. This role requires a deep understanding of database design, cloud data platforms, and big data technologies to support analytics and business intelligence.

Key Responsibilities
- Design and implement scalable data architectures that meet business and technical requirements.
- Develop data models, schemas, and metadata management strategies for structured and unstructured data.
- Define and enforce data governance, security, and compliance best practices.
- Work closely with data engineers, data scientists, and business teams to ensure efficient data pipelines and workflows.
- Optimize data storage and retrieval strategies for performance and cost-effectiveness.
- Architect cloud-based and on-premise data solutions (AWS, Azure, Google Cloud, etc.).
- Implement data integration and ETL/ELT processes using modern data platforms.
- Evaluate and recommend data management technologies, tools, and frameworks.
- Ensure data quality and integrity through robust data validation, monitoring, and error-handling mechanisms.
- Support advanced analytics, AI/ML workloads, and real-time data processing initiatives.

Required Skills & Qualifications
- Bachelor's/Master's degree in Computer Science, Data Science, or a related field.
- Extensive experience with relational and NoSQL databases (PostgreSQL, MySQL, MongoDB, Cassandra, etc.).
- Expertise in data modeling, data warehousing, and data lake architectures.
- Strong knowledge of big data technologies (Hadoop, Spark, Kafka, Snowflake, Redshift, BigQuery, etc.).
- Experience with ETL/ELT tools (Informatica, Talend, DBT, Apache NiFi, etc.).
- Hands-on experience with cloud data platforms (AWS, Azure, GCP) and their data services.
- Proficiency in SQL, Python, or Scala for data processing and scripting.
- Understanding of data security, privacy, and regulatory requirements (GDPR, HIPAA, CCPA, etc.).
- Strong analytical, problem-solving, and leadership skills.
- Awareness of data governance practices.

Preferred Qualifications
- Certifications in cloud data services (AWS Certified Data Analytics, Google Professional Data Engineer, etc.).
- Experience with real-time data processing and streaming (Kafka, Flink, Spark Streaming).
- Familiarity with machine learning and AI-driven data architectures.
- Knowledge of CI/CD and DevOps for data workflows.

Benefits
- Work with cutting-edge technologies and a talented team.
- Competitive salary and benefits package.
- Flexible work environment with growth opportunities.
- Access to professional development and learning resources.
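The "data quality and integrity through robust data validation" responsibility above usually starts as a rule-based gate that splits incoming rows into accepted and rejected sets before loading. A small illustrative sketch (the field names and rules are hypothetical, not from the posting):

```python
def validate_rows(rows, required, checks):
    """Split rows into (valid, rejected) using required-field presence and
    per-field predicate checks: the core of a rule-based data quality gate
    a pipeline runs before loading to the warehouse. Rejected rows carry
    an _errors field naming the failing columns for later triage."""
    valid, rejected = [], []
    for row in rows:
        missing = [f for f in required if row.get(f) in (None, "")]
        failed = [f for f, check in checks.items()
                  if f in row and row[f] is not None and not check(row[f])]
        if missing or failed:
            rejected.append({**row, "_errors": missing + failed})
        else:
            valid.append(row)
    return valid, rejected

rows = [
    {"id": 1, "email": "a@example.com", "age": 34},
    {"id": 2, "email": "", "age": 29},               # missing required email
    {"id": 3, "email": "c@example.com", "age": -5},  # fails the age predicate
]
valid, rejected = validate_rows(rows, required=["id", "email"],
                                checks={"age": lambda a: 0 <= a < 130})
print(len(valid), len(rejected))  # 1 2
```

Routing rejects to a quarantine table rather than dropping them silently is what keeps the monitoring and error-handling loop auditable.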

Master Data Specialist - Reltio

India

8 years

Not disclosed

Remote

Full Time

Role: Senior Reltio MDM Developer
Experience Required: 5–8 years
Location: Remote

Job Summary:
We are looking for a Reltio MDM expert with 5–8 years of overall experience in MDM and at least 3+ years of hands-on expertise in Reltio. The ideal candidate will have deep knowledge of data modeling, data integration, and master data governance, and will be responsible for delivering scalable and efficient MDM solutions on the Reltio cloud-native platform.

Key Responsibilities:
· Lead end-to-end implementation and configuration of Reltio MDM solutions for master data domains (Customer, Product, Vendor, etc.).
· Design and configure complex data models, survivorship rules, match/merge logic, and workflow processes in Reltio.
· Develop and manage integration of Reltio with enterprise systems (ERP, CRM, Data Lake, etc.) using REST APIs, ETL tools, or middleware.
· Design scalable and secure data pipelines for ingesting, cleansing, and enriching master data.
· Collaborate with data stewards, analysts, architects, and business stakeholders to ensure effective data governance and quality.
· Monitor and optimize the performance and health of the Reltio environment.
· Support production deployments, incident resolution, and ongoing enhancements.
· Document technical designs, data flows, and configurations for future reference.

Required Skills and Experience:
· 5–8 years of overall experience in Data Management / MDM implementations.
· Minimum 3 years of recent hands-on experience with the Reltio MDM platform.
· Strong understanding of MDM concepts such as hierarchy management, relationship management, survivorship, match/merge logic, data lineage, and stewardship.
· Proficient in Reltio L3 configuration, UI Modeler, Graph model design, and Reltio workflows.
· Experience with Reltio APIs, JSON, RESTful services, and integration patterns.
· Strong SQL skills and experience with data profiling, cleansing, and enrichment techniques.
· Hands-on experience integrating Reltio with enterprise systems (SAP, Salesforce, Workday, etc.) or cloud services.
· Good understanding of cloud infrastructure (AWS/GCP/Azure).

Preferred Qualifications:
· Reltio Certified Developer or Architect.
· Exposure to other MDM tools (Informatica MDM, IBM MDM, etc.) is a plus.
· Knowledge of data governance tools or frameworks.
· Strong communication and stakeholder management skills.
· Experience working in Agile/Scrum environments.

Nice to Have:
· Experience with Reltio Connected Data Platform features like Data Loader, Reltio IQ, or Reltio Integration Hub.
· Familiarity with industry-specific data domains and compliance requirements (e.g., HIPAA, GDPR).

Email: staffing@aptusdatalabs.com / amita.kachhap@aptusdatalabs.com
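Survivorship (which source value "wins" when duplicate records merge) is the heart of the match/merge work this role configures. As a deliberately simplified pure-Python sketch of a "most recent non-null value wins" rule (an illustration of the concept, not Reltio's actual merge engine or configuration syntax):

```python
def merge_records(records):
    """Merge duplicate master-data records attribute by attribute using a
    'most recent non-null value wins' survivorship rule: a simplified
    stand-in for the survivorship configuration an MDM platform applies
    when collapsing matched records into a golden record."""
    merged = {}
    # Oldest first, so later (more recent) records overwrite earlier values.
    for record in sorted(records, key=lambda r: r["updated_at"]):
        for field, value in record.items():
            if field != "updated_at" and value not in (None, ""):
                merged[field] = value
    return merged

golden = merge_records([
    {"name": "ACME Corp", "phone": None, "updated_at": "2024-01-10"},
    {"name": "ACME Corporation", "phone": "555-0100", "updated_at": "2024-06-02"},
    {"name": "", "phone": "555-0199", "updated_at": "2024-03-15"},
])
print(golden)  # {'name': 'ACME Corporation', 'phone': '555-0100'}
```

Real platforms let survivorship vary per attribute (e.g., most-recent for phone, most-trusted-source for legal name); the per-field loop above is where that policy would plug in.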

Boomi - Application Integration Specialist

India

8 years

Not disclosed

Remote

Full Time

Role: Boomi Application Integration Developer
Experience: 4–8 years
Location: Remote
Job Type: Full-time

Job Description:
We are looking for an experienced Boomi Integration Developer to design, build, and manage integration solutions using the Dell Boomi platform. The ideal candidate will have hands-on experience with application integration, middleware, and API management using Boomi.

Key Responsibilities:
· Design and develop integration processes using Boomi AtomSphere.
· Integrate cloud and on-premise applications using Boomi connectors.
· Develop custom connectors and implement API-led integrations.
· Troubleshoot and maintain existing integrations.
· Manage deployment and lifecycle of Atoms, Molecules, and APIs.
· Work with cross-functional teams to understand business requirements and convert them into scalable solutions.

Required Skills:
· 3–5+ years of hands-on experience with Dell Boomi.
· Strong understanding of integration patterns, ETL processes, API development, and web services (REST/SOAP).
· Experience with Boomi AtomSphere, Boomi Process Reporting, and API Management.
· Proficiency with XML, JSON, XSLT, Groovy, and JavaScript in Boomi processes.
· Knowledge of error handling, logging, and retries in integration scenarios.
· Familiarity with data protocols: FTP/SFTP, HTTP, JDBC, JMS, etc.

Preferred Qualifications:
· Experience integrating with Salesforce, SAP, Workday, NetSuite, or Oracle.
· Exposure to CI/CD pipelines and version control systems (like Git).
· Understanding of data security, governance, and compliance.
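The "error handling, logging, and retries in integration scenarios" requirement above typically means exponential backoff around flaky endpoint calls. A generic, platform-neutral sketch (this is not Boomi's retry mechanism; the endpoint here is simulated):

```python
import time

def call_with_retries(operation, max_attempts=4, base_delay=0.05):
    """Run an integration call with exponential backoff between attempts:
    the retry discipline integration processes apply to transient failures.
    After the final attempt, the exception propagates to the error path."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except ConnectionError as exc:
            if attempt == max_attempts:
                raise  # exhausted retries: surface to error handling/alerting
            delay = base_delay * 2 ** (attempt - 1)
            print(f"attempt {attempt} failed ({exc}); retrying in {delay:.2f}s")
            time.sleep(delay)

# Simulated flaky endpoint: fails twice with a timeout, then succeeds.
state = {"calls": 0}
def flaky_endpoint():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("timeout")
    return {"status": 200}

result = call_with_retries(flaky_endpoint)
print(result, state["calls"])  # {'status': 200} 3
```

Doubling the delay each attempt gives a struggling downstream system room to recover instead of hammering it, which is why the pattern appears in most integration platforms' retry settings.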

Aws Devops and Infra Lead

Bengaluru, Karnataka

0 - 7 years

Not disclosed

Remote

Full Time

Job Information
Number of Positions: 1
Date Opened: 06/20/2025
Job Type: Full time
Industry: Technology
Work Experience: 5-8 years
Last Activity Time: 07/07/2025 17:19
City: Bangalore
State/Province: Karnataka
Country: India
Zip/Postal Code: 560002

Job Description
Role: AWS DevOps & Infra Lead
Experience: 5-7 yrs
Location: Remote (preferred Bangalore)
Employment Type: Full Time

We are seeking an experienced AWS DevOps & Infrastructure Lead with 5–7 years of hands-on experience in building and managing cloud infrastructure, automation, and DevOps pipelines, along with AWS service enablement expertise. In this role, you will lead infrastructure design and delivery while also enabling teams to effectively leverage AWS services across the organization.

Key Responsibilities:
- Design, implement, and manage secure, scalable, and resilient AWS infrastructure.
- Drive AWS service enablement by onboarding, configuring, and managing AWS services (EC2, S3, RDS, Lambda, EKS, API Gateway, CloudFront, etc.) for internal product teams.
- Build and maintain robust Infrastructure as Code (IaC) using Terraform, CloudFormation, or AWS CDK.
- Develop and manage CI/CD pipelines with tools such as Jenkins, GitHub Actions, or AWS CodePipeline.
- Support teams in selecting and integrating the right AWS services for their use cases and workloads.
- Provide governance, guardrails, and templates to help teams deploy AWS services securely and efficiently.
- Implement centralized logging, monitoring, and alerting using CloudWatch, ELK, Grafana, or similar.
- Ensure compliance with security, backup, DR, and cost-optimization practices.
- Collaborate with Security and Application teams to enforce DevSecOps and cloud governance policies.
- Mentor junior DevOps engineers and lead by example on automation and cloud best practices.

Required Skills & Experience:
- 5–7 years of experience in DevOps and AWS cloud infrastructure roles.
- Proven expertise in AWS service enablement and lifecycle management.
- Strong hands-on experience with Terraform and/or CloudFormation.
- Proficiency with CI/CD tools (Jenkins, GitHub Actions, GitLab CI, etc.).
- Deep knowledge of core AWS services and best practices around VPC, IAM, EC2, S3, RDS, CloudTrail, etc.
- Solid experience with Docker and Kubernetes/EKS.
- Experience in cost monitoring, optimization, and usage governance on AWS.
- Strong scripting skills (Shell, Python, or similar) and experience with Linux-based systems.
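The "governance, guardrails, and templates" responsibility above often takes the form of automated policy checks run in CI before infrastructure changes apply. An illustrative sketch (the required tag set and the resource-plan shape are invented for this example; real setups would read the parsed Terraform/CloudFormation plan):

```python
REQUIRED_TAGS = {"owner", "environment", "cost-center"}  # hypothetical policy

def untagged_resources(resources):
    """Return names of resources missing any required governance tag:
    the kind of guardrail a DevOps lead wires into CI so that untagged
    infrastructure is flagged before an IaC change is applied. Tagging
    is also what makes per-team cost monitoring on AWS possible."""
    return [
        r["name"]
        for r in resources
        if not REQUIRED_TAGS.issubset(r.get("tags", {}).keys())
    ]

plan = [
    {"name": "orders-db",
     "tags": {"owner": "data", "environment": "prod", "cost-center": "42"}},
    {"name": "scratch-bucket", "tags": {"owner": "ml"}},  # missing two tags
]
print(untagged_resources(plan))  # ['scratch-bucket']
```

Failing the pipeline on a non-empty result turns the tagging policy from a document into an enforced guardrail.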

Aws Devops and Infra Lead

Hubli, Mangaluru, Mysuru, Bengaluru, Belgaum

5 - 7 years

INR 7.0 - 9.0 Lacs P.A.

Work from Office

Full Time

Role: AWS DevOps & Infra Lead
Experience: 5-7 yrs
Location: Remote (preferred Bangalore)
Employment Type: Full Time

We are seeking an experienced AWS DevOps & Infrastructure Lead with 5–7 years of hands-on experience in building and managing cloud infrastructure, automation, and DevOps pipelines, along with AWS service enablement expertise. In this role, you will lead infrastructure design and delivery while also enabling teams to effectively leverage AWS services across the organization.

Key Responsibilities:
- Design, implement, and manage secure, scalable, and resilient AWS infrastructure.
- Drive AWS service enablement by onboarding, configuring, and managing AWS services (EC2, S3, RDS, Lambda, EKS, API Gateway, CloudFront, etc.) for internal product teams.
- Build and maintain robust Infrastructure as Code (IaC) using Terraform, CloudFormation, or AWS CDK.
- Develop and manage CI/CD pipelines with tools such as Jenkins, GitHub Actions, or AWS CodePipeline.
- Support teams in selecting and integrating the right AWS services for their use cases and workloads.
- Provide governance, guardrails, and templates to help teams deploy AWS services securely and efficiently.
- Implement centralized logging, monitoring, and alerting using CloudWatch, ELK, Grafana, or similar.
- Ensure compliance with security, backup, DR, and cost-optimization practices.
- Collaborate with Security and Application teams to enforce DevSecOps and cloud governance policies.
- Mentor junior DevOps engineers and lead by example on automation and cloud best practices.

Required Skills & Experience:
- 5–7 years of experience in DevOps and AWS cloud infrastructure roles.
- Proven expertise in AWS service enablement and lifecycle management.
- Strong hands-on experience with Terraform and/or CloudFormation.
- Proficiency with CI/CD tools (Jenkins, GitHub Actions, GitLab CI, etc.).
- Deep knowledge of core AWS services and best practices around VPC, IAM, EC2, S3, RDS, CloudTrail, etc.
- Solid experience with Docker and Kubernetes/EKS.
- Experience in cost monitoring, optimization, and usage governance on AWS.
- Strong scripting skills (Shell, Python, or similar) and experience with Linux-based systems.

Sr Data Engineer-AWS

Bengaluru, Karnataka

0 - 8 years

Not disclosed

Remote

Full Time

Job Information
Number of Positions: 1
Date Opened: 06/20/2025
Job Type: Full time
Industry: IT Services
Work Experience: 5-8 years
Last Activity Time: 07/13/2025 11:32
City: Bangalore
State/Province: Karnataka
Country: India
Zip/Postal Code: 560002

Job Description
Job Title: Senior Data Engineer
Location: Remote
Experience: 5–8 Years
Employment Type: Full-Time

About the Role
Aptus Data Labs is looking for a talented and proactive Senior Data Engineer to help build the backbone of our enterprise data and AI initiatives. You'll work on modern data lake architectures and high-performance pipelines in AWS, enabling real-time insights and scalable analytics. This role reports to the Head – Data Platform and AI Lead, offering a unique opportunity to be part of a cross-functional team shaping the future of data-driven innovation.

Key Responsibilities
Data Engineering & Pipeline Development
- Design and develop reliable, reusable ETL/ELT pipelines using AWS Glue, Python, and Spark.
- Process structured and semi-structured data (e.g., JSON, Parquet, CSV) efficiently for analytics and AI workloads.
- Build automation and orchestration workflows using Airflow or AWS Step Functions.
Data Lake Architecture & Integration
- Implement AWS-native data lake/lakehouse architectures using S3, Redshift, Glue Catalog, and Lake Formation.
- Consolidate data from APIs, on-prem systems, and third-party sources into a centralized platform.
- Optimize data models and partitioning strategies for high-performance queries.
Security, IAM & Governance Support
- Ensure secure data architecture practices across AWS components using encryption, access control, and policy enforcement.
- Implement and manage AWS IAM roles and policies to control data access across services and users.
- Collaborate with platform and security teams to maintain compliance and audit readiness (e.g., HIPAA, GxP).
- Apply best practices in data security, privacy, and identity management in cloud environments.
DevOps & Observability
- Automate deployment of data infrastructure using CI/CD pipelines (GitHub Actions, Jenkins, or AWS CodePipeline).
- Create Docker-based containers and manage workloads using ECS or EKS.
- Monitor pipeline health, failures, and performance using CloudWatch and custom logs.
Collaboration & Communication
- Partner with the Data Platform Lead and AI Lead to align engineering efforts with AI product goals.
- Engage with analysts, data scientists, and business teams to gather requirements and deliver data assets.
- Contribute to documentation, code reviews, and architectural discussions with clarity and confidence.

Required Qualifications
- Bachelor's degree in Computer Science, Engineering, or equivalent.
- 5–8 years of experience in data engineering, preferably in AWS cloud environments.
- Proficient in Python, SQL, and AWS services: Glue, Redshift, S3, IAM, Lake Formation.
- Experience managing IAM roles, security policies, and cloud-based data access controls.
- Hands-on experience with orchestration tools like Airflow or AWS Step Functions.
- Exposure to CI/CD practices and infrastructure automation.
- Strong interpersonal and communication skills, able to convey technical ideas clearly.

Preferred Additional Skills
- Proficiency in Databricks, Unity Catalog, and Spark-based distributed data processing.
- Background in Pharma, Life Sciences, or other regulated environments (GxP, HIPAA).
- Experience with EMR, Snowflake, or hybrid-cloud data platforms.
- Experience with BI/reporting tools such as Power BI or QuickSight.
- Knowledge of integration tools (Boomi, Kafka) or real-time streaming frameworks.

Ready to build data solutions that fuel AI innovation? Join Aptus Data Labs and play a key role in transforming raw data into enterprise intelligence.
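"Process structured and semi-structured data (e.g., JSON, Parquet, CSV)" usually begins with flattening nested JSON into tabular columns before writing columnar formats. A small illustrative sketch in plain Python (the sample event and the dotted-column convention are assumptions for the example; a Glue/Spark job would do the equivalent at scale):

```python
def flatten(record, parent_key="", sep="."):
    """Flatten nested JSON-style dicts into dotted column names: a common
    first step when landing semi-structured events into tabular formats
    such as Parquet for analytics queries."""
    flat = {}
    for key, value in record.items():
        full_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            flat.update(flatten(value, full_key, sep))  # recurse into nesting
        else:
            flat[full_key] = value
    return flat

event = {"id": 7, "user": {"name": "asha", "geo": {"country": "IN"}}}
print(flatten(event))  # {'id': 7, 'user.name': 'asha', 'user.geo.country': 'IN'}
```

Flattening up front keeps downstream SQL simple: `user.geo.country` becomes an ordinary filterable column instead of a nested struct.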

Lead AI Engineer

Bengaluru, Karnataka

0 - 8 years

Not disclosed

On-site

Full Time

Job Information
Number of Positions: 2
Date Opened: 07/13/2025
Job Type: Full time
Industry: Technology
Work Experience: 5-8 years
Last Activity Time: 07/13/2025 13:45
City: Bangalore
State/Province: Karnataka
Country: India
Zip/Postal Code: 560002

Job Description
Lead AI Engineer
We are seeking a passionate and driven AI engineer with a proven track record of building and deploying cutting-edge AI solutions. As a leader, you will be a driving force behind our AI strategy, providing technical guidance, fostering innovation, and ensuring the successful delivery of AI projects.

Responsibilities
- Designing and implementing AI and machine learning models to solve business problems
- Collaborating with cross-functional teams to understand requirements and deliver tailored AI solutions
- Conducting research and prototyping new AI algorithms and techniques
- Developing and maintaining AI models to improve performance and accuracy
- Analyzing and interpreting complex data sets to extract meaningful insights
- Documenting and presenting technical solutions to both technical and non-technical stakeholders
- Keeping up to date with the latest advancements in AI and machine learning technologies

Requirements
- Bachelor's or Master's degree in Computer Science or other related fields
- Strong technical and analytical skills (Python or R, SQL, predictive modelling, NLP/AI, cloud, Generative AI, MLOps, AIOps, etc.)
- Experience with Large Language Models and related tooling such as the OpenAI API, ChatGPT, GPT-4, Bard, LangChain, and HuggingFace Transformers
- Proven experience in developing and implementing Gen AI algorithms and models
- Solid background in machine learning, deep learning, and natural language processing
- Experience with AI frameworks such as TensorFlow, PyTorch, or Keras
- Proficient in handling and analyzing large datasets using SQL or NoSQL databases
- Strong programming skills in Python
- Experience in model deployment and operation using CI/CD and cloud platforms (AWS, Azure)
- Strong problem-solving and analytical skills
- Excellent communication and teamwork abilities
- Client interaction and project handling experience
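LLM work of the kind this role describes routinely involves fitting long documents into a bounded context window by splitting them into overlapping chunks (the approach LangChain-style tooling automates). A dependency-free sketch of that chunking step (window sizes here are tiny for illustration; real values depend on the model's token limit):

```python
def chunk_tokens(tokens, max_len=5, overlap=2):
    """Split a token sequence into overlapping windows: the chunking step
    used to fit long documents into an LLM context, e.g. for retrieval
    pipelines. Overlap preserves continuity across chunk boundaries so
    sentences cut at an edge still appear whole in a neighboring chunk."""
    if overlap >= max_len:
        raise ValueError("overlap must be smaller than max_len")
    step = max_len - overlap
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start:start + max_len])
        if start + max_len >= len(tokens):
            break  # last window already covers the tail
    return chunks

print(chunk_tokens(list(range(9)), max_len=5, overlap=2))
# [[0, 1, 2, 3, 4], [3, 4, 5, 6, 7], [6, 7, 8]]
```

Choosing the overlap is a recall/cost trade-off: more overlap means fewer boundary losses but more chunks to embed and search.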

Sr Data Engineer- Databricks

Bengaluru, Karnataka

0 - 8 years

Not disclosed

Remote

Full Time

Job Information
Number of Positions: 2
Date Opened: 07/13/2025
Job Type: Full time
Industry: IT Services
Work Experience: 5-8 years
Last Activity Time: 07/13/2025 13:28
City: Bangalore
State/Province: Karnataka
Country: India
Zip/Postal Code: 560002

Job Description
Experience: 5+ yrs
Location: Remote (preferred candidates to be in Bangalore)
Notice: candidates should be able to join within 30 days

Key Responsibilities:
- Design, implement, and optimize scalable data pipelines using Databricks and Apache Spark.
- Architect data lakes using Delta Lake, ensuring reliable and efficient data storage.
- Manage metadata, security, and lineage through Unity Catalog for governance and compliance.
- Ingest and process streaming data using Apache Kafka and real-time frameworks.
- Collaborate with ML engineers and data scientists on LLM-based AI/GenAI project pipelines.
- Apply CI/CD and DevOps practices to automate data workflows and deployments (e.g., with GitHub Actions, Jenkins, Terraform).
- Optimize query performance and data transformations using advanced SQL.
- Implement and uphold data governance, quality, and access control policies.
- Support production data pipelines and respond to issues and performance bottlenecks.
- Contribute to architectural decisions around data strategy and platform scalability.

Required Skills & Experience:
- 5+ years of experience in data engineering roles.
- Proven expertise in Databricks, Delta Lake, and Apache Spark (PySpark preferred).
- Deep understanding of Unity Catalog for fine-grained data governance and lineage tracking.
- Proficiency in SQL for large-scale data manipulation and analysis.
- Hands-on experience with Kafka for real-time data streaming.
- Solid understanding of CI/CD, infrastructure automation, and DevOps principles.
- Experience contributing to or supporting Generative AI / LLM projects with structured or unstructured data.
- Familiarity with cloud platforms (AWS, Azure, or GCP) and data services.
- Strong problem-solving, debugging, and system design skills.
- Excellent communication and collaboration abilities in cross-functional teams.

Sr Data Engineer-AWS

Hubli, Mangaluru, Mysuru, Bengaluru, Belgaum

4 - 7 years

INR 4.0 - 9.0 Lacs P.A.

Work from Office

Full Time

Job Title: Senior Data Engineer
Location: Remote
Experience: 5-8 Years
Employment Type: Full-Time

About the Role

Aptus Data Labs is looking for a talented and proactive Senior Data Engineer to help build the backbone of our enterprise data and AI initiatives. You'll work on modern data lake architectures and high-performance pipelines in AWS, enabling real-time insights and scalable analytics. This role reports to the Head - Data Platform and AI Lead, offering a unique opportunity to be part of a cross-functional team shaping the future of data-driven innovation.

Key Responsibilities

Data Engineering & Pipeline Development
- Design and develop reliable, reusable ETL/ELT pipelines using AWS Glue, Python, and Spark.
- Process structured and semi-structured data (e.g., JSON, Parquet, CSV) efficiently for analytics and AI workloads.
- Build automation and orchestration workflows using Airflow or AWS Step Functions.

Data Lake Architecture & Integration
- Implement AWS-native data lake/lakehouse architectures using S3, Redshift, Glue Catalog, and Lake Formation.
- Consolidate data from APIs, on-prem systems, and third-party sources into a centralized platform.
- Optimize data models and partitioning strategies for high-performance queries.

Security, IAM & Governance Support
- Ensure secure data architecture practices across AWS components using encryption, access control, and policy enforcement.
- Implement and manage AWS IAM roles and policies to control data access across services and users.
- Collaborate with platform and security teams to maintain compliance and audit readiness (e.g., HIPAA, GxP).
- Apply best practices in data security, privacy, and identity management in cloud environments.

DevOps & Observability
- Automate deployment of data infrastructure using CI/CD pipelines (GitHub Actions, Jenkins, or AWS CodePipeline).
- Create Docker-based containers and manage workloads using ECS or EKS.
- Monitor pipeline health, failures, and performance using CloudWatch and custom logs.

Collaboration & Communication
- Partner with the Data Platform Lead and AI Lead to align engineering efforts with AI product goals.
- Engage with analysts, data scientists, and business teams to gather requirements and deliver data assets.
- Contribute to documentation, code reviews, and architectural discussions with clarity and confidence.

Required Qualifications
- Bachelor's degree in Computer Science, Engineering, or equivalent.
- 5-8 years of experience in data engineering, preferably in AWS cloud environments.
- Proficient in Python, SQL, and AWS services: Glue, Redshift, S3, IAM, Lake Formation.
- Experience managing IAM roles, security policies, and cloud-based data access controls.
- Hands-on experience with orchestration tools like Airflow or AWS Step Functions.
- Exposure to CI/CD practices and infrastructure automation.
- Strong interpersonal and communication skills: able to convey technical ideas clearly.

Preferred Additional Skills
- Proficiency in Databricks, Unity Catalog, and Spark-based distributed data processing.
- Background in Pharma, Life Sciences, or other regulated environments (GxP, HIPAA).
- Experience with EMR, Snowflake, or hybrid-cloud data platforms.
- Experience with BI/reporting tools such as Power BI or QuickSight.
- Knowledge of integration tools (Boomi, Kafka) or real-time streaming frameworks.

Ready to build data solutions that fuel AI innovation? Join Aptus Data Labs and play a key role in transforming raw data into enterprise intelligence.
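The pipeline responsibilities above follow the classic extract-transform-load pattern: pull records from an upstream source, clean and normalise them, and write them out for analytics. As a purely illustrative sketch in plain Python (not the AWS Glue/Spark code these roles would actually write, and with made-up field names), the shape of such a pipeline looks like:

```python
import csv
import io
import json

def extract(raw_json):
    """E: parse a raw JSON payload from an upstream source into records."""
    return json.loads(raw_json)

def transform(records):
    """T: drop incomplete rows and normalise value types."""
    return [
        {"id": r["id"], "value": float(r["value"])}
        for r in records
        if "value" in r  # skip records missing the required field
    ]

def load(records):
    """L: serialise the cleaned records to CSV for downstream consumers."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["id", "value"])
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()

# Hypothetical input: one record is missing its "value" and gets filtered out.
raw = '[{"id": 1, "value": "3.5"}, {"id": 2}, {"id": 3, "value": "7"}]'
print(load(transform(extract(raw))))
```

In a real deployment each stage would be a separate, retryable task orchestrated by Airflow or Step Functions, with the CSV step replaced by partitioned Parquet writes to S3.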

Sr Data Engineer- Databricks

Hubli, Mangaluru, Mysuru, Bengaluru, Belgaum

5 - 10 years

INR 25.0 - 27.5 Lacs P.A.

Work from Office

Full Time

Experience: 5+ years
Location: Remote (candidates based in Bangalore preferred)
Notice Period: Looking for candidates able to join within 30 days

Key Responsibilities:
- Design, implement, and optimize scalable data pipelines using Databricks and Apache Spark.
- Architect data lakes using Delta Lake, ensuring reliable and efficient data storage.
- Manage metadata, security, and lineage through Unity Catalog for governance and compliance.
- Ingest and process streaming data using Apache Kafka and real-time frameworks.
- Collaborate with ML engineers and data scientists on LLM-based AI/GenAI project pipelines.
- Apply CI/CD and DevOps practices to automate data workflows and deployments (e.g., with GitHub Actions, Jenkins, Terraform).
- Optimize query performance and data transformations using advanced SQL.
- Implement and uphold data governance, quality, and access control policies.
- Support production data pipelines and respond to issues and performance bottlenecks.
- Contribute to architectural decisions around data strategy and platform scalability.

Requirements

Required Skills & Experience:
- 5+ years of experience in data engineering roles.
- Proven expertise in Databricks, Delta Lake, and Apache Spark (PySpark preferred).
- Deep understanding of Unity Catalog for fine-grained data governance and lineage tracking.
- Proficiency in SQL for large-scale data manipulation and analysis.
- Hands-on experience with Kafka for real-time data streaming.
- Solid understanding of CI/CD, infrastructure automation, and DevOps principles.
- Experience contributing to or supporting Generative AI / LLM projects with structured or unstructured data.
- Familiarity with cloud platforms (AWS, Azure, or GCP) and data services.
- Strong problem-solving, debugging, and system design skills.
- Excellent communication and collaboration abilities in cross-functional teams.

Lead AI Engineer

Hubli, Mangaluru, Mysuru, Bengaluru, Belgaum

4 - 7 years

INR 10.0 - 15.0 Lacs P.A.

Work from Office

Full Time

Lead AI Engineer

We are seeking a passionate and driven AI engineer with a proven track record of building and deploying cutting-edge AI solutions. As a leader, you will be a driving force behind our AI strategy, providing technical guidance, fostering innovation, and ensuring the successful delivery of AI projects.

Responsibilities
- Designing and implementing AI and machine learning models to solve business problems
- Collaborating with cross-functional teams to understand requirements and deliver tailored AI solutions
- Conducting research and prototyping new AI algorithms and techniques
- Developing and maintaining AI models to improve performance and accuracy
- Analyzing and interpreting complex data sets to extract meaningful insights
- Documenting and presenting technical solutions to both technical and non-technical stakeholders
- Keeping up to date with the latest advancements in AI and machine learning technologies

Requirements
- Bachelor's or Master's degree in Computer Science or other related fields
- Strong technical and analytical skills (Python or R, SQL, predictive modelling, NLP/AI, cloud, Generative AI, MLOps, AIOps, etc.)
- Experience with Large Language Models and tooling such as the OpenAI API, ChatGPT, GPT-4, Bard, LangChain, and HuggingFace Transformers
- Proven experience in developing and implementing Gen AI algorithms and models
- Solid background in machine learning, deep learning, and natural language processing
- Experience with AI frameworks such as TensorFlow, PyTorch, or Keras
- Proficient in handling and analyzing large datasets using SQL or NoSQL databases
- Strong programming skills in Python
- Experience in model deployment and operation using CI/CD and cloud platforms (AWS, Azure)
- Strong problem-solving and analytical skills
- Excellent communication and teamwork abilities
- Client interaction and project handling experience



