Job Title: Senior Machine Learning Engineer
Location: Gurgaon, India (On-site)
Experience: 5+ Years
Type: Full-time / Contract

About the Role
We are looking for an experienced Machine Learning Engineer with a strong background in building, deploying, and scaling ML models in production environments. You will work closely with Data Scientists, Engineers, and Product teams to translate business challenges into data-driven solutions and build robust, scalable ML pipelines. This is a hands-on role requiring a blend of applied machine learning, data engineering, and software development skills.

Key Responsibilities
- Design, build, and deploy machine learning models to solve real-world business problems
- Work on the end-to-end ML lifecycle: data preprocessing, feature engineering, model selection, training, evaluation, deployment, and monitoring
- Collaborate with cross-functional teams to identify opportunities for machine learning across products and workflows
- Develop and optimize scalable data pipelines to support model development and inference
- Implement model retraining, versioning, and performance tracking in production
- Ensure models are interpretable, explainable, and aligned with fairness, ethics, and compliance standards
- Continuously evaluate new ML techniques and tools to improve accuracy and efficiency
- Document processes, experiments, and findings for reproducibility and team knowledge-sharing

Requirements
- 5+ years of hands-on experience in machine learning, applied data science, or related roles
- Strong foundation in ML algorithms (regression, classification, clustering, NLP, time series, etc.)
- Experience with production-level ML deployment using tools such as MLflow, Kubeflow, Airflow, FastAPI, or similar
- Proficiency in Python and libraries such as scikit-learn, TensorFlow, PyTorch, XGBoost, pandas, and NumPy
- Experience with cloud platforms (AWS, GCP, or Azure) and containerized environments (Docker, Kubernetes)
- Strong understanding of software engineering principles and experience with Git, CI/CD, and version control
- Experience with large datasets, distributed systems (Spark/Databricks), and SQL/NoSQL databases
- Excellent problem-solving, communication, and collaboration skills

Nice to Have
- Experience with LLMs, Generative AI, or transformer-based models
- Familiarity with MLOps best practices and infrastructure as code (e.g., Terraform)
- Experience working in regulated industries (e.g., finance, healthcare)
- Contributions to open-source projects or ML research papers

Why Join Us
- Work on impactful problems with cutting-edge ML technologies
- Collaborate with a diverse, expert team across engineering, data, and product
- Flexible working hours and remote-first culture
- Opportunities for continuous learning, mentorship, and growth
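For illustration only, here is a minimal sketch of the model-serving side of the ML lifecycle this role describes, assuming a pre-trained scikit-learn classifier saved with joblib; the service name, artifact path, and feature fields are hypothetical, not taken from the posting.

```python
# Hypothetical sketch: serving a trained scikit-learn model behind a FastAPI
# endpoint, one common pattern for the "deployment" stage described above.
# Model path, feature names, and payload shape are assumptions.
import joblib
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="churn-model")  # hypothetical service name

# Load a previously trained and versioned model artifact (assumed to exist).
model = joblib.load("artifacts/churn_model_v3.joblib")

class Features(BaseModel):
    tenure_months: float
    monthly_spend: float
    support_tickets: int

@app.post("/predict")
def predict(features: Features) -> dict:
    # Column order must match the training pipeline (assumption).
    x = np.array([[features.tenure_months,
                   features.monthly_spend,
                   features.support_tickets]])
    proba = float(model.predict_proba(x)[0, 1])
    return {"churn_probability": proba, "model_version": "v3"}
```

A service like this would typically be run with uvicorn, packaged in a Docker image, and tracked/versioned through a registry such as MLflow, in line with the deployment and versioning responsibilities listed above.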
Job Title: Solution Architect
Locations: Gurugram, Hyderabad, Bengaluru
Work Mode: Hybrid | Remote flexibility available
Experience: 8+ years
Industry: Fintech
Employment Type: Full-time

About the Role
We are hiring a Solution Architect for a high-growth Fintech client. This role is ideal for someone who can translate complex business requirements into scalable, secure, and high-performance technical solutions. You will work closely with product, engineering, and business teams to define architecture strategies and lead their implementation.

Key Responsibilities
- Architect and design scalable, secure, and resilient financial systems
- Translate business and product requirements into architectural blueprints
- Collaborate with engineering teams to ensure delivery aligns with the defined architecture
- Guide development teams through design, implementation, and integration processes
- Lead technical evaluations, architecture reviews, and proof-of-concept initiatives
- Ensure compliance with performance, security, and regulatory requirements
- Stay current on emerging technologies and apply them where relevant

Requirements
- 8+ years of experience in software development, with 3+ years in a solution or technical architect role
- Expertise in microservices, API design, and distributed system architecture
- Strong programming skills in Java, Python, Node.js, or similar languages
- Proven experience with cloud platforms (AWS, Azure, or GCP)
- Solid understanding of DevOps, CI/CD, containers, and infrastructure as code
- Familiarity with architectural patterns, security best practices, and performance optimization
- Experience in Fintech or financial services environments is preferred
- Strong communication and stakeholder collaboration skills

Preferred Qualifications
- Cloud certifications (e.g., AWS Solutions Architect, Azure Architect)
- Exposure to event-driven architecture and domain-driven design
- Background in payments, banking, lending, or digital finance platforms

Work Mode: Hybrid from Gurugram, Hyderabad, or Bengaluru; remote candidates will be considered based on experience and alignment with client needs.
Senior Data Engineer | Gurugram (Onsite). 5+ years' experience with Azure Data Services, Databricks, PySpark, SQL & Soda. Build scalable data pipelines, ensure data quality, support governance and CI/CD, and mentor juniors; Kafka & Airflow exposure is a plus.
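As a rough illustration of the pipeline and data-quality work described above, here is a minimal PySpark sketch; the storage paths, column names, and thresholds are placeholders, and in practice checks like these would usually be defined declaratively in a tool such as Soda rather than hand-coded.

```python
# Minimal sketch, assuming a Databricks/PySpark environment: ingest raw events,
# apply a basic quality gate, and write a curated table. All paths and column
# names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_curation").getOrCreate()

# Hypothetical raw landing zone on Azure Data Lake Storage.
raw = spark.read.parquet("abfss://raw@examplelake.dfs.core.windows.net/orders/")

# Basic quality gate: required keys must be non-null and amounts non-negative.
curated = (
    raw.filter(F.col("order_id").isNotNull())
       .filter(F.col("amount") >= 0)
       .withColumn("ingested_at", F.current_timestamp())
)

# Fail the pipeline run if the null rate on the key column exceeds a threshold.
null_rate = raw.filter(F.col("order_id").isNull()).count() / max(raw.count(), 1)
if null_rate > 0.01:
    raise ValueError(f"order_id null rate {null_rate:.2%} exceeds threshold")

curated.write.mode("overwrite").parquet(
    "abfss://curated@examplelake.dfs.core.windows.net/orders/"
)
```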
Design and optimize scalable data pipelines using Python, Scala, and SQL. Work with AWS services, Redshift, Terraform, Docker, and Jenkins. Implement CI/CD, manage infrastructure as code, and ensure efficient data flow across systems.
Responsibilities:
* Design, develop & maintain data pipelines using Python, Terraform & SQL
* Collaborate with cross-functional teams on ETL processes
* Optimize database performance through MySQL tuning
Responsibilities:
* Design, develop & maintain data pipelines using Python, AWS & SQL.
* Collaborate with cross-functional teams on ETL projects.
* Optimize performance & scalability of data systems.
Job Title: Lead Platform Engineer – AWS Data Platform
Location: Hybrid – Hyderabad, Telangana
Experience: 10+ years
Employment Type: Full-Time

About the Role
Infoslab is hiring on behalf of our client, a leading healthcare technology company committed to transforming healthcare through data. We are seeking a Lead Platform Engineer to architect, implement, and lead the development of a secure, scalable, and cloud-native data platform on AWS. This role combines deep technical expertise with leadership responsibilities. You will build the foundation that supports critical business intelligence, analytics, and machine learning applications across the organization.

Key Responsibilities
- Architect and build a highly available, cloud-native data platform using AWS services such as S3, Glue, Redshift, Lambda, and ECS.
- Design reusable platform components and frameworks to support data engineering, analytics, and ML pipelines.
- Build and maintain CI/CD pipelines, GitOps workflows, and infrastructure as code using Terraform.
- Drive observability, operational monitoring, and incident response processes across environments.
- Ensure platform security, compliance (HIPAA, SOC 2), and audit-readiness in partnership with InfoSec.
- Lead and mentor a team of platform engineers, promoting best practices in DevOps and cloud infrastructure.
- Collaborate with cross-functional teams to deliver reliable and scalable data platform capabilities.

Required Skills and Experience
- 10+ years of experience in platform engineering, DevOps, or infrastructure roles with a data focus.
- 3+ years in technical leadership or platform engineering management.
- Deep experience with AWS services, including S3, Glue, Redshift, Lambda, ECS, and Athena.
- Strong hands-on experience with Python or Scala, and automation tooling.
- Proficient in Terraform and CI/CD tools (GitHub Actions, Jenkins, etc.).
- Advanced knowledge of Apache Spark for both batch and streaming workloads.
- Proven track record of building secure, scalable, and compliant infrastructure.
- Strong understanding of observability, reliability engineering, and infrastructure automation.

Preferred Qualifications
- Experience with containerization and orchestration (Docker, Kubernetes).
- Familiarity with Data Mesh principles or domain-driven data platform design.
- Background in healthcare or other regulated industries.
- Experience integrating data platforms with BI tools such as Tableau or Looker.

Why Join
- Contribute to a mission-driven client transforming healthcare through intelligent data platforms.
- Lead high-impact platform initiatives that support diagnostics, research, and machine learning.
- Work with modern engineering practices including IaC, GitOps, and serverless architectures.
- Be part of a collaborative, hybrid work culture focused on innovation and technical excellence.
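Purely as a sketch of the S3/Lambda/Redshift plumbing this platform involves, the following hypothetical Lambda handler submits a COPY statement to Redshift via the Redshift Data API whenever new objects land in S3; the cluster, database, user, table, and IAM role names are assumptions, not details from the posting.

```python
# Illustrative sketch only: an AWS Lambda handler that loads newly arrived S3
# objects into a Redshift staging table using the Redshift Data API.
import boto3

redshift_data = boto3.client("redshift-data")

CLUSTER_ID = "analytics-cluster"  # hypothetical
DATABASE = "analytics"            # hypothetical
DB_USER = "etl_user"              # hypothetical
IAM_ROLE = "arn:aws:iam::123456789012:role/redshift-copy-role"  # hypothetical

def handler(event, context):
    # Triggered by an S3 put notification; copy each new object into staging.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        copy_sql = (
            f"COPY staging.events FROM 's3://{bucket}/{key}' "
            f"IAM_ROLE '{IAM_ROLE}' FORMAT AS PARQUET;"
        )
        # Asynchronously submit the load; status can be polled separately.
        redshift_data.execute_statement(
            ClusterIdentifier=CLUSTER_ID,
            Database=DATABASE,
            DbUser=DB_USER,
            Sql=copy_sql,
        )
    return {"status": "submitted"}
```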
Job Title: AWS Platform Data Engineer
Location: Hyderabad (Onsite/Hybrid)
Experience: 6+ Years
Employment Type: Full-Time

About the Role
We are hiring Data Platform Engineers for a Healthcare Data Analytics client, based out of our Hyderabad office in hybrid mode (3 days a week onsite). Candidates with 6 to 8 years of experience who can join immediately are preferred.

Must-Have Hands-on Skills
- Python, Scala, SQL, and Redshift
- AWS: experience with the AWS ecosystem
- Terraform: good knowledge of infrastructure as code using Terraform
- Jenkins: CI/CD pipelines using Jenkins
- Docker: containerization with Docker

Key Responsibilities
- Design, develop, and maintain robust data pipelines and platform components.
- Collaborate with cross-functional teams to ensure seamless data integration and availability.
- Implement infrastructure as code using Terraform.
- Manage containerized environments using Docker.
- Set up and maintain CI/CD pipelines with Jenkins.
- Optimize performance and scalability of data systems on AWS.
- Write clean, maintainable, and efficient code in Python and Scala.

Why Join Us
- Work on cutting-edge data infrastructure that powers real-world analytics and ML initiatives.
- Collaborate in a fast-paced, innovation-driven environment.
- Get exposure to modern DevOps, IaC, and data engineering practices at scale.
- Flexible working model with strong career growth and learning opportunities.
6+ years of hands-on experience in data engineering or platform engineering roles. Strong coding skills in Python, Scala, and SQL. Expertise in AWS data ecosystem: EC2, S3, Glue, Redshift, Lambda, ETL pipelines, data architecture.
Senior Data Engineer: Design, build, & optimize data platforms on AWS. Expertise in Python, Scala, SQL, Redshift, Terraform, Docker, & Jenkins. Skills include Glue, Lambda, Apache Airflow, Spark, & Databricks. Join our Hyderabad team.
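For context, a minimal Apache Airflow DAG of the kind a role like this would orchestrate might look like the sketch below (Airflow 2.4+ style); the DAG id, schedule, and task bodies are placeholders rather than anything specified in the posting.

```python
# Hedged sketch: a minimal Airflow DAG wiring together the kind of extract and
# Spark/Glue transform steps listed above. Helper functions are stubs.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_to_s3():
    # Placeholder: land source data in S3 (e.g., via boto3 or a Glue job).
    pass

def transform_with_spark():
    # Placeholder: trigger a Spark/Databricks transformation step.
    pass

with DAG(
    dag_id="daily_orders_pipeline",   # hypothetical
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_to_s3",
                             python_callable=extract_to_s3)
    transform = PythonOperator(task_id="transform_with_spark",
                               python_callable=transform_with_spark)

    extract >> transform
```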