Home
Jobs
Companies
Resume

643 SageMaker Jobs - Page 7

Filter
Filter Interviews
Min: 0 years
Max: 25 years
Min: ₹0
Max: ₹10000000
Set up a job alert
JobPe aggregates results for easy access, but you apply directly on the original job portal.

5.0 - 10.0 years

7 - 10 Lacs

Hyderābād

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

AWS Data Engineer - Senior

We are seeking a highly skilled and motivated, hands-on AWS Data Engineer with 5-10 years of experience in AWS Glue, PySpark, AWS Redshift, S3, and Python to join our dynamic team. As a Data Engineer, you will be responsible for designing, developing, and optimizing data pipelines and solutions that support business intelligence, analytics, and large-scale data processing. You will work closely with data scientists, analysts, and other engineering teams to ensure seamless data flow across our systems.

Technical Skills (must have):
- Strong experience with AWS data services: Glue, Lambda, EventBridge, Kinesis, S3/EMR, Redshift, RDS, Step Functions, Airflow, and PySpark
- Strong exposure to IAM, CloudTrail, cluster optimization, Python, and SQL
- Expertise in data design, STTM, data models, data component design, automated testing, code coverage, UAT support, deployment, and go-live
- Experience with version control systems such as SVN and Git
- Create and manage AWS Glue crawlers and jobs to automate data cataloging and ingestion across structured and unstructured data sources
- Strong experience building ETL pipelines with AWS Glue, managing crawlers, and working with the Glue Data Catalog
- Proficiency in AWS Redshift: designing and managing clusters, writing complex SQL queries, and optimizing query performance
- Enable data consumption from reporting and analytics business applications using AWS services (e.g., QuickSight, SageMaker, JDBC/ODBC connectivity)
Behavioural skills:
- Willing to work 5 days a week from the ODC / client location (depending on the project, hybrid 3 days a week)
- Ability to lead developers and engage with client stakeholders to drive technical decisions
- Ability to do technical design and POCs: help build and analyse the logical data model, required entities, relationships, data constraints, and dependencies, focused on enabling reporting and analytics business use cases
- Able to work in an Agile environment
- Strong communication skills

Good to have:
- Exposure to Financial Services, Wealth and Asset Management
- Exposure to data science and full-stack technologies; GenAI will be an added advantage

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
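As a deliberately simplified illustration of the Glue/PySpark ETL work this role describes - cleaning, filtering, and aggregating records before a Redshift load - here is a pure-Python sketch. The field names and values are hypothetical; a real Glue job would express the same steps with GlueContext/DynamicFrame or Spark DataFrame operations.

```python
# Illustrative stand-in for the kind of transform an AWS Glue / PySpark job
# performs: read raw records, clean them, and aggregate for a Redshift load.
# In a real Glue job these steps would be DynamicFrame / DataFrame operations;
# plain Python is used here to show the same logic end to end.

def clean(record):
    """Normalise a raw record; return None for rows missing required fields."""
    if not record.get("region") or record.get("amount") is None:
        return None
    return {"region": record["region"].strip().upper(),
            "amount": float(record["amount"])}

def aggregate_by_region(records):
    """Sum cleaned amounts per region (what a groupBy().sum() would do)."""
    totals = {}
    for raw in records:
        row = clean(raw)
        if row is None:
            continue  # equivalent to a filter / dropna step
        totals[row["region"]] = totals.get(row["region"], 0.0) + row["amount"]
    return totals

raw = [
    {"region": " south ", "amount": "120.5"},
    {"region": "SOUTH", "amount": "79.5"},
    {"region": "north", "amount": "40"},
    {"region": "", "amount": "10"},       # dropped: missing region
    {"region": "north", "amount": None},  # dropped: missing amount
]
print(aggregate_by_region(raw))  # {'SOUTH': 200.0, 'NORTH': 40.0}
```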

Posted 1 week ago

Apply

5.0 years

3 - 5 Lacs

Hyderābād

On-site

ML Ops Engineer (Senior Consultant)

Key Responsibilities:
- Lead the design, implementation, and maintenance of scalable ML infrastructure.
- Collaborate with data scientists to deploy, monitor, and optimize machine learning models.
- Automate complex data processing workflows and ensure data quality.
- Optimize and manage cloud resources for cost-effective operations.
- Develop and maintain robust CI/CD pipelines for ML models.
- Troubleshoot and resolve advanced issues related to ML infrastructure and deployments.
- Mentor and guide junior team members, fostering a culture of continuous learning.
- Work closely with cross-functional teams to understand requirements and deliver innovative solutions.
- Drive best practices and standards for ML Ops within the organization.

Required Skills and Experience:
- Minimum 5 years of experience in infrastructure engineering.
- Proficiency in using EMR (Elastic MapReduce) for large-scale data processing.
- Extensive experience with SageMaker, ECR, S3, Lambda functions, cloud capabilities, and deployment of ML models.
- Strong proficiency in Python scripting and other programming languages.
- Experience with CI/CD tools and practices.
- Solid understanding of the machine learning lifecycle and best practices.
- Strong problem-solving skills and attention to detail.
- Excellent communication skills and ability to work collaboratively in a team environment.
- Demonstrated ability to take ownership and drive projects to completion.
- Proven experience in leading and mentoring teams.

Beneficial Skills and Experience:
- Experience with containerization and orchestration tools (Docker, Kubernetes).
- Familiarity with data visualization tools and techniques.
- Knowledge of big data technologies (Spark, Hadoop).
- Experience with version control systems (Git).
- Understanding of data governance and security best practices.
- Experience with monitoring and logging tools (Prometheus, Grafana).
- Stakeholder management skills and the ability to communicate technical concepts to non-technical audiences.
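One small piece of the CI/CD-for-ML work described above can be sketched as a promotion gate a pipeline might run before deploying a candidate model to an endpoint. The metric names (`auc`, `p95_latency_ms`) and thresholds below are illustrative assumptions, not part of any specific pipeline.

```python
# Hypothetical promotion gate for a model CI/CD pipeline: deploy the candidate
# (e.g., to a SageMaker endpoint) only if it beats production by a margin and
# stays within the latency budget. All names and thresholds are illustrative.

def should_promote(candidate, production, min_gain=0.01, max_latency_ms=200):
    """Return (decision, reason) for promoting a candidate model."""
    gain = candidate["auc"] - production["auc"]
    if gain < min_gain:
        return False, f"AUC gain {gain:.3f} below required {min_gain}"
    if candidate["p95_latency_ms"] > max_latency_ms:
        return False, f"p95 latency {candidate['p95_latency_ms']}ms over budget"
    return True, "candidate cleared for deployment"

prod = {"auc": 0.871, "p95_latency_ms": 150}
cand = {"auc": 0.894, "p95_latency_ms": 180}
ok, reason = should_promote(cand, prod)
print(ok, reason)  # True candidate cleared for deployment
```

In practice a gate like this runs as one pipeline stage, with the metrics read from an evaluation report rather than hard-coded.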

Posted 1 week ago

Apply

10.0+ years

5 - 7 Lacs

Gurgaon

On-site

You are as unique as your background, experience and point of view. Here, you’ll be encouraged, empowered and challenged to be your best self. You'll work with dynamic colleagues - experts in their fields - who are eager to share their knowledge with you. Your leaders will inspire and help you reach your potential and soar to new heights. Every day, you'll have new and exciting opportunities to make life brighter for our Clients - who are at the heart of everything we do. Discover how you can make a difference in the lives of individuals, families and communities around the world.

Job Description: Principal Consultant - DevOps

Are you ready to shine? At Sun Life, we empower you to be your most brilliant self.

Who are we?
Sun Life is a leading financial services company with a history of 150+ years that helps our clients achieve lifetime financial security and live healthier lives. We serve millions in Canada, the U.S., Asia, the U.K., and other parts of the world. We have a network of Sun Life advisors, third-party partners, and other distributors. Through them, we’re helping set our clients free to live their lives their way, from now through retirement. We’re working hard to support their wellness and health management goals, too. That way, they can enjoy what matters most to them. And that’s anything from running a marathon to helping their grandchildren learn to ride a bike. To do this, we offer a broad range of protection and wealth products and services to individuals, businesses, and institutions, including:
- Insurance: life, health, wellness, disability, critical illness, stop-loss, and long-term care insurance.
- Investments: mutual funds, segregated funds, annuities, and guaranteed investment products.
- Advice: financial planning and retirement planning services.
- Asset management:
Pooled funds, institutional portfolios, and pension funds.

With innovative technology, a strong distribution network and long-standing relationships with some of the world’s largest employers, we are today providing financial security to millions of people globally. Sun Life is a leading financial services company that helps our clients achieve lifetime financial security and live healthier lives, with strong insurance, asset management, investments, and financial advice portfolios. At Sun Life, our asset management business draws on the talent and experience of professionals from around the globe.

Sun Life Global Solutions (SLGS)
Established in the Philippines in 1991 and in India in 2006, Sun Life Global Solutions (formerly Asia Service Centres), a microcosm of Sun Life, is poised to harness the regions’ potential in a significant way - from India and the Philippines to the world. We are architecting and executing a BOLDER vision: being a Digital and Innovation Hub, shaping the business, driving transformation and superior client experience by providing expert Technology, Business and Knowledge Services and advanced solutions. We help our clients achieve lifetime financial security and live healthier lives - our core purpose and mission. Drawing on our collaborative and inclusive culture, we are recognised as a ‘Great Place to Work’, one of the ‘Top 100 Best Places to Work for Women’, and stand among the ‘Top 11 Global Business Services Companies’ across India and the Philippines.

The technology function at Sun Life Global Solutions is geared towards growing our existing business, deepening our client understanding, managing new-age technology systems, and demonstrating thought leadership. We are committed to building greater domain expertise and engineering ability, delivering end-to-end solutions for our clients, and taking a lead in intelligent automation.
Tech services at Sun Life Global Solutions have evolved in areas such as application development and management, support, testing, digital, data engineering and analytics, infrastructure services and project management. We are constantly expanding our strength in information technology and are looking for fresh talent who can bring ideas and values that align with our digital strategy.

Our Client Impact strategy is motivated by the need to create an inclusive culture, empowered by highly engaged people. We are entering a new world that focuses on doing purpose-driven work - the kind that fills your day with excitement and determination, because when you love what you do, it never feels like work. We want to create an environment where you feel empowered to act and are surrounded by people who challenge you, support you and inspire you to become the best version of yourself. As an employer, we not only want to attract top talent, but we want you to have the best Sun Life experience. We strive to Shine Together, Make Life Brighter & Shape the Future!

What will you do?
You will help implement automation, security, and speed-of-delivery solutions across Sun Life and act as a change agent for the adoption of a DevOps mindset. You will coach and mentor teams, IT leaders and business leaders and create and maintain ongoing learning journeys. You will play a critical role in supporting and guiding DevOps Engineers and technical leaders to ensure that DevOps practices are employed globally. You will act as a role model by demonstrating the right mindset, including a test-and-learn attitude, a bias for action, a passion to innovate and a willingness to learn. You will lead a team of highly skilled and collaborative individuals and will lead new-hire onboarding, talent development, retention, and succession planning.
Our engineering career framework helps our engineers understand the scope, collaborative reach, and levers for impact of every job role; it defines the key behaviours and deliverables specific to one’s role and team, and helps them plan their career with Sun Life.

Your scope of work / key responsibilities:
- Analyze, investigate, and recommend solutions for continuous improvement, process enhancements, pain points, and more efficient workflows.
- Create templates, standards, and models to facilitate future implementations, and adjust priorities when necessary.
- Collaborate and communicate with architects, designers, business system analysts, application analysts, operations teams and testing specialists to deliver fully automated ALM systems.
- Confidently speak up, bring people together, facilitate meetings, record minutes and actions, and rally the team towards a common goal.
- Deploy, configure, manage, and perform ongoing maintenance of technical infrastructure, including all DevOps tooling used by our Canadian IT squads.
- Set up and maintain fully automated CI/CD pipelines for multiple Java / .NET environments using tools like Bitbucket, Jenkins, Ansible, and Docker.
- Guide development teams with the preparation of releases for production.
This may include assisting in the automation of performance tests, validating infrastructure requirements, and guiding the team on system decisions.
- Create or improve automated deployment processes, techniques, and tools.
- Troubleshoot and resolve technical operational issues related to IT infrastructure.
- Review and analyze organizational needs and goals to determine future impacts on applications and systems.
- Ensure information security standards and requirements are incorporated into all solutions.
- Stay current with trends in emerging technologies and how they could apply to Sun Life.

Key qualifications and experience:
- 10+ years of continuous integration and delivery (CI/CD) experience in a systems development life cycle environment using Bitbucket, Jenkins, CDD, etc.
- Self-sufficient and experienced with either modern programming languages (e.g., Java or C#) or scripting languages such as Python (e.g., for SageMaker), YAML, or similar.
- Working knowledge of SQL, Tableau, and Grafana.
- Advanced knowledge of DevOps with a security and automation mindset.
- Knowledge of using and configuring build and orchestration tools such as Jenkins, SonarQube, Checkmarx, Snyk, Artifactory, Azure DevOps, Docker, Kubernetes, OpenShift, Ansible, and Continuous Delivery Director (CDD).
- Advanced knowledge of deployment tooling (e.g., Ansible, Chef) and containerization (Docker/Kubernetes).
- IaaS/PaaS/SaaS deployment and operations experience.
- Experience with native mobile development on iOS and/or Android is an asset.
- Experience with source code management tools such as Bitbucket, Git, and TFS.

Technical credentials: Java/Python, Jenkins, Ansible, Kubernetes, and so on.
Primary Location: Gurugram / Bengaluru
Schedule: 12:00 PM to 8:30 PM
Job Category: IT - Application Development
Posting End Date: 29/06/2025
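The release-guidance and automated-deployment duties above often include canary analysis after a rollout. Below is a minimal, hypothetical sketch of the decision logic a pipeline stage (e.g., in Jenkins) might apply; the thresholds and request counts are made up for illustration.

```python
# Illustrative canary-analysis check: compare the canary's error rate against
# the stable fleet and decide whether to promote, roll back, or keep waiting.
# Thresholds are hypothetical, not from any specific deployment system.

def canary_verdict(stable_errors, stable_total, canary_errors, canary_total,
                   max_ratio=2.0, min_requests=100):
    """Return 'promote', 'rollback', or 'wait' based on relative error rates."""
    if canary_total < min_requests:
        return "wait"  # not enough traffic yet for a meaningful comparison
    stable_rate = stable_errors / max(stable_total, 1)
    canary_rate = canary_errors / max(canary_total, 1)
    # Roll back if the canary errs more than max_ratio times the stable rate
    # (with a small floor so a near-zero stable rate doesn't trigger noise).
    if canary_rate > max_ratio * max(stable_rate, 0.001):
        return "rollback"
    return "promote"

print(canary_verdict(50, 10000, 3, 500))   # promote
print(canary_verdict(50, 10000, 20, 500))  # rollback
print(canary_verdict(50, 10000, 1, 40))    # wait
```

A pipeline would feed this from real metrics (e.g., a Grafana/Prometheus query) and gate the next deployment stage on the verdict.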

Posted 1 week ago

Apply

3.0 - 5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


OPENTEXT
OpenText is a global leader in information management, where innovation, creativity, and collaboration are the key components of our corporate culture. As a member of our team, you will have the opportunity to partner with the most highly regarded companies in the world, tackle complex issues, and contribute to projects that shape the future of digital transformation.

Your Impact
We are seeking a skilled and experienced Software Engineer with expertise in Large Language Models (LLMs), Java, Python, Kubernetes, Helm, and cloud technologies such as AWS. The ideal candidate will contribute to designing, developing, and maintaining scalable software solutions using a microservices architecture. This role offers an exciting opportunity to work with cutting-edge technologies in a collaborative environment.

What The Role Offers
- Design, develop, troubleshoot, and debug software programs for software enhancements and new products.
- Integrate Large Language Models (LLMs) into business applications to enhance functionality and user experience.
- Develop and maintain transformer-based models.
- Develop RESTful APIs and ensure seamless integration across services.
- Collaborate with cross-functional teams to gather requirements and translate them into technical solutions.
- Implement best practices for cloud-native development using AWS services such as EC2, Lambda, SageMaker, and S3.
- Deploy, manage, and scale containerized applications using Kubernetes (K8s) and Helm.
- Design enhancements, updates, and programming changes for portions and subsystems of application software, utilities, databases, and Internet-related tools.
- Analyse designs and determine the coding, programming, and integration activities required, based on general objectives and knowledge of the overall architecture of the product or solution.
- Collaborate and communicate with management and with internal and outsourced development partners regarding software systems design status, project progress, and issue resolution.
- Represent the software systems engineering team for all phases of larger and more complex development projects.
- Ensure system reliability, security, and performance through effective monitoring and troubleshooting.
- Write clean, efficient, and maintainable code following industry standards.
- Participate in code reviews, mentorship, and knowledge-sharing within the team.

What You Need To Succeed
- Bachelor's or Master's degree in Computer Science, Information Systems, or equivalent; typically 3-5 years of experience.
- Strong understanding of Large Language Models (LLMs) and experience applying them in real-world applications.
- Expertise in Elasticsearch or similar search and indexing technologies.
- Expertise in designing and implementing microservices architectures.
- Solid experience with AWS services such as EC2, VPC, ECR, EKS, and SageMaker for cloud deployment and management.
- Proficiency in container orchestration tools such as Kubernetes (K8s) and packaging/deployment tools like Helm.
- Strong problem-solving skills and the ability to troubleshoot complex issues.
- Strong experience in Java and Python development, with proficiency in frameworks like Spring Boot or Java EE.
- Good hands-on experience in designing and writing modular object-oriented code.
- Good knowledge of REST APIs, Spring, Spring Boot, and Hibernate.
- Excellent analytical, troubleshooting and problem-solving skills.
- Effective teamwork, both within the immediate team and across teams.
- Experience with version control and build tools such as Git, GitLab, Maven, Jenkins, and GitLab CI.
- Excellent communication and collaboration skills.
- Familiarity with Python for LLM-related tasks; working knowledge of RAG (Retrieval-Augmented Generation).
- Experience with NLP frameworks such as Hugging Face, OpenAI, or similar.
- Knowledge of database systems like PostgreSQL, MongoDB, or DynamoDB.
- Experience with observability tools like Prometheus, Grafana, or the ELK Stack.
- Experience working with event-driven architectures and messaging systems (e.g., Kafka, RabbitMQ).
- Experience with CI/CD pipelines, DevOps practices, and infrastructure as code (e.g., Terraform, CloudFormation).
- Familiarity with Agile/Scrum development methodologies.

One Last Thing
OpenText is more than just a corporation; it's a global community where trust is foundational, the bar is raised, and outcomes are owned. Join us on our mission to drive positive change through privacy, technology, and collaboration. At OpenText, we don't just have a culture; we have character. Choose us because you want to be part of a company that embraces innovation and empowers its employees to make a difference. OpenText's efforts to build an inclusive work environment go beyond simply complying with applicable laws. Our Employment Equity and Diversity Policy provides direction on maintaining a working environment that is inclusive of everyone, regardless of culture, national origin, race, color, gender, gender identification, sexual orientation, family status, age, veteran status, disability, religion, or other basis protected by applicable laws. If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please contact us at hr@opentext.com. Our proactive approach fosters collaboration, innovation, and personal growth, enriching OpenText's vibrant workplace. 46999
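The RAG knowledge this role asks for boils down to a retrieve-then-prompt loop. The toy sketch below uses term overlap in place of embeddings or an Elasticsearch index; the documents, query, and scoring are illustrative only, but the control flow (retrieve top-k, stuff into the prompt) is the standard pattern.

```python
# Toy retrieval step of a RAG pipeline: score documents by term overlap with
# the query and place the top-k passages into the LLM prompt. A production
# system would use embeddings and a vector / Elasticsearch index instead.

def tokenize(text):
    return [w.strip(".,?!").lower() for w in text.split()]

def retrieve(query, docs, k=2):
    """Rank docs by how many query terms they contain; return the top k."""
    q = set(tokenize(query))
    scored = [(len(q & set(tokenize(d))), d) for d in docs]
    scored.sort(key=lambda s: s[0], reverse=True)
    return [d for score, d in scored[:k] if score > 0]

docs = [
    "Helm charts package Kubernetes manifests for repeatable deployment.",
    "PostgreSQL supports JSONB columns for semi-structured data.",
    "Kubernetes schedules containers across a cluster of nodes.",
]
context = retrieve("How does Kubernetes deployment work?", docs)
prompt = ("Answer using only this context:\n" + "\n".join(context) +
          "\n\nQuestion: How does Kubernetes deployment work?")
print(context[0])  # the Helm/Kubernetes passage ranks first (2 matching terms)
```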

Posted 1 week ago

Apply

0 years

4 - 7 Lacs

Pune

Remote

Infrastructure Engineering

As part of the Infrastructure Engineering team at Convera, we are looking for motivated and experienced Voice Engineers who are eager to expand their expertise into the dynamic world of Amazon Connect - a cutting-edge, cloud-based contact center solution that offers complete customization with scalable cloud technology. If you're looking to advance your career in software development, AWS, or AI, this is the perfect opportunity to upskill and work on innovative solutions.

As a Voice Engineer, you will:
- Implement and optimize Amazon Connect cloud-based contact center solutions, including call and queue flows, agent experience, call recording, metrics analysis, Contact Lens, and CTR data insights.
- Act as a consultative technology expert, guiding the planning, design, implementation, and maintenance of the Amazon Connect architecture.
- Develop seamless interconnectivity between Amazon Connect services and related applications.
- Build and integrate applications using AWS services such as CloudWatch, Kinesis, S3, Lex, and Polly.
- Design robust software solutions, algorithms, and cloud architectures tailored to product requirements.
- Participate in all phases of the software development lifecycle, from requirement analysis and technical design to prototyping, coding, testing, deployment, and support.
- Collaborate with Scrum Masters, QA teams, and developers to ensure agile delivery of projects.
- Troubleshoot and resolve performance issues and software bugs efficiently.

Minimum Qualifications:
- Expertise in Amazon Connect, Amazon Lex, Lambda integration, S3, DynamoDB, CloudWatch, CloudFormation, IAM, CloudFront, JavaScript, Node.js, and Python (Amazon Connect / Amazon Lex experience is mandatory).
- Strong background in technical architecture, design, and implementation of Amazon Connect.
- Hands-on experience with telephony systems, VoIP technologies, and UCaaS solutions like Zoom Phone.
- Familiarity with contact center technologies, IVR solutions, and automation strategies.
- Proficiency in modern DevOps tools and techniques, including GitHub and CI/CD pipelines.
- Knowledge of object-oriented programming languages (Java, C#, C++, Python, Ruby).
- Experience working with SQL databases and fundamental database concepts.
- Understanding of AI/ML cloud services such as Amazon SageMaker, Bedrock, and Amazon Q.
- Bachelor’s degree in Computer Science or a related field.
- Strong analytical, problem-solving, and communication skills.
- Ability to collaborate effectively with globally distributed teams.

Preferred Qualifications:
- Experience working in an Agile DevOps environment.
- Knowledge of automated provisioning and maintenance in cloud environments.
- An innovative, self-motivated, and results-driven approach.
- Ability to thrive under pressure and meet tight deadlines.

Location: Remote, India (WFH)

About Convera
Convera is the largest non-bank B2B cross-border payments company in the world. Formerly Western Union Business Solutions, we leverage decades of industry expertise and technology-led payment solutions to deliver smarter money movements to our customers - helping them capture more value with every transaction. Convera serves more than 30,000 customers, ranging from small business owners to enterprise treasurers to educational institutions to financial institutions to law firms to NGOs. Our teams care deeply about the value we bring to our customers, which makes Convera a rewarding place to work. This is an exciting time for our organization as we build our team with growth-minded, results-oriented people who are looking to move fast in an innovative environment. As a truly global company with employees in over 20 countries, we are passionate about diversity; we seek and celebrate people from different backgrounds, lifestyles, and unique points of view. We want to work with the best people and ensure we foster a culture of inclusion and belonging.
We offer an abundance of competitive perks and benefits, including:
- Competitive salary
- Opportunity to earn an annual bonus
- Great career growth and development opportunities in a global organization
- A flexible approach to work

There are plenty of amazing opportunities at Convera for talented, creative problem solvers who never settle for good enough and are looking to transform business-to-business payments. #LI-KP1
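Since Amazon Connect / Lambda integration is central to this role, here is a minimal sketch of a Lambda handler a Connect contact flow might invoke. The attribute name (`customerTier`), queue names, and routing rule are hypothetical, though the event shape (`Details.ContactData.Attributes`) and the flat key/value response follow the Connect-to-Lambda integration pattern.

```python
# Minimal sketch of an AWS Lambda handler invoked from an Amazon Connect
# contact flow. Connect passes contact data under event["Details"], and the
# flow reads whatever flat string key/value map the function returns.

def lambda_handler(event, context=None):
    attrs = event.get("Details", {}).get("ContactData", {}).get("Attributes", {})
    tier = attrs.get("customerTier", "standard")
    # Hypothetical routing rule: premium callers go to a dedicated queue.
    queue = "PremiumSupport" if tier == "premium" else "GeneralSupport"
    # Connect expects a flat map of string keys and values in the response.
    return {"targetQueue": queue, "greeting": f"Routing you to {queue}"}

sample_event = {
    "Details": {"ContactData": {"Attributes": {"customerTier": "premium"}}}
}
print(lambda_handler(sample_event))
```

In a contact flow, the returned keys become external attributes that later blocks (e.g., "Set working queue") can reference.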

Posted 1 week ago

Apply

2.0 - 3.0 years

4 - 6 Lacs

Bengaluru

On-site

Job Information
Number of Positions: 1
Industry: Engineering
Date Opened: 06/09/2025
Job Type: Permanent
Work Experience: 2-3 years
City: Bangalore
State/Province: Karnataka
Country: India
Zip/Postal Code: 560037
Location: Bangalore

About Us
CloudifyOps is a company with DevOps and Cloud in our DNA. CloudifyOps enables businesses to become more agile and innovative through a comprehensive portfolio of services that addresses hybrid IT transformation, cloud transformation, and end-to-end DevOps workflows. We are a proud Advanced Partner of Amazon Web Services and have deep expertise in Microsoft Azure and Google Cloud Platform solutions. We are passionate about what we do. The novelty and the excitement of helping our customers accomplish their goals drives us to become excellent at what we do.

Job Description
Culture at CloudifyOps: Working at CloudifyOps is a rewarding experience! Great people, a work environment that thrives on creativity, and the opportunity to take on roles beyond a defined job description are just some of the reasons you should work with us.

About the Role:
We are seeking a proactive and technically skilled AI/ML Engineer with 2-3 years of experience to join our growing technology team. The ideal candidate will have hands-on expertise in AWS-based machine learning, agentic AI, and generative AI tools, especially within the Amazon AI ecosystem. You will play a key role in building intelligent, scalable solutions that address complex business challenges.

Key Responsibilities:
1. AWS-Based Machine Learning
- Develop, train, and fine-tune ML models on AWS SageMaker, Bedrock, and EC2.
- Implement serverless ML workflows using Lambda, Step Functions, and EventBridge.
- Optimize models for cost/performance using AWS Inferentia/Trainium.
2. MLOps & Productionization
- Build CI/CD pipelines for ML using AWS SageMaker Pipelines, MLflow, or Kubeflow.
- Containerize models with Docker and deploy via AWS EKS/ECS/Fargate.
- Monitor models in production using AWS CloudWatch and SageMaker Model Monitor.
3. Agentic AI Development
- Design autonomous agent systems (e.g., AutoGPT, BabyAGI) for task automation.
- Integrate multi-agent frameworks (LangChain, AutoGen) with AWS services.
- Implement RAG (Retrieval-Augmented Generation) for agent knowledge enhancement.
4. Generative AI & LLMs
- Fine-tune and deploy LLMs (GPT-4, Claude, Llama 2/3) using LoRA/QLoRA.
- Build generative AI apps (chatbots, content generators) with LangChain and LlamaIndex.
- Optimize prompts and evaluate LLM performance using AWS Bedrock / Amazon Titan.
5. Collaboration & Innovation
- Work with cross-functional teams to translate business needs into AI solutions.
- Collaborate with DevOps and Cloud Engineering teams to develop scalable, production-ready AI systems.
- Stay updated with cutting-edge AI research (arXiv, NeurIPS, ICML).
6. Governance & Documentation
- Implement model governance frameworks to ensure ethical AI/ML deployments.
- Design reproducible ML pipelines following MLOps best practices (versioning, testing, monitoring).
- Maintain detailed documentation for models, APIs, and workflows (Markdown, Sphinx, Read the Docs).
- Create runbooks for model deployment, troubleshooting, and scaling.

Technical Skills
- Programming: Python (PyTorch, TensorFlow, Hugging Face Transformers).
- AWS: SageMaker, Lambda, ECS/EKS, Bedrock, S3, IAM.
- MLOps: MLflow, Kubeflow, Docker, GitHub Actions / GitLab CI.
- Generative AI: prompt engineering, LLM fine-tuning, RAG, LangChain.
- Agentic AI: AutoGPT, BabyAGI, multi-agent orchestration.
- Data Engineering: SQL, PySpark, AWS Glue/EMR.

Soft Skills
- Strong problem-solving and analytical thinking.
- Ability to explain complex AI concepts to non-technical stakeholders.

What We’re Looking For
- Bachelor’s/Master’s in CS, AI, Data Science, or a related field.
- 2-3 years of industry experience in AI/ML engineering.
- Portfolio of deployed ML/AI projects (GitHub, blog, case studies).
- An AWS Certified Machine Learning - Specialty certification is good to have.

Why Join Us?
- Innovative projects: work on cutting-edge AI applications that push the boundaries of technology.
- Collaborative environment: join a team of passionate engineers and researchers committed to excellence.
- Career growth: opportunities for professional development and advancement in the rapidly evolving field of AI.

Equal opportunity employer
CloudifyOps is proud to be an equal opportunity employer with a global culture that embraces diversity. We are committed to providing an environment free of unfair discrimination and harassment. We do not discriminate based on age, race, color, sex, religion, national origin, disability, pregnancy, marital status, sexual orientation, gender reassignment, veteran status, or other protected category.
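The agentic-AI responsibilities above (AutoGPT-style agents, multi-agent frameworks) rest on one simple pattern: a planner emits tool calls that are dispatched to registered functions until a final answer is produced. The sketch below uses a scripted plan as a stand-in for LLM output; the tools and plan are illustrative only, not any framework's real API.

```python
# Toy illustration of the agent loop behind frameworks like LangChain or
# AutoGen: dispatch ('tool', name, args) steps to registered Python functions
# until a ('final', value) step is reached. A scripted plan replaces the LLM.

TOOLS = {
    "add": lambda a, b: a + b,
    "square": lambda a: a * a,
}

def run_agent(plan):
    """Execute a plan of tool calls; return (final_answer, observation_trace)."""
    observations = []
    for step in plan:
        if step[0] == "final":
            return step[1], observations
        _, name, args = step
        result = TOOLS[name](*args)  # dispatch to the registered tool
        observations.append((name, args, result))
    raise RuntimeError("plan ended without a final answer")

# A real agent would generate these steps from LLM output; here they are fixed.
plan = [("tool", "add", (2, 3)), ("tool", "square", (5,)), ("final", 25)]
answer, trace = run_agent(plan)
print(answer, trace)  # 25 [('add', (2, 3), 5), ('square', (5,), 25)]
```

In a real system each observation would be fed back to the LLM so the next step can depend on earlier results; that feedback loop is what the scripted plan elides.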

Posted 1 week ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Position Title: AI/ML Engineer
Company: Cyfuture India Pvt. Ltd.
Industry: IT Services and IT Consulting
Location: Sector 81, NSEZ, Noida (5 Days Work From Office)
Website: www.cyfuture.com

About Cyfuture
Cyfuture is a trusted name in IT services and cloud infrastructure, offering state-of-the-art data center solutions and managed services across platforms like AWS, Azure, and VMware. We are expanding rapidly in system integration and managed services, building strong alliances with global OEMs like VMware, AWS, Azure, HP, Dell, Lenovo, and Palo Alto.

Position Overview
We are hiring an experienced AI/ML Engineer to lead and shape our AI/ML initiatives. The ideal candidate will have hands-on experience in machine learning and artificial intelligence, with strong leadership capabilities and a passion for delivering production-ready solutions. This role involves end-to-end ownership of AI/ML projects, from strategy development to deployment and optimization of large-scale systems.

Key Responsibilities
- Lead and mentor a high-performing AI/ML team.
- Design and execute AI/ML strategies aligned with business goals.
- Collaborate with product and engineering teams to identify impactful AI opportunities.
- Build, train, fine-tune, and deploy ML models in production environments.
- Manage operations of LLMs and other AI models using modern cloud and MLOps tools.
- Implement scalable and automated ML pipelines (e.g., with Kubeflow or MLRun).
- Handle containerization and orchestration using Docker and Kubernetes.
- Optimize GPU/TPU resources for training and inference tasks.
- Develop efficient RAG pipelines with low latency and high retrieval accuracy.
- Automate CI/CD workflows for continuous integration and delivery of ML systems.

Key Skills & Expertise
1. Cloud Computing & Deployment
- Proficiency in AWS, Google Cloud, or Azure for scalable model deployment.
- Familiarity with cloud-native services like AWS SageMaker, Google Vertex AI, or Azure ML.
- Expertise in Docker and Kubernetes for containerized deployments.
- Experience with Infrastructure as Code (IaC) using tools like Terraform or CloudFormation.
2. Machine Learning & Deep Learning
- Strong command of frameworks: TensorFlow, PyTorch, scikit-learn, XGBoost.
- Experience with MLOps tools for integration, monitoring, and automation.
- Expertise in pre-trained models, transfer learning, and designing custom architectures.
3. Programming & Software Engineering
- Strong skills in Python (NumPy, Pandas, Matplotlib, SciPy) for ML development.
- Backend/API development with FastAPI, Flask, or Django.
- Database handling with SQL and NoSQL (PostgreSQL, MongoDB, BigQuery).
- Familiarity with CI/CD pipelines (GitHub Actions, Jenkins).
4. Scalable AI Systems
- Proven ability to build AI-driven applications at scale.
- Handling of large datasets, high-throughput requests, and real-time inference.
- Knowledge of distributed computing: Apache Spark, Dask, Ray.
5. Model Monitoring & Optimization
- Hands-on experience with model compression, quantization, and pruning.
- A/B testing and performance tracking in production.
- Knowledge of model retraining pipelines for continuous learning.
6. Resource Optimization
- Efficient use of compute resources: GPUs, TPUs, CPUs.
- Experience with serverless architectures to reduce cost.
- Auto-scaling and load balancing for high-traffic systems.
7. Problem-Solving & Collaboration
- Translate complex ML models into user-friendly applications.
- Work effectively with data scientists, engineers, and product teams.
- Write clear technical documentation and architecture reports.
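To make the model compression and quantization skills above concrete, here is a pure-Python sketch of symmetric post-training quantization: mapping float weights onto the int8 range with a single scale, then dequantizing and measuring the error introduced. The weight values are arbitrary examples, and real tooling (e.g., PyTorch quantization) works per tensor or per channel with calibration data.

```python
# Sketch of symmetric post-training quantization: floats -> int8 via one
# scale factor, then back, with the round-trip error bounded by scale/2.

def quantize(weights):
    """Map floats onto [-127, 127] using a single symmetric scale."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # guard all-zero input
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.81, -0.35, 0.02, -1.27, 0.64]
q, scale = quantize(weights)
restored = dequantize(q, scale)
max_err = max(abs(w - r) for w, r in zip(weights, restored))
print(q)
print(f"max round-trip error: {max_err:.4f} (bounded by scale/2 = {scale/2:.4f})")
```

The memory win is the point: each weight shrinks from a 4- or 8-byte float to one byte, at the cost of at most half a quantization step of error per weight.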

Posted 1 week ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


CACI India, RMZ Nexity, Tower 30 4th Floor Survey No.83/1, Knowledge City Raidurg Village, Silpa Gram Craft Village, Madhapur, Serilingampalle (M), Hyderabad, Telangana 500081, India Req #1097 02 May 2025 CACI International Inc is an American multinational professional services and information technology company headquartered in Northern Virginia. CACI provides expertise and technology to enterprise and mission customers in support of national security missions and government transformation for defense, intelligence, and civilian customers. CACI has approximately 23,000 employees worldwide. Headquartered in London, CACI Ltd is a wholly owned subsidiary of CACI International Inc., a publicly listed company on the NYSE with annual revenue in excess of US $6.2bn. Founded in 2022, CACI India is an exciting, growing and progressive business unit of CACI Ltd. CACI Ltd currently has over 2000 intelligent professionals and is now adding many more from our Hyderabad and Pune offices. Through a rigorous emphasis on quality, CACI India has grown considerably to become one of the UK's most well-respected technology centres. About Data Platform The Data Platform will be built and managed “as a Product” to support a Data Mesh organization. The Data Platform focuses on enabling decentralized management, processing, analysis and delivery of data, while enforcing corporate-wide federated governance on data and project environments across business domains. The goal is to empower multiple teams to create and manage high-integrity data and data products that are analytics- and AI-ready, and consumed internally and externally. What does a Data Infrastructure Engineer do? A Data Infrastructure Engineer will be responsible for developing, maintaining and monitoring the data platform infrastructure and operations. The infrastructure and pipelines you build will support data processing, data analytics, data science and data management across the CACI business.
The data platform infrastructure will conform to a zero-trust, least-privilege architecture, with strict adherence to data and infrastructure governance and control in a multi-account, multi-region AWS environment. You will use Infrastructure as Code and CI/CD to continuously improve, evolve and repair the platform. You will be able to design architectures and create reusable solutions that reflect the business needs. Responsibilities Will Include Collaborating across CACI departments to develop and maintain the data platform Building infrastructure and data architectures in CloudFormation and SAM Designing and implementing data processing environments and integrations using AWS PaaS such as Glue, EMR, SageMaker, Redshift, Aurora and Snowflake Building data processing and analytics pipelines as code, using Python, SQL, PySpark/Spark, CloudFormation, Lambda, Step Functions and Apache Airflow Monitoring and reporting on the data platform performance, usage and security Designing and applying security and access control architectures to secure sensitive data You Will Have 3+ years of experience in a Data Engineering role. Strong experience and knowledge of data architectures implemented in AWS using native AWS services such as S3, DataZone, Glue, EMR, SageMaker, Aurora and Redshift. Experience administering databases and data platforms Good coding discipline in terms of style, structure, versioning, documentation and unit tests Strong proficiency in CloudFormation, Python and SQL Knowledge and experience of relational databases such as Postgres and Redshift Experience using Git for code versioning and lifecycle management Experience operating to Agile principles and ceremonies Hands-on experience with CI/CD tools such as GitLab Strong problem-solving skills and ability to work independently or in a team environment. Excellent communication and collaboration skills.
A keen eye for detail, and a passion for accuracy and correctness in numbers Whilst not essential, the following skills would also be useful: Experience using Jira, or other agile project management and issue tracking software Experience with Snowflake Experience with Spatial Data Processing More About The Opportunity The Data Engineer role is an excellent opportunity, and CACI Services India reward their staff well with a competitive salary and impressive benefits package which includes: Learning: Budget for conferences, training courses and other materials Health Benefits: Family plan with 4 children and parents covered Future You: Matched pension and health care package We understand the importance of getting to know your colleagues. Company meetings are held every quarter, and a training/work brief weekend is held once a year, amongst many other social events. CACI is an equal opportunities employer. Therefore, we embrace diversity and are committed to a working environment where no one will be treated less favourably on the grounds of their sex, race, disability, sexual orientation, religion, belief or age. We have a Diversity & Inclusion Steering Group and we always welcome new people with fresh perspectives from any background to join the group. An inclusive and equitable environment enables us to draw on expertise and unique experiences and bring out the best in each other. We champion diversity, inclusion and wellbeing and we are supportive of Veterans and people from a military background. We believe that by embracing diverse experiences and backgrounds, we can collaborate to create better outcomes for our people, our customers and our society. Other details Pay Type Salary
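Data pipelines like the ones this role describes typically write to date-partitioned S3 prefixes so that Glue, Athena and Redshift Spectrum can prune partitions at query time. A small illustrative helper (the bucket and table names are hypothetical):

```python
from datetime import date

def partition_prefix(bucket, table, day):
    # Build a Hive-style year=/month=/day= partition prefix, the layout
    # commonly used so query engines can skip irrelevant partitions.
    return (f"s3://{bucket}/{table}/"
            f"year={day.year:04d}/month={day.month:02d}/day={day.day:02d}/")

print(partition_prefix("analytics-raw", "vehicle_events", date(2025, 5, 2)))
```

A Glue crawler pointed at the table root then discovers `year`, `month` and `day` as partition columns automatically.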

Posted 1 week ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


About the Job We are seeking a highly skilled AI/ML Engineer with expertise in AWS AI/ML services and a strong understanding of Generative AI using Amazon Bedrock. The ideal candidate will have experience in building, deploying, and optimizing AI/ML models on AWS, integrating LLMs into applications, and leveraging AWS services for scalable AI solutions. Experience Required - 4+ years Key Responsibilities Design, develop, and deploy AI/ML models on AWS, leveraging SageMaker, Bedrock, and related services. Build LLM-based applications using Amazon Bedrock and fine-tune models for specific use cases. Implement RAG (Retrieval-Augmented Generation) and integrate vector databases like OpenSearch, Pinecone, or FAISS. Develop scalable, production-ready ML pipelines using AWS services (Lambda, Step Functions, S3, DynamoDB, etc.). Utilize Bedrock, SageMaker, and custom fine-tuned models to deliver business-driven AI solutions. Work with cross-functional teams to integrate ML models into real-world applications. Ensure AI solutions adhere to best practices for security, compliance, and cost optimization. Stay updated with the latest trends in GenAI, prompt engineering, and AI model optimization. Required Skills Strong expertise in AWS AI/ML stack – Amazon Bedrock, SageMaker, Lambda, Step Functions, S3, DynamoDB, etc. Experience with Generative AI models (GPT, Claude, Mistral, LLaMA, etc.) and fine-tuning techniques. Hands-on experience in Python, TensorFlow, PyTorch, or Hugging Face. Knowledge of vector databases and embedding models. Experience in building secure and scalable AI applications using AWS. Familiarity with MLOps practices, CI/CD for ML models, and cloud automation. Strong problem-solving skills and ability to work in a fast-paced environment. Good to Have Experience with LangChain, Prompt Engineering, and RAG techniques. Understanding of data governance, AI ethics, and responsible AI practices. 
Certification in AWS Machine Learning Specialty/Associate or relevant AI certifications.
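The RAG technique this role calls for augments the prompt with retrieved context before calling a model such as those on Bedrock. A model-agnostic sketch of the prompt-assembly step (the instruction wording and character budget are illustrative, not a Bedrock API):

```python
def build_rag_prompt(question, chunks, max_chars=600):
    # Concatenate retrieved chunks into a context section, stopping once a
    # character budget is reached, then append the user question - the
    # "augmented" part of retrieval-augmented generation.
    context, used = [], 0
    for c in chunks:
        if used + len(c) > max_chars:
            break
        context.append(c)
        used += len(c)
    joined = "\n---\n".join(context)
    return (f"Answer using only the context below.\n\n"
            f"Context:\n{joined}\n\nQuestion: {question}")
```

The resulting string would be sent to the model invocation API; the budget stands in for a real token limit.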

Posted 1 week ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Project Role : Software Development Lead Project Role Description : Develop and configure software systems either end-to-end or for a specific stage of product lifecycle. Apply knowledge of technologies, applications, methodologies, processes and tools to support a client, project or entity. Must have skills : Python (Programming Language) Good to have skills : AWS Administration Minimum 5 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As a Software Engineer with Python expertise, you will develop data-driven applications on AWS. Responsible for the creation of scalable data pipelines and algorithms to process and deliver actionable vehicle data insights. Roles & Responsibilities: 1. Lead the design and development of Python based applications and services 2. Architect and implement cloud-native solutions using AWS services 3. Mentor and guide the Python development team, promoting best practices and code quality 4. Collaborate with data scientists and analysts to implement data processing pipelines 5. Participate in architecture discussions and contribute to technical decision-making 6. Ensure the scalability, reliability, and performance of Python applications on AWS 7. Stay current with Python ecosystem developments, AWS services, and industry best practices Professional & Technical Skills: 1. Python Programming. 2. Web framework expertise (Django, Flask, or FastAPI) 3. Data processing and analysis 4. Database technologies (SQL and NoSQL) 5. API development 6. Significant experience working with AWS Lambda 7. AWS services (e.g., EC2, S3, RDS, Lambda, SageMaker, EMR) with Any AWS certification is a plus. 8. Infrastructure as Code (e.g., AWS CloudFormation, Terraform) 9. Test-Driven Development (TDD) 10. DevOps practices 11. Agile methodologies. 12. Experience with big data technologies and data warehousing solutions on AWS (e.g., Redshift, EMR, Athena). 13. 
Strong knowledge of AWS platform and services (e.g., EC2, S3, RDS, Lambda, API Gateway, VPC, IAM). Additional Information: 1. The candidate should have a minimum of 5 years of experience in Python Programming. 2. This position is based at our Hyderabad office. 3. A 15 years full time education is required (Bachelor of Computer Science or any related stream; master's degree preferred.)
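Several of these postings emphasize significant AWS Lambda experience; a Lambda handler is just a function that takes an event dict and returns a response. A self-contained sketch (the event shape mimics an API Gateway proxy payload; the field names are illustrative):

```python
import json

def handler(event, context=None):
    # Minimal AWS-Lambda-style handler: parse a JSON body, compute a
    # derived vehicle-data field, and return an API-Gateway-shaped response.
    body = json.loads(event.get("body", "{}"))
    speeds = body.get("speeds_kmh", [])
    avg = sum(speeds) / len(speeds) if speeds else 0.0
    return {
        "statusCode": 200,
        "body": json.dumps({"avg_speed_kmh": round(avg, 1)}),
    }

resp = handler({"body": json.dumps({"speeds_kmh": [40, 60, 80]})})
print(resp["body"])
```

Locally the function is plain Python and unit-testable; deployed, API Gateway supplies the event and Lambda supplies the context.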

Posted 1 week ago

Apply


10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Responsibilities: Evaluate and source appropriate cloud infrastructure solutions for machine learning needs, ensuring cost-effectiveness and scalability based on project requirements. Automate and manage the deployment of machine learning models into production environments, ensuring version control for models and datasets using tools like Docker and Kubernetes. Set up monitoring tools to track model performance and data drift, conduct regular maintenance, and implement updates for production models. Work closely with data scientists, software engineers, and stakeholders to align on project goals, facilitate knowledge sharing, and communicate findings and updates to cross-functional teams. Design, implement, and maintain scalable ML infrastructure, optimizing cloud and on-premise resources for training and inference. Document ML processes, pipelines, and best practices while preparing reports on model performance, resource utilization, and system issues. Provide training and support for team members on ML Ops tools and methodologies, and stay updated on industry trends and emerging technologies. Diagnose and resolve issues related to model performance, infrastructure, and data quality, implementing solutions to enhance model robustness and reliability. Education, Technical Skills & Other Critical Requirements: 10+ years of relevant experience in AI/analytics product & solution delivery Bachelor's/Master's degree in Information Technology, Computer Science, Engineering, or an equivalent field. Proficiency in frameworks such as TensorFlow, PyTorch, or Scikit-learn. Strong skills in Python and/or R; familiarity with Java, Scala, or Go is a plus. Experience with cloud services such as AWS, Azure, or Google Cloud Platform, particularly in ML services (e.g., AWS SageMaker, Azure ML). CI/CD tools (e.g., Jenkins, GitLab CI), containerization (e.g., Docker), and orchestration (e.g., Kubernetes).
Experience with databases (SQL and NoSQL), data pipelines, ETL processes, ML pipeline orchestration (Airflow) Familiarity with monitoring and logging tools such as Prometheus, Grafana, or ELK stack. Proficient in using Git for version control. Strong analytical and troubleshooting abilities to diagnose and resolve issues effectively. Good communication skills for working with cross-functional teams and conveying technical concepts to non-technical stakeholders. Ability to manage multiple projects and prioritize tasks in a fast-paced environment.
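The model compression work this role mentions (quantization) maps float weights to small integers plus a scale and zero point. A minimal affine-quantization sketch in pure Python; a real MLOps stack would use framework tooling (e.g., TensorFlow Lite or PyTorch quantization) rather than hand-rolled code:

```python
def quantize(values, bits=8):
    # Affine (scale + zero-point) quantization of floats to unsigned ints,
    # the basic idea behind post-training model compression.
    lo, hi = min(values), max(values)
    qmax = (1 << bits) - 1
    scale = (hi - lo) / qmax if hi != lo else 1.0
    q = [round((v - lo) / scale) for v in values]
    return q, scale, lo

def dequantize(q, scale, zero):
    # Recover approximate floats; error is bounded by about scale / 2.
    return [x * scale + zero for x in q]

weights = [-1.0, -0.25, 0.0, 0.5, 1.0]
q, scale, zero = quantize(weights)
restored = dequantize(q, scale, zero)
```

Each weight now needs one byte instead of four (or eight), at the cost of a small, bounded reconstruction error.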

Posted 1 week ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


🚨 We are Hiring 🚨 https://grhombustech.com/jobs/job-description-senior-test-automation-lead-playwright-ai-ml-focus/ Job Description Job Title: Senior Test Automation Lead – Playwright (AI/ML Focus) Location: Hyderabad Experience: 10 - 12 years Job Type: Full-Time Company Overview: GRhombus Technologies Pvt Ltd is a pioneer in software solutions, especially Test Automation, Cyber Security, Full Stack Development, DevOps, Salesforce, Performance Testing and Manual Testing. GRhombus delivery centres are located in India at Hyderabad, Chennai, Bengaluru and Pune. In the Middle East, we are located in Dubai. Our partner offices are located in the USA and the Netherlands. About the Role: We are seeking a passionate and technically skilled Senior Test Automation Lead with deep experience in Playwright-based frameworks and a solid understanding of AI/ML-driven applications. In this role, you will lead the automation strategy and quality engineering practices for next-generation AI products that integrate large-scale machine learning models, data pipelines, and dynamic, intelligent UIs. You will define, architect, and implement scalable automation solutions across AI-enhanced features such as recommendation engines, conversational UIs, real-time analytics, and predictive workflows, ensuring both functional correctness and intelligent behavior consistency. Key Responsibilities: Test Automation Framework Design & Implementation Design and implement robust, modular, and extensible Playwright automation frameworks using TypeScript/JavaScript. Define automation design patterns and utilities that can handle complex AI-driven UI behaviors (e.g., dynamic content, personalization, chat interfaces). Implement abstraction layers for easy test data handling, reusable components, and multi-browser/platform execution. AI/ML-Specific Testing Strategy Partner with Data Scientists and ML Engineers to understand model behaviors, inference workflows, and output formats.
Develop strategies for testing non-deterministic model outputs (e.g., chat responses, classification labels) using tolerance ranges, confidence intervals, or golden datasets. Design tests to validate ML integration points: REST/gRPC API calls, feature flags, model versioning, and output accuracy. Include bias, fairness, and edge-case validations in test suites where applicable (e.g., fairness in recommendation engines or NLP sentiment analysis). End-to-End Test Coverage Lead the implementation of end-to-end automation for: Web interfaces (React, Angular, or other SPA frameworks) Backend services (REST, GraphQL, WebSockets) ML model integration endpoints (real-time inference APIs, batch pipelines) Build test utilities for mocking, stubbing, and simulating AI inputs and datasets. CI/CD & Tooling Integration Integrate automation suites into CI/CD pipelines using GitHub Actions, Jenkins, GitLab CI, or similar. Configure parallel execution, containerized test environments (e.g., Docker), and test artifact management. Establish real-time dashboards and historical reporting using tools like Allure, ReportPortal, TestRail, or custom Grafana integrations. Quality Engineering & Leadership Define KPIs and QA metrics for AI/ML product quality: functional accuracy, model regression rates, test coverage %, time-to-feedback, etc. Lead and mentor a team of automation and QA engineers across multiple projects. Act as the Quality Champion across the AI platform by influencing engineering, product, and data science teams on quality ownership and testing best practices. Agile & Cross-Functional Collaboration Work in Agile/Scrum teams; participate in backlog grooming, sprint planning, and retrospectives. Collaborate across disciplines: Frontend, Backend, DevOps, MLOps, and Product Management to ensure complete testability. Review feature specs, AI/ML model update notes, and data schemas for impact analysis. 
Required Skills and Qualifications: Technical Skills: Strong hands-on expertise with Playwright (TypeScript/JavaScript). Experience building custom automation frameworks and utilities from scratch. Proficiency in testing AI/ML-integrated applications: inference endpoints, personalization engines, chatbots, or predictive dashboards. Solid knowledge of HTTP protocols, API testing (Postman, Supertest, RestAssured). Familiarity with MLOps and model lifecycle management (e.g., via MLflow, SageMaker, Vertex AI). Experience in testing data pipelines (ETL, streaming, batch), synthetic data generation, and test data versioning. Domain Knowledge: Exposure to NLP, CV, recommendation engines, time-series forecasting, or tabular ML models. Understanding of key ML metrics (precision, recall, F1-score, AUC), model drift, and concept drift. Knowledge of bias/fairness auditing, especially in UI/UX contexts where AI decisions are shown to users. Leadership & Communication: Proven experience leading QA/Automation teams (4+ engineers). Strong documentation, code review, and stakeholder communication skills. Experience collaborating in Agile/SAFe environments with cross-functional teams. Preferred Qualifications: Experience with AI Explainability frameworks like LIME, SHAP, or What-If Tool. Familiarity with Test Data Management platforms (e.g., Tonic.ai, Delphix) for ML training/inference data. Background in performance and load testing for AI systems using tools like Locust, JMeter, or k6. Experience with GraphQL, Kafka, or event-driven architecture testing. QA Certifications (ISTQB, Certified Selenium Engineer) or cloud certifications (AWS, GCP, Azure). Education: Bachelor’s or Master’s degree in Computer Science, Software Engineering, or related technical discipline. Bonus for certifications or formal training in Machine Learning, Data Science, or MLOps. Why Join Us? 
At GRhombus, we are redefining quality assurance and software testing with cutting-edge methodologies and a commitment to innovation. As a Test Automation Lead, you will play a pivotal role in shaping the future of automated testing, optimizing frameworks, and driving efficiency across our engineering ecosystem. Be part of a workplace that values experimentation, learning, and professional growth. Contribute to an organisation where your ideas drive innovation and make a tangible impact.
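The strategy described for testing non-deterministic model outputs (golden datasets with tolerance ranges) can be as simple as bounding per-class drift against a recorded golden prediction. An illustrative sketch; the class names, probabilities and threshold are made up:

```python
def close_enough(predicted, golden, tolerance=0.1):
    # Compare a model's class-probability dict against a golden record,
    # allowing per-class drift up to `tolerance` instead of exact equality.
    if set(predicted) != set(golden):
        return False
    return all(abs(predicted[k] - golden[k]) <= tolerance for k in golden)

golden = {"positive": 0.80, "negative": 0.15, "neutral": 0.05}
run_a = {"positive": 0.76, "negative": 0.18, "neutral": 0.06}   # within tolerance
run_b = {"positive": 0.55, "negative": 0.40, "neutral": 0.05}   # drifted too far
```

In a Playwright or API test suite this predicate replaces a brittle exact-match assertion, so routine model retraining does not break the pipeline while genuine regressions still fail.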

Posted 1 week ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Project Role : Software Development Lead Project Role Description : Develop and configure software systems either end-to-end or for a specific stage of product lifecycle. Apply knowledge of technologies, applications, methodologies, processes and tools to support a client, project or entity. Must have skills : Python (Programming Language) Good to have skills : AWS Architecture Minimum 5 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As a Software Engineer with Python expertise, you will develop data-driven applications on AWS. Responsible for the creation of scalable data pipelines and algorithms to process and deliver actionable vehicle data insights. Roles & Responsibilities: 1. Lead the design and development of Python based applications and services 2. Architect and implement cloud-native solutions using AWS services 3. Mentor and guide the Python development team, promoting best practices and code quality 4. Collaborate with data scientists and analysts to implement data processing pipelines 5. Participate in architecture discussions and contribute to technical decision-making 6. Ensure the scalability, reliability, and performance of Python applications on AWS 7. Stay current with Python ecosystem developments, AWS services, and industry best practices Professional & Technical Skills: 1. Python Programming 2. Web framework expertise (Django, Flask, or FastAPI) 3. Data processing and analysis 4. Database technologies (SQL and NoSQL) 5. API development 6. Significant experience working with AWS Lambda 7. AWS services (e.g., EC2, S3, RDS, Lambda, SageMaker, EMR); any AWS certification is a plus. 8. Infrastructure as Code (e.g., AWS CloudFormation, Terraform) 9. Test-Driven Development (TDD) 10. DevOps practices 11. Agile methodologies. 12. Experience with big data technologies and data warehousing solutions on AWS (e.g., Redshift, EMR, Athena). 13.
Strong knowledge of AWS platform and services (e.g., EC2, S3, RDS, Lambda, API Gateway, VPC, IAM). Additional Information: 1. The candidate should have a minimum of 5 years of experience in Python Programming. 2. This position is based at our Hyderabad office. 3. A 15 years full time education is required (Bachelor of Computer Science or any related stream; master's degree preferred.)

Posted 1 week ago

Apply

7.5 years

0 Lacs

Hyderabad, Telangana, India

On-site


Project Role : Software Development Lead Project Role Description : Develop and configure software systems either end-to-end or for a specific stage of product lifecycle. Apply knowledge of technologies, applications, methodologies, processes and tools to support a client, project or entity. Must have skills : Python (Programming Language) Good to have skills : NA Minimum 7.5 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As a Software Engineer with Python expertise, you will develop data-driven applications on AWS. Responsible for the creation of scalable data pipelines and algorithms to process and deliver actionable vehicle data insights. Roles & Responsibilities: 1. Lead the design and development of Python based applications and services 2. Architect and implement cloud-native solutions using AWS services 3. Mentor and guide the Python development team, promoting best practices and code quality 4. Collaborate with data scientists and analysts to implement data processing pipelines 5. Participate in architecture discussions and contribute to technical decision-making 6. Ensure the scalability, reliability, and performance of Python applications on AWS 7. Stay current with Python ecosystem developments, AWS services, and industry best practices Professional & Technical Skills: 1. Python Programming. 2. Web framework expertise (Django, Flask, or FastAPI) 3. Data processing and analysis 4. Database technologies (SQL and NoSQL) 5. API development 6. Significant experience working with AWS Lambda 7. AWS services (e.g., EC2, S3, RDS, Lambda, SageMaker, EMR) with Any AWS certification is a plus. 8. Infrastructure as Code (e.g., AWS CloudFormation, Terraform) 9. Test-Driven Development (TDD) 10. DevOps practices 11. Agile methodologies. 12. Experience with big data technologies and data warehousing solutions on AWS (e.g., Redshift, EMR, Athena). 13. 
Strong knowledge of AWS platform and services (e.g., EC2, S3, RDS, Lambda, API Gateway, VPC, IAM). Additional Information: 1. The candidate should have a minimum of 5 years of experience in Python Programming. 2. This position is based at our Hyderabad office. 3. A 15 years full time education is required (Bachelor of Computer Science or any related stream; master's degree preferred.)

Posted 1 week ago

Apply

5.0 - 8.0 years

0 Lacs

India

Remote


Role: Data Science Developer Location : Remote Responsibilities : Develop and productionize cloud-based services and full-stack applications utilizing NLP solutions, including GenAI models. Implement and manage CI/CD pipelines to ensure efficient and reliable software delivery. Automate cloud infrastructure using Terraform. Write unit tests, integration tests and performance tests Work in a team environment using agile practices Support administration of the Data Science experimentation environment, including AWS SageMaker and Nvidia GPU servers Monitor and optimize application performance and infrastructure costs. Collaborate with data scientists and other developers to integrate and deploy data science models into production environments Educate others to improve coding standards, code quality, test coverage and documentation Work closely with cross-functional teams to ensure seamless integration and operation of services. What We’re Looking For : Basic Required Qualifications : 5-8 years of experience in software engineering Proficiency in Python and JavaScript for full-stack development. Experience in writing and maintaining high quality code, utilizing techniques like unit testing and code reviews Strong understanding of object-oriented design and programming concepts Strong experience with AWS cloud services, including EKS, Lambda, and S3. Knowledge of Docker containers and orchestration tools including Kubernetes Experience with monitoring, logging, and tracing tools (e.g., Datadog, Kibana, Grafana). Knowledge of message queues and event-driven architectures (e.g., AWS SQS, Kafka). Experience with CI/CD pipelines in Azure DevOps and GitHub Actions. Additional Preferred Qualifications : Experience writing front-end web applications using JavaScript and React Familiarity with infrastructure as code (IaC) using Terraform.
Experience with Azure or GCP cloud services Proficiency in C# or Java Experience with SQL and NoSQL databases Knowledge of Machine Learning concepts Experience with Large Language Models

Posted 1 week ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Project Role : Software Development Engineer Project Role Description : Analyze, design, code and test multiple components of application code across one or more clients. Perform maintenance, enhancements and/or development work. Must have skills : Python (Programming Language) Good to have skills : NA Minimum 3 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As a Software Engineer with Python expertise, you will develop data-driven applications on AWS. Responsible for the creation of scalable data pipelines and algorithms to process and deliver actionable vehicle data insights. Roles & Responsibilities: 1. Lead the design and development of Python based applications and services 2. Architect and implement cloud-native solutions using AWS services 3. Mentor and guide the Python development team, promoting best practices and code quality 4. Collaborate with data scientists and analysts to implement data processing pipelines 5. Participate in architecture discussions and contribute to technical decision-making 6. Ensure the scalability, reliability, and performance of Python applications on AWS 7. Stay current with Python ecosystem developments, AWS services, and industry best practices Professional & Technical Skills: 1. Python Programming. 2. Web framework expertise (Django, Flask, or FastAPI) 3. Data processing and analysis 4. Database technologies (SQL and NoSQL) 5. API development 6. Significant experience working with AWS Lambda 7. AWS services (e.g., EC2, S3, RDS, Lambda, SageMaker, EMR) with Any AWS certification is a plus. 8. Infrastructure as Code (e.g., AWS CloudFormation, Terraform) 9. Test-Driven Development (TDD) 10. DevOps practices 11. Agile methodologies. 12. Experience with big data technologies and data warehousing solutions on AWS (e.g., Redshift, EMR, Athena). 13. Strong knowledge of AWS platform and services (e.g., EC2, S3, RDS, Lambda, API Gateway, VPC, IAM). Additional Information: 1. 
The candidate should have a minimum of 5 years of experience in Python Programming. 2. This position is based at our Hyderabad office. 3. A 15 years full time education is required (Bachelor of Computer Science or any related stream; master's degree preferred.)

Posted 1 week ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Project Role : Software Development Engineer Project Role Description : Analyze, design, code and test multiple components of application code across one or more clients. Perform maintenance, enhancements and/or development work. Must have skills : Python (Programming Language) Good to have skills : NA Minimum 5 Year(s) Of Experience Is Required Educational Qualification : Bachelor of Engineering in Electronics or any related stream Summary: As a Sr. Full Stack Engineer, you will develop data-driven applications on AWS for the client. Responsible for the creation of scalable data pipelines and algorithms to process and deliver actionable vehicle data insights. Roles & Responsibilities: 1. Lead the design and development of Python based applications and services 2. Architect and implement cloud-native solutions using AWS services 3. Mentor and guide the Python development team, promoting best practices and code quality 4. Collaborate with data scientists and analysts to implement data processing pipelines 5. Participate in architecture discussions and contribute to technical decision-making 6. Ensure the scalability, reliability, and performance of Python applications on AWS 7. Stay current with Python ecosystem developments, AWS services, and industry best practices. Professional & Technical Skills: 1. At least 5 years of experience in Python Programming with Web framework expertise (Django, Flask, or FastAPI). 2. Exposure to database technologies (SQL and NoSQL) and API development. 3. Significant experience working with AWS services (e.g., EC2, S3, RDS, Lambda, SageMaker, EMR) and Infrastructure as Code (e.g., AWS CloudFormation, Terraform). 4. Exposure to Test-Driven Development (TDD) 5. Practices DevOps in software delivery and is well-versed in Agile methodologies. 6. AWS certification is a plus. 7. Well-developed analytical skills; rigorous but pragmatic, able to justify decisions with solid rationale. Additional Information: 1.
The candidate should have a minimum of 5 years of experience in Python Programming. 2. This position is based at our Hyderabad office 3. A 15 years full time education is required (bachelor’s degree in computer science, Software Engineering, or related field). Show more Show less

Posted 1 week ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Project Role: Software Development Engineer
Project Role Description: Analyze, design, code and test multiple components of application code across one or more clients. Perform maintenance, enhancements and/or development work.
Must-have skills: Python (Programming Language)
Good-to-have skills: NA
Minimum 3 years of experience is required
Educational Qualification: Bachelor of Engineering in Electronics or any related stream

Summary: As an IoT Engineer with Python expertise, you will develop data-driven applications on AWS IoT for the client, and be responsible for creating scalable data pipelines and algorithms that process and deliver actionable vehicle data insights.

Roles & Responsibilities:
1. Lead the design and development of Python-based applications and services
2. Architect and implement cloud-native solutions using AWS services
3. Collaborate with data scientists and analysts to implement data processing pipelines
4. Participate in architecture discussions and contribute to technical decision-making
5. Ensure the scalability, reliability, and performance of Python applications on AWS
6. Stay current with Python ecosystem developments, AWS services, and industry best practices

Professional & Technical Skills:
1. At least 3 years of experience in Python programming, including integration with AWS IoT Core
2. Exposure to database technologies (SQL and NoSQL) and API development
3. Significant experience with AWS services (e.g., EC2, S3, RDS, Lambda, SageMaker, EMR) and Infrastructure as Code (e.g., AWS CloudFormation, Terraform)
4. Exposure to Test-Driven Development (TDD)
5. Practices DevOps in software delivery and is well-versed in Agile methodologies
6. AWS certification is a plus
7. Well-developed analytical skills; rigorous but pragmatic, able to justify decisions with solid rationale

Additional Information:
1. The candidate should have a minimum of 3 years of experience in Python programming
2. This position is based at our Hyderabad office
3. 15 years of full-time education is required (bachelor's degree in Computer Science, Software Engineering, or a related field)
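The AWS IoT Core integration this listing names typically means consuming MQTT telemetry messages. As a sketch only, here is how one such vehicle-telemetry payload might be validated and normalised once it arrives; the message schema and unit convention are invented for illustration.

```python
# Sketch of handling a vehicle-telemetry payload as it might arrive from an
# AWS IoT Core MQTT topic; the message schema here is hypothetical.
import json

REQUIRED_FIELDS = {"vin", "timestamp", "speed_kmh"}

def parse_telemetry(raw: bytes) -> dict:
    """Decode one MQTT payload and reject messages missing required fields."""
    msg = json.loads(raw)
    missing = REQUIRED_FIELDS - msg.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    # Normalise units for downstream pipelines (hypothetical convention).
    msg["speed_ms"] = msg["speed_kmh"] / 3.6
    return msg
```

In a real deployment this logic would sit in a Lambda function or rules-engine action subscribed to the topic; here it is kept as plain Python so the idea stands on its own.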

Posted 1 week ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Project Role: Software Development Engineer
Project Role Description: Analyze, design, code and test multiple components of application code across one or more clients. Perform maintenance, enhancements and/or development work.
Must-have skills: Python (Programming Language)
Good-to-have skills: NA
Minimum 3 years of experience is required
Educational Qualification: 15 years full time education

Summary: As a Software Engineer with Python expertise, you will develop data-driven applications on AWS, and be responsible for creating scalable data pipelines and algorithms that process and deliver actionable vehicle data insights.

Roles & Responsibilities:
1. Lead the design and development of Python-based applications and services
2. Architect and implement cloud-native solutions using AWS services
3. Mentor and guide the Python development team, promoting best practices and code quality
4. Collaborate with data scientists and analysts to implement data processing pipelines
5. Participate in architecture discussions and contribute to technical decision-making
6. Ensure the scalability, reliability, and performance of Python applications on AWS
7. Stay current with Python ecosystem developments, AWS services, and industry best practices

Professional & Technical Skills:
1. Python programming
2. Web framework expertise (Django, Flask, or FastAPI)
3. Data processing and analysis
4. Database technologies (SQL and NoSQL)
5. API development
6. Significant experience working with AWS Lambda
7. AWS services (e.g., EC2, S3, RDS, Lambda, SageMaker, EMR); any AWS certification is a plus
8. Infrastructure as Code (e.g., AWS CloudFormation, Terraform)
9. Test-Driven Development (TDD)
10. DevOps practices
11. Agile methodologies
12. Experience with big data technologies and data warehousing solutions on AWS (e.g., Redshift, EMR, Athena)
13. Strong knowledge of the AWS platform and services (e.g., EC2, S3, RDS, Lambda, API Gateway, VPC, IAM)

Additional Information:
1. The candidate should have a minimum of 3 years of experience in Python programming
2. This position is based at our Hyderabad office
3. 15 years of full-time education is required (bachelor's degree in Computer Science or a related stream; master's degree preferred)
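The "data processing and analysis" skill in this listing usually means pandas work over tabular data. As a small, hedged sketch, the following aggregates invented trip records per vehicle; the column names and figures are hypothetical.

```python
# Sketch of a pandas aggregation over vehicle trip data; the column
# names and numbers are invented for illustration.
import pandas as pd

trips = pd.DataFrame({
    "vin": ["V1", "V1", "V2"],
    "distance_km": [12.0, 8.0, 30.0],
    "fuel_l": [1.0, 0.6, 2.4],
})

# Per-vehicle totals, plus a derived consumption metric (litres per 100 km).
summary = trips.groupby("vin").sum()
summary["l_per_100km"] = summary["fuel_l"] / summary["distance_km"] * 100
```

The same groupby/derive pattern scales from a local DataFrame to EMR or Athena-backed datasets, which is why listings pair it with the AWS analytics stack.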

Posted 1 week ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Project Role: Software Development Engineer
Project Role Description: Analyze, design, code and test multiple components of application code across one or more clients. Perform maintenance, enhancements and/or development work.
Must-have skills: Python (Programming Language)
Good-to-have skills: NA
Minimum 3 years of experience is required
Educational Qualification: Bachelor of Engineering in Electronics or any related stream

Summary: As a Sr. Backend Engineer, you will develop data-driven applications on AWS for the client, and be responsible for creating scalable data pipelines and algorithms that process and deliver actionable vehicle data insights.

Roles & Responsibilities:
1. Lead the design and development of Python-based applications and services
2. Architect and implement cloud-native solutions using AWS services
3. Mentor and guide the Python development team, promoting best practices and code quality
4. Collaborate with data scientists and analysts to implement data processing pipelines
5. Participate in architecture discussions and contribute to technical decision-making
6. Ensure the scalability, reliability, and performance of Python applications on AWS
7. Stay current with Python ecosystem developments, AWS services, and industry best practices

Professional & Technical Skills:
1. At least 3 years of experience in Python programming, with web framework expertise (Django, Flask, or FastAPI)
2. Exposure to database technologies (SQL and NoSQL) and API development
3. Significant experience with AWS services (e.g., EC2, S3, RDS, Lambda, SageMaker, EMR) and Infrastructure as Code (e.g., AWS CloudFormation, Terraform)
4. Exposure to Test-Driven Development (TDD)
5. Practices DevOps in software delivery and is well-versed in Agile methodologies
6. AWS certification is a plus
7. Well-developed analytical skills; rigorous but pragmatic, able to justify decisions with solid rationale

Additional Information:
1. The candidate should have a minimum of 3 years of experience in Python programming
2. This position is based at our Hyderabad office
3. 15 years of full-time education is required (bachelor's degree in Computer Science, Software Engineering, or a related field)

Posted 1 week ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Project Role: Software Development Engineer
Project Role Description: Analyze, design, code and test multiple components of application code across one or more clients. Perform maintenance, enhancements and/or development work.
Must-have skills: Python (Programming Language)
Good-to-have skills: NA
Minimum 3 years of experience is required
Educational Qualification: 15 years full time education

Summary: As a Software Engineer with Python expertise, you will develop data-driven applications on AWS, and be responsible for creating scalable data pipelines and algorithms that process and deliver actionable vehicle data insights.

Roles & Responsibilities:
1. Lead the design and development of Python-based applications and services
2. Architect and implement cloud-native solutions using AWS services
3. Mentor and guide the Python development team, promoting best practices and code quality
4. Collaborate with data scientists and analysts to implement data processing pipelines
5. Participate in architecture discussions and contribute to technical decision-making
6. Ensure the scalability, reliability, and performance of Python applications on AWS
7. Stay current with Python ecosystem developments, AWS services, and industry best practices

Professional & Technical Skills:
1. Python programming
2. Web framework expertise (Django, Flask, or FastAPI)
3. Data processing and analysis
4. Database technologies (SQL and NoSQL)
5. API development
6. Significant experience working with AWS Lambda
7. AWS services (e.g., EC2, S3, RDS, Lambda, SageMaker, EMR); any AWS certification is a plus
8. Infrastructure as Code (e.g., AWS CloudFormation, Terraform)
9. Test-Driven Development (TDD)
10. DevOps practices
11. Agile methodologies
12. Experience with big data technologies and data warehousing solutions on AWS (e.g., Redshift, EMR, Athena)
13. Strong knowledge of the AWS platform and services (e.g., EC2, S3, RDS, Lambda, API Gateway, VPC, IAM)

Additional Information:
1. The candidate should have a minimum of 3 years of experience in Python programming
2. This position is based at our Hyderabad office
3. 15 years of full-time education is required (bachelor's degree in Computer Science or a related stream; master's degree preferred)

Posted 1 week ago

Apply

7.0 years

0 Lacs

India

On-site


WhizzHR is hiring: Media Solution Architect – AI/ML & Automation Focus

Role Summary: We are seeking a Media Solution Architect to lead the strategic design of AI-driven, automation-centric solutions across digital media operations. This role involves architecting intelligent, scalable systems that improve efficiency across campaign setup, trafficking, reporting, QA, and billing processes. The ideal candidate brings a strong blend of automation, AI/ML, and digital marketing expertise to drive innovation and operational excellence.

Key Responsibilities:
- Identify and assess opportunities to apply AI/ML and automation across media operations workflows (e.g., intelligent campaign setup, anomaly detection in QA, dynamic taxonomy validation).
- Design scalable, intelligent architectures combining machine learning models, RPA, Python-based automation, and media APIs (e.g., Meta, DV360, YouTube).
- Develop or integrate machine learning models for use cases such as performance prediction, media mix modeling, and anomaly detection in reporting or billing.
- Ensure adherence to best practices in data governance, compliance, and security, particularly around AI system usage.
- Partner with business stakeholders to prioritize high-impact AI/automation use cases and define clear ROI and success metrics.
- Stay informed on emerging trends in AI/ML and translate innovations into actionable media solutions.

Ideal Profile:
- 7+ years of experience in automation, AI/ML, or data science, including 3+ years in marketing, ad tech, or digital media.
- Strong understanding of machine learning frameworks for predictive modeling, anomaly detection, and NLP-based insight generation.
- Proficiency in Python and libraries such as scikit-learn, TensorFlow, pandas, or PyTorch.
- Experience with cloud-based AI platforms (e.g., Google Vertex AI, Azure ML, AWS SageMaker) and media API integrations.
- Ability to architect AI-enhanced automations that improve forecasting, QA, and decision-making in media operations.
- Familiarity with RPA tools (e.g., UiPath, Automation Anywhere); AI-first automation experience is a plus.
- Demonstrated success in developing or deploying ML models for campaign optimization, fraud detection, or process intelligence.
- Familiarity with digital media ecosystems such as Google Ads, Meta, TikTok, DSPs, and ad servers.
- Excellent communication and stakeholder management skills, with the ability to translate technical solutions into business value.

Kindly share your resume at Hello@whizzhr.com
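The anomaly-detection use case this listing mentions for reporting and billing can be illustrated with scikit-learn's `IsolationForest`. This is a hedged sketch: the daily-spend figures are synthetic and the contamination rate is an assumed tuning choice, not anything specified by the role.

```python
# Sketch of anomaly detection on daily media-spend figures using
# scikit-learn's IsolationForest; all numbers are synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
spend = rng.normal(1000, 50, size=(100, 1))  # typical daily spend, one feature
spend[-1] = 5000                             # one obvious billing anomaly

# contamination is the assumed share of anomalous days (a tuning choice).
model = IsolationForest(contamination=0.01, random_state=0).fit(spend)
labels = model.predict(spend)                # -1 = anomaly, 1 = normal
```

In a production pipeline the flagged days would feed a QA queue or billing alert rather than being inspected by hand.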

Posted 1 week ago

Apply

7.5 years

0 Lacs

Pune, Maharashtra, India

On-site


Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: AWS Glue
Good-to-have skills: NA
Minimum 7.5 years of experience is required
Educational Qualification: 15 years full time education

Summary: As part of a data transformation programme, you will join the Data Marketplace team. You will be responsible for the architecture and design of automated data management compliance validation, monitoring, and reporting through rule-based and AI-driven mechanisms, integrating with metadata repositories and governance tools for real-time policy enforcement. You will also deliver design specifications for real-time metadata integration, enhanced automation, audit logging, monitoring capabilities, and lifecycle management (including version control, decommissioning, and rollback). Experience implementing and adapting data management and data governance controls around data product implementations, preferably on AWS, is preferred; experience with AI is appreciated. Example skills: Data Architecture, Data Marketplace, Data Governance, Data Engineering, AWS DataZone, AWS SageMaker Unified Studio.

As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure project milestones are met, facilitating discussions to address challenges, and guiding your team in implementing effective solutions. You will also engage in strategic planning sessions to align project goals with organizational objectives, ensuring that all stakeholders are informed and involved in decision-making. The role requires a balance of technical expertise and leadership skills to drive project success and foster a collaborative team environment.

Roles & Responsibilities:
- Act as a subject matter expert (SME).
- Collaborate with and manage the team to perform.
- Take responsibility for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor project progress and implement necessary adjustments to meet deadlines.

Professional & Technical Skills:
- Must-have: proficiency in AWS Glue.
- Strong understanding of data integration and ETL processes.
- Experience with cloud computing platforms and services.
- Familiarity with data warehousing concepts and best practices.
- Ability to troubleshoot and optimize data workflows.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in AWS Glue.
- This position is based in Pune.
- 15 years of full-time education is required.
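The rule-based compliance validation this listing describes boils down to running a set of policy checks over each data product's metadata record. As a minimal sketch under assumed conventions: the rule set, field names, and record layout below are hypothetical, not taken from any governance tool named above.

```python
# Sketch of rule-based compliance validation for data-product metadata;
# the rule set and record layout are hypothetical.

RULES = [
    ("owner_present",  lambda m: bool(m.get("owner"))),
    ("pii_classified", lambda m: m.get("contains_pii") in (True, False)),
    ("retention_set",  lambda m: m.get("retention_days", 0) > 0),
]

def validate(metadata: dict) -> list[str]:
    """Return the names of every rule the metadata record violates."""
    return [name for name, check in RULES if not check(metadata)]
```

In the architecture the listing sketches, such checks would run against records pulled from a metadata repository (e.g., via AWS DataZone), with violations feeding audit logs and real-time policy enforcement.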

Posted 1 week ago

Apply

20.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Job Title: Staff AI Engineer – MLOps
Company: Rapid7
Team: AI Center of Excellence

Team Overview:
- Cross-functional team of data scientists and AI engineers
- Mission: leverage AI/ML to protect customer attack surfaces
- Partners with Detection and Response teams, including MDR
- Encourages creativity, collaboration, and research publication
- Draws on 20+ years of threat analysis and a growing patent portfolio

Tech Stack:
- Cloud/infra: AWS (SageMaker, Bedrock), EKS, Terraform
- Languages/tools: Python, Jupyter, NumPy, pandas, scikit-learn
- ML focus: anomaly detection, unlabeled data

Role Summary:
- Build and deploy ML production systems
- Manage end-to-end data pipelines and ensure data quality
- Implement ML guardrails and robust monitoring
- Deploy web apps and REST APIs with strong data security
- Share knowledge, mentor engineers, and collaborate cross-functionally
- Embrace agile, iterative development

Requirements:
- 8–12 years in software engineering (3+ in ML deployment on AWS)
- Strong in Python, Flask/FastAPI, and API development
- Skilled in CI/CD, Docker, Kubernetes, MLOps, and cloud AI tools
- Experience in data pre-processing, feature engineering, and model monitoring
- Strong communication and documentation skills
- Collaborative mindset and growth-oriented problem-solving

Preferred Qualifications:
- Experience with Java
- Background in the security industry
- Familiarity with AI/ML model operations and LLM experimentation
- Knowledge of model risk management (drift monitoring, hyperparameter tuning, registries)

About Rapid7: Rapid7 is committed to securing the digital world through passion, collaboration, and innovation. With over 10,000 customers globally, it offers a dynamic, growth-focused workplace and tackles major cybersecurity challenges with diverse teams and a mission-driven approach.
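One common check behind the "drift monitoring" this listing names is the Population Stability Index (PSI), which compares a model's training-time feature distribution against live traffic. The implementation below is a sketch, not Rapid7's method: the binning scheme and clipping floor are assumed conventions.

```python
# Sketch of drift monitoring via the Population Stability Index (PSI);
# the binning and clipping choices are assumptions, for illustration.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a training-time (expected) and live (actual) sample."""
    # Bin edges come from the reference distribution only.
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor each bucket share to avoid log(0) on empty bins.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))
```

A common rule of thumb treats PSI below about 0.1 as stable and above about 0.25 as significant drift worth an alert or retrain, with the thresholds tuned per model.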

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies