
14428 Orchestration Jobs - Page 23

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

We are seeking a highly skilled and hands-on AI Architect to lead the design and deployment of next-generation AI systems for our cutting-edge platform. You will be responsible for architecting scalable GenAI and machine learning solutions, establishing MLOps best practices, and ensuring robust security and cost-efficient operations across our AI-powered modules.

Primary Skills:
• System architecture for GenAI: design scalable pipelines using LLMs, RAG, and multi-agent orchestration (LangGraph, CrewAI, AutoGen).
• Machine-learning engineering: PyTorch or TensorFlow, Hugging Face Transformers.
• Retrieval & vector search: FAISS, Weaviate, Pinecone, pgvector; embedding selection and index tuning.
• Cloud infrastructure: AWS production experience (GPU instances, Bedrock / Vertex AI, EKS, IAM, KMS).
• MLOps & DevOps: MLflow / Kubeflow, Docker + Kubernetes, CI/CD, Terraform.
• Security & compliance: data encryption, RBAC, PII redaction in LLM prompts.
• Cost & performance optimisation: token-usage budgeting, caching, model routing.
• Stakeholder communication: ability to defend architectural decisions to the CTO, product, and investors.
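For readers unfamiliar with the retrieval-and-vector-search stack this posting lists (embeddings, FAISS, RAG), here is a minimal sketch of a retrieval step, assuming the faiss-cpu and sentence-transformers packages are available; the model name and corpus are illustrative placeholders, not part of the posting.

```python
# Minimal RAG retrieval sketch: embed documents, index them with FAISS,
# and fetch the top-k passages for a query. Model name and corpus are
# illustrative placeholders, not taken from the job posting.
import numpy as np
import faiss
from sentence_transformers import SentenceTransformer

corpus = [
    "Terraform modules provision the EKS cluster.",
    "Bedrock hosts the foundation model behind the chat service.",
    "Prompt templates are versioned alongside application code.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence-embedding model works
embeddings = model.encode(corpus, normalize_embeddings=True).astype("float32")

index = faiss.IndexFlatIP(embeddings.shape[1])   # inner product == cosine on normalized vectors
index.add(embeddings)

query = model.encode(["How is the cluster provisioned?"], normalize_embeddings=True).astype("float32")
scores, ids = index.search(query, 2)
for score, doc_id in zip(scores[0], ids[0]):
    print(f"{score:.3f}  {corpus[doc_id]}")
```

In a full pipeline the retrieved passages would be stuffed into an LLM prompt; the index choice (flat, IVF, HNSW) is where the "index tuning" the posting mentions comes in.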

Posted 4 days ago

Apply

7.0 years

3 - 6 Lacs

Bhopal

On-site

We are seeking a highly skilled and experienced Senior DevOps Engineer with a minimum of 7 years of professional experience, including at least 5 years in designing, implementing, and managing large-scale IT infrastructures on AWS and/or Azure. The ideal candidate must have strong hands-on expertise in Docker and Kubernetes, along with cloud-native architectures, automation, CI/CD, monitoring, and DevSecOps practices.

Key Responsibilities:
• Design, implement, and manage scalable and secure cloud infrastructure using AWS and/or Azure services.
• Build and manage CI/CD pipelines using tools such as Jenkins, GitLab CI, GitHub Actions, or Azure DevOps.
• Create and maintain Infrastructure as Code using tools like Terraform, CloudFormation, or ARM templates.
• Lead cloud architecture planning, capacity management, and disaster recovery implementations.
• Build, deploy, and orchestrate containers using Docker and Kubernetes (EKS/AKS).
• Implement and manage observability stacks using Prometheus, Grafana, ELK Stack, CloudWatch, Azure Monitor, etc.
• Ensure cloud security and governance policies are implemented and followed.
• Optimize infrastructure performance and cost on cloud environments.
• Mentor team members and promote DevOps best practices across the organization.
• Troubleshoot infrastructure, application, and network issues in production and development environments.

Mandatory Skills & Qualifications:
• B.E./B.Tech/MCA degree from a recognized institution.
• Minimum 7 years of overall experience in DevOps, Infrastructure, or Cloud Engineering.
• At least 5 years of hands-on experience with AWS and/or Azure.
• Strong proficiency in Docker for containerization.
• In-depth experience with Kubernetes for container orchestration (AKS/EKS preferred).
• Expertise in Infrastructure as Code (Terraform, CloudFormation, ARM).
• Hands-on experience with CI/CD pipelines and tools like Jenkins, GitLab CI/CD, Azure DevOps.
• Proficiency in scripting languages such as Bash, Python, or PowerShell.
• Strong knowledge of Linux systems administration and networking fundamentals.
• Solid understanding of Git and source control workflows.
• Familiarity with security standards and cloud compliance frameworks is a plus.
• Excellent analytical and troubleshooting skills.

Preferred Certifications (optional): AWS Certified DevOps Engineer / Solutions Architect; Microsoft Certified: Azure DevOps Engineer / Solutions Architect; Certified Kubernetes Administrator (CKA).

Contact: 7418252567, 8778852267, 7904349866
Job Type: Full-time
Pay: ₹25,000.00 - ₹50,000.00 per month
Work Location: In person
Speak with the employer: +91 7845416995

Posted 4 days ago

Apply

5.0 years

3 - 5 Lacs

Vadodara

On-site

Role overview
As a Senior AI Architect Engineer, you will be instrumental in designing and orchestrating advanced AI solutions that integrate seamlessly into our business processes and technology ecosystem. Your role will focus on creating scalable, intelligent architectures that drive innovation, operational efficiency, and business transformation. By aligning AI strategy with organizational goals, you will ensure our AI initiatives are robust, adaptable, and positioned to meet future challenges, enabling a competitive advantage in an evolving market.

The role
To support our strategic growth and innovation agenda, we seek a seasoned Senior AI Architect Engineer who can lead complex AI projects and foster cross-functional collaboration.

Expertise and experience
• Minimum of 5 years of experience in architecture, machine learning engineering, or data science, with a focus on delivering enterprise-grade AI solutions.
• Proven track record in designing, deploying, and operationalizing AI/ML models in production environments.
• Deep familiarity with AI/ML frameworks, and experience with cloud AI platforms like AWS SageMaker, Azure AI, or Google AI Platform.
• Strong expertise in data engineering, data modelling, and integration, with the ability to architect end-to-end AI pipelines.
• Proficiency in programming languages including Python and SQL, and experience with containerization (Docker) and orchestration (Kubernetes).
• Experience with MLOps practices including CI/CD pipelines, model monitoring, and automated retraining.
• Solid understanding of AI ethics, data privacy, and regulatory compliance.

Technical Leadership
• Lead the design and implementation of scalable AI architectures that solve complex business problems and integrate with existing systems.
• Define and enforce AI development standards, best practices, and governance frameworks across projects.
• Drive technology evaluation and selection to ensure the adoption of the most effective AI tools and platforms.
• Implement robust data security, privacy controls, and ethical AI guidelines within all AI initiatives.
• Ensure model accuracy, reliability, and maintainability through rigorous validation and continuous monitoring.
• Foster innovation by integrating emerging AI technologies and methodologies into business solutions.
• Mentor and guide cross-functional teams, promoting AI literacy and excellence.
• Architect and maintain cloud-native AI infrastructure to support scalability and agility.
• Champion automation and orchestration to streamline AI workflows and deployment processes.
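As a rough illustration of the MLOps practices this posting mentions (experiment tracking feeding CI/CD, monitoring, and retraining), here is a minimal MLflow sketch; the dataset, model, and experiment name are illustrative assumptions rather than anything from the posting.

```python
# Minimal MLflow experiment-tracking sketch: train a model, log its parameters,
# metric, and artifact so a CI/CD or retraining job can compare runs later.
# Dataset, model, and experiment name are illustrative assumptions.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("iris-baseline")          # experiment name is a placeholder
with mlflow.start_run():
    params = {"C": 1.0, "max_iter": 200}
    model = LogisticRegression(**params).fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))

    mlflow.log_params(params)
    mlflow.log_metric("accuracy", acc)
    mlflow.sklearn.log_model(model, "model")    # stored so it can be promoted or monitored
```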

Posted 4 days ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Description
Are you an innovative and accomplished professional seeking a role with significant impact and growth? Amazon is looking for a dynamic Software Development Engineer to join our Core Services team under the Worldwide Customer Purchase Journey. The Shipping and Region Authority (SARA) organization innovates on foundational products that shape the customer shopping journey, beginning from the gateway page of their visit through search and discovery experiences. SARA's products also help drive checkout and fulfillment customer experiences. Through a complex orchestration of its four domains (Shipping, Regions, Locations, Restrictions), SARA influences and frames the shopping CX. Our systems are architected for scale and consistency, offering configurable, flexible, and global solutions (standardized globally but customized for local regulations). We integrate with multiple cross-technology and functional services to identify customer locations, identify shipping options, and apply sales and shipping restrictions. In this role, you will scope complex projects and deliver simple, elegant solutions by collecting product and business requirements, driving the development schedule from design to release, making appropriate trade-offs to optimize time-to-market, and clearly communicating goals, roles, responsibilities, and desired outcomes to internal cross-functional teams. You will interact with a broad cross-section of the Amazon organization, clarify ambiguous issues, and negotiate effective technical solutions between development and business teams. You will anticipate bottlenecks and escalate issues when required to ensure on-time delivery. This role requires a seasoned individual with excellent experience as a Software Development Engineer for distributed SOA software systems and the ability to guide high-level technical design while considering potential future areas of fraud our platform might encounter.

Key job responsibilities
• Collaborate with experienced cross-disciplinary Amazonians to conceive, design, and bring innovative products and services to market.
• Design and build innovative technologies in a large distributed computing environment and help lead fundamental changes in the industry.
• Create solutions to run predictions on distributed systems with exposure to innovative technologies at incredible scale and speed.
• Build distributed storage, index, and query systems that are scalable, fault-tolerant, low cost, and easy to manage/use.
• Design and code the right solutions starting with broadly defined problems.
• Work in an agile environment to deliver high-quality software.
Basic Qualifications
• 3+ years of non-internship professional software development experience
• 2+ years of non-internship design or architecture (design patterns, reliability and scaling) of new and existing systems experience
• 3+ years of Video Games Industry (supporting title Development, Release, or Live Ops) experience
• Experience programming with at least one software programming language
• Bachelor's degree or equivalent

Preferred Qualifications
• 3+ years of full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations experience

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - ADCI HYD 13 SEZ
Job ID: A3047593

Posted 4 days ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Description
Are you an innovative and accomplished professional seeking a role with significant impact and growth? Amazon is looking for a dynamic Software Development Engineer to join our Core Services team under the Worldwide Customer Purchase Journey. The Shipping and Region Authority (SARA) organization innovates on foundational products that shape the customer shopping journey, beginning from the gateway page of their visit through search and discovery experiences. SARA's products also help drive checkout and fulfillment customer experiences. Through a complex orchestration of its four domains (Shipping, Regions, Locations, Restrictions), SARA influences and frames the shopping CX. Our systems are architected for scale and consistency, offering configurable, flexible, and global solutions (standardized globally but customized for local regulations). We integrate with multiple cross-technology and functional services to identify customer locations, identify shipping options, and apply sales and shipping restrictions. In this role, you will scope complex projects and deliver simple, elegant solutions by collecting product and business requirements, driving the development schedule from design to release, making appropriate trade-offs to optimize time-to-market, and clearly communicating goals, roles, responsibilities, and desired outcomes to internal cross-functional teams. You will interact with a broad cross-section of the Amazon organization, clarify ambiguous issues, and negotiate effective technical solutions between development and business teams. You will anticipate bottlenecks and escalate issues when required to ensure on-time delivery. This role requires a seasoned individual with excellent experience as a Software Development Engineer for distributed SOA software systems and the ability to guide high-level technical design while considering potential future areas of fraud our platform might encounter.

Key job responsibilities
• Collaborate with experienced cross-disciplinary Amazonians to conceive, design, and bring innovative products and services to market.
• Design and build innovative technologies in a large distributed computing environment and help lead fundamental changes in the industry.
• Create solutions to run predictions on distributed systems with exposure to innovative technologies at incredible scale and speed.
• Build distributed storage, index, and query systems that are scalable, fault-tolerant, low cost, and easy to manage/use.
• Design and code the right solutions starting with broadly defined problems.
• Work in an agile environment to deliver high-quality software.

Basic Qualifications
• 3+ years of non-internship professional software development experience
• 2+ years of non-internship design or architecture (design patterns, reliability and scaling) of new and existing systems experience
• 3+ years of Video Games Industry (supporting title Development, Release, or Live Ops) experience
• Experience programming with at least one software programming language
• Bachelor's degree or equivalent

Preferred Qualifications
• 3+ years of full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations experience

Our inclusive culture empowers Amazonians to deliver the best results for our customers.
If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - ADCI HYD 13 SEZ
Job ID: A3047592

Posted 4 days ago

Apply

3.0 years

0 Lacs

India

Remote

Thinkgrid Labs is at the forefront of innovation in custom software development. Our expert team of software engineers, architects, and UI/UX designers specialises in crafting bespoke web, mobile, and cloud applications, along with AI solutions and intelligent bots. Serving a diverse range of industries, we have a global client base across five continents. Our commitment to quality and passion for technological advancement drive us to push boundaries and set new standards. We're expanding our team with smart and creative individuals who are passionate about building high-performance, user-friendly, flexible, and maintainable software. We are hiring a Health Information Exchange (HIE) Software Engineer to work on projects for clients outside of India, so excellent oral and written communication skills are a must.

Job Title: Health Information Exchange (HIE) Software Engineer
Location: Remote
Working Hours: 3 PM IST to 12 AM IST
Experience Required: Minimum 3 years
Education: Bachelor's or Master's degree in Computer Science or Health Informatics

Who you are:
• HIE Standards Specialist: Deep, practical knowledge of IHE profiles and ITI transactions (PIX/PDQ, XDS.b, XCA, XCDR/XCT, XCPD, XDW) and familiarity with HL7 v2/v3, CDA, and FHIR.
• Integration Engineer: Proven experience building and securing SOAP and RESTful services, handling message transformation (Mirth Connect, Iguana, Apache Camel, or similar), and integrating with EMR/EHR systems.
• Master Patient Index (MPI) Pro: Hands-on experience implementing or integrating enterprise/clinical MPIs, probabilistic or deterministic matching algorithms, and patient de-duplication strategies.
• Cloud-Native Developer: Proficient in one or more modern stacks (Java/Spring Boot, .NET Core, Node.js/TypeScript, or Python/FastAPI) with microservices architecture, containerisation (Docker, Kubernetes), and deployments on AWS / Azure / GCP.
• Security & Compliance Aficionado: Working knowledge of HIPAA, CMS, ONC Certification criteria, TEFCA, OAuth 2.0/OIDC, and TLS/MTLS for secure data exchange.
• Quality Champion: Comfortable with IHE Gazelle, NIST XDS tools, Touchstone, or similar test harnesses to validate conformance and performance.
• Problem Solver & Team Player: Thrive in an agile, distributed, cross-functional environment; able to communicate complex technical ideas clearly to non-technical stakeholders.
• Passionate & Humble: Enthusiastic about improving healthcare data exchange and willing to learn continuously while empowering teammates.

What you will be doing:
• Design & Architecture: Define HIE solution architectures, data models, and APIs that implement IHE ITI profiles (PIX/PDQ, XDS.b, XCA, XCPD, XCDR, etc.), including security, scalability, and high availability considerations.
• Development & Integration: Build and maintain services, adapters, and orchestration workflows to ingest, store, query, and retrieve clinical documents and images across disparate systems. Implement enterprise or federated MPI services with robust patient-matching logic and reconciliation workflows.
• Standards Conformance & Validation: Configure and execute automated test suites using Gazelle EVS Client, NIST validators, Inferno, or custom Postman collections to ensure full IHE/HL7 compliance.
• Performance Optimisation & Monitoring: Profile message throughput, tweak database indexes (SQL/NoSQL), and fine-tune document repository/registry performance; set up dashboards (Prometheus/Grafana, CloudWatch, or Azure Monitor).
• DevOps & CI/CD: Automate build, test, and deployment pipelines (GitHub Actions, Azure DevOps, Jenkins, or GitLab CI) and manage infrastructure as code (Terraform, CloudFormation).
• Security & Compliance: Enforce role-based access controls, audit logging, encryption in transit/at rest, and risk mitigation strategies aligned with HIPAA and ISO 27001 standards.
• Documentation & Knowledge Sharing: Produce technical design docs, sequence diagrams, data-flow diagrams, and API specs; guide junior engineers and collaborate closely with QA, analysts, and customer teams.
• Continuous Improvement: Stay current with evolving IHE profiles (e.g., Mobile Health Document Sharing), FHIR-based exchange initiatives, and industry best practices; recommend enhancements to keep our HIE offerings cutting-edge.

Benefits
• 5-day work week (unless for rare emergencies)
• 100% remote setup with flexible work culture and international exposure
• Opportunity to work on mission-critical healthcare projects impacting providers and patients globally
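To make the FHIR-based exchange work above a little more concrete, here is a minimal sketch of a FHIR R4 Patient search over REST, assuming the requests library and the public HAPI FHIR test server; a real HIE endpoint would additionally require OAuth 2.0 tokens and mutual TLS.

```python
# Minimal FHIR REST sketch: search a FHIR R4 server for Patient resources by
# family name. The base URL is the public HAPI FHIR test server and is purely
# illustrative; production HIE endpoints add OAuth 2.0 and TLS client config.
import requests

FHIR_BASE = "https://hapi.fhir.org/baseR4"  # illustrative test server

resp = requests.get(
    f"{FHIR_BASE}/Patient",
    params={"family": "Smith", "_count": 5},
    headers={"Accept": "application/fhir+json"},
    timeout=30,
)
resp.raise_for_status()
bundle = resp.json()  # a FHIR Bundle resource

for entry in bundle.get("entry", []):
    patient = entry["resource"]
    name = patient.get("name", [{}])[0]
    print(patient.get("id"), name.get("family"), name.get("given"))
```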

Posted 4 days ago

Apply

2.0 - 5.0 years

4 Lacs

Ahmedabad

On-site

MERN Stack Developer | Work From Office | Ahmedabad

reverseBits is seeking a talented MERN developer to join our team. We are looking for someone with 2-5 years of experience in MERN stack development. You will develop and maintain high-quality web applications built on JavaScript-based frameworks and relational & NoSQL databases for high-scale products/systems.

Responsibilities:
• Collaborate with cross-functional teams to understand business requirements and translate them into web application features.
• Develop and maintain production systems and databases, and collaborate with the DevOps team for cloud operations.
• Write clean, efficient, and reusable code following coding standards and best practices.
• Conduct thorough testing and debugging to ensure system functionality and performance.
• Stay updated with the latest trends and technologies in JavaScript development to enhance your technical skills.
• Troubleshoot and resolve issues reported by users and stakeholders.
• Participate in code reviews to maintain code quality and improve team productivity.
• Troubleshoot and resolve issues in production environments, ensuring high availability and minimal downtime.

Skills and Qualifications:
• At least 2 years of professional hands-on experience in NextJS, NestJS and React JS.
• Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
• Strong proficiency in JavaScript backend frameworks (Node JS and Express JS).
• Experience designing and developing RESTful APIs and microservices architecture.
• Solid understanding of database systems (MongoDB, MySQL) and data modelling concepts.
• Basic familiarity with cloud platforms such as AWS.
• Hands-on experience in MongoDB is a must.
• Proficiency in version control systems (Git) and collaborative development workflows.
• Excellent problem-solving skills and a proactive attitude toward addressing challenges.
• Strong communication skills and ability to work effectively in a collaborative team environment.
• Prior experience working in an Agile/Scrum development environment.

Bonus points if you have:
• Experience with containerization and orchestration tools (Docker, Kubernetes).

Apply here - https://tally.so/r/nGlzvL - to be considered for the role.

Job Types: Full-time, Permanent
Pay: From ₹40,000.00 per month
Schedule: Day shift
Ability to commute/relocate: Ahmedabad, Gujarat: Reliably commute or planning to relocate before starting work (Required)
Education: Bachelor's (Required)
Experience: MERN Stack Developer: 2 years (Required); TypeScript: 1 year (Required)
Location: Ahmedabad, Gujarat (Required)
Work Location: In person

Posted 4 days ago

Apply

0 years

0 Lacs

India

Remote

About Company
Our client is a trusted global innovator of IT and business services. We help clients transform through consulting, industry solutions, business process services, digital & IT modernization and managed services. Our client enables them, as well as society, to move confidently into the digital future. We are committed to our clients' long-term success and combine global reach with local client attention to serve them in over 50 countries around the globe.

Job Title: Python Developer with Azure & AKS
Location: Noida / Remote
Experience: 7+ yrs
Job Type: Contract to hire
Notice Period: Immediate joiner

Mandatory Skills
• Hands-on experience as a Python Developer with Azure & AKS.
• Hands-on experience with Azure Kubernetes Service (AKS): deploying, managing, and troubleshooting applications on AKS.
• Strong knowledge of containerisation using Docker and orchestration using Kubernetes with Python.
• Familiarity with Azure services like Azure Blob Storage, Azure Functions, Azure Service Bus, Azure Key Vault, etc.
• Experience in implementing CI/CD pipelines using Azure DevOps, GitHub Actions, or similar tools.
• Knowledge of infrastructure as code (IaC) tools like Terraform, Bicep, or ARM templates.
• Familiarity with monitoring and logging tools in Azure, e.g., Application Insights, Log Analytics, and Azure Monitor.
• Understanding of cloud security, networking, and resource management best practices in a production Azure environment.
• Experience working in DevOps-enabled teams following Agile and iterative development.

Responsibilities
• Write clean, high-quality, high-performance, maintainable code.
• Develop and support software including applications, database integration, interfaces, and new functionality enhancements.
• Coordinate cross-functionally to ensure the project meets business objectives and compliance standards.
• Support testing and deployment of new products and features.
• Participate in code reviews.

Qualifications
Bachelor's degree in Computer Science (or related field)
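As a small illustration of the Python-plus-Azure skill set above, here is a hedged sketch of uploading a file to Azure Blob Storage with the azure-storage-blob SDK; the connection-string environment variable and container name are assumptions, not details from the posting.

```python
# Minimal Azure Blob Storage sketch: upload a local file to a container using
# the azure-storage-blob SDK. The connection-string environment variable and
# container name are illustrative assumptions.
import os
from azure.storage.blob import BlobServiceClient

conn_str = os.environ["AZURE_STORAGE_CONNECTION_STRING"]  # assumed to be set
service = BlobServiceClient.from_connection_string(conn_str)
container = service.get_container_client("app-artifacts")  # placeholder container

with open("report.csv", "rb") as data:
    container.upload_blob(name="reports/report.csv", data=data, overwrite=True)

# List what landed under the prefix to confirm the upload.
print("uploaded:", [b.name for b in container.list_blobs(name_starts_with="reports/")])
```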

Posted 4 days ago

Apply

2.0 years

3 - 8 Lacs

Calcutta

On-site

Job description
Capgemini's Connected Marketing Operations practice offers and delivers Marketing Operations services to its top Fortune 500 clients. Our portfolio of services is focused on delivering the latest and best in Content Operations, Campaign Services and Performance Marketing solutions to drive marketing and sales outcomes for the clients. We are looking for a results-oriented senior leader to lead the global delivery & client relationship management for multiple projects. If you are driven by the hyper-growth challenge and love to wow clients with your innovative solutions, then this is just the right leadership role for you!

Primary Skills
The role responsibilities include:
• Responsible for delivery excellence of all programs and accounts rolling up to the practice, through strong governance and review mechanisms.
• Continual innovation aimed at creating future-proof solutions for the marketing functions, with a focus on industrialization, delivery process standardization and reuse across the marketing operations & digital marketing scope.
• Develop use cases in generative AI and other technologies prevalent for marketing process optimization.
• Accurately forecast revenue, head count, profitability, margins, bill rates and utilization. Ensure attention to demand prediction and fulfilment across the MU.
• Represent Capgemini in client steering committee meetings. Build strong executive connects to enable management of client expectations and foster lasting client relationships.
• Continually seek opportunities to increase customer satisfaction and deepen client relationships. Work closely to ensure that the operational parameters are green.
• Work closely and collaborate with Practice / Global Account Managers / AE / BDE to grow the business across various industry verticals and the market units, and ensure the delivery function runs efficiently.
• Identify business development and "add-on" sales opportunities in existing programs.
• While the primary function will be development and delivery of programs within the MU, the incumbent will also have the responsibility to look ahead into the next 2-3 years and ensure that a strategic road map is in place for the future. This will be done in conjunction with AE/BDEs/Sales Leaders.

Secondary Skills
Our ideal candidate will have:
• 18+ years' experience with a large marketing shared services or marketing service provider, with a strong project track record.
• Minimum 18 years' experience in delivery management comprising engagements for global clients in Marketing Operations areas – Artwork Management, Media and Creative, Advertising Operations, Marketing Asset Management, Product Data Orchestration, Innovation Project Management.
• Experience in managing big P&Ls for operations/delivery for international clients.
• Demonstrated ability to influence without formal authority within cross-functional teams on adopting new ways of working.
• Previous experience successfully leading large delivery teams (400+) of marketing specialists with a strong focus on talent management.
• Good understanding of the latest tech and platforms in marketing domains, including GenAI.
• Previous experience leading delivery in a recognized agency will be an added advantage.
• Exceptional communication skills; experience with international clients is mandatory.
• Working experience with cross-cultural teams spread across India, Latin America and European centers is required.

Posted 4 days ago

Apply

5.0 years

0 Lacs

West Bengal

On-site

Job Information
Date Opened: 30/07/2025
Job Type: Full time
Industry: IT Services
Work Experience: 5+ Years
City: Kolkata
Province: West Bengal
Country: India
Postal Code: 700091

About Us
We are a fast-growing technology company specializing in current and emerging internet, cloud and mobile technologies.

Job Description
CodelogicX is a forward-thinking tech company dedicated to pushing the boundaries of innovation and delivering cutting-edge solutions. We are seeking a Senior DevOps Engineer with at least 5 years of hands-on experience in building, managing, and optimizing scalable infrastructure and CI/CD pipelines. The ideal candidate will play a crucial role in automating deployment workflows, securing cloud environments and managing container orchestration platforms. You will leverage your expertise in AWS, Kubernetes, ArgoCD, and CI/CD to streamline our development processes, ensure the reliability and scalability of our systems, and drive the adoption of best practices across the team.

Key Responsibilities:
• Design, implement, and maintain CI/CD pipelines using GitHub Actions and Bitbucket Pipelines.
• Develop and manage Infrastructure as Code (IaC) using Terraform for AWS-based infrastructure.
• Set up and administer SFTP servers on cloud-based VMs using chroot configurations and automate file transfers to S3-backed Glacier.
• Manage SNS for alerting and notification integration.
• Ensure cost optimization of AWS services through billing reviews and usage audits.
• Implement and maintain secure secrets management using AWS KMS, Parameter Store, and Secrets Manager.
• Configure, deploy, and maintain a wide range of AWS services, including but not limited to:
o Compute Services: provision and manage compute resources using EC2, EKS, AWS Lambda, and EventBridge for compute-driven, serverless and event-driven architectures.
o Storage & Content Delivery: manage data storage and archival solutions using S3 and Glacier, and content delivery through CloudFront.
o Networking & Connectivity: design and manage secure network architectures with VPCs, Load Balancers, Security Groups, VPNs, and Route 53 for DNS routing and failover; ensure proper functioning of network services like TCP/IP and reverse proxies (e.g., NGINX).
o Monitoring & Observability: implement monitoring, logging, and tracing solutions using CloudWatch, Prometheus, Grafana, ArgoCD, and OpenTelemetry to ensure system health and performance visibility.
o Database Services: deploy and manage relational databases via RDS for MySQL, PostgreSQL, Aurora, and healthcare-specific FHIR database configurations.
o Security & Compliance: enforce security best practices using IAM (roles, policies), AWS WAF, Amazon Inspector, GuardDuty, Security Hub, and Trusted Advisor to monitor, detect, and mitigate risks.
o GitOps: apply excellent knowledge of GitOps practices, ensuring all infrastructure and application configuration changes are tracked and versioned through Git commits.
• Architect and manage Kubernetes environments (EKS), implementing Helm charts, ingress controllers, autoscaling (HPA/VPA), and service meshes (Istio); troubleshoot advanced issues related to pods, services, DNS, and kubelets.
• Apply best practices in Git workflows (trunk-based, feature branching) in both monorepo and multi-repo environments.
• Maintain, troubleshoot, and optimize Linux-based systems (Ubuntu, CentOS, Amazon Linux).
• Support the engineering and compliance teams by addressing requirements for HIPAA, GDPR, ISO 27001, SOC 2, and ensuring infrastructure readiness.
• Perform rollback and hotfix procedures with minimal downtime.
• Collaborate with developers to define release and deployment processes.
• Manage and standardize build environments across dev, staging, and production.
• Manage release and deployment processes across dev, staging, and production.
• Work cross-functionally with development and QA teams.
• Lead incident postmortems and drive continuous improvement.
• Perform root cause analysis and implement corrective/preventive actions for system incidents.
• Set up automated backups/snapshots, disaster recovery plans, and incident response strategies.
• Ensure on-time patching.
• Mentor junior DevOps engineers.

Requirements
Required Qualifications:
• Bachelor's degree in Computer Science, Engineering, or equivalent practical experience.
• 5+ years of proven DevOps engineering experience in cloud-based environments.
• Advanced knowledge of AWS, Terraform, CI/CD tools, and Kubernetes (EKS).
• Strong scripting and automation mindset.
• Solid experience with Linux system administration and networking.
• Excellent communication and documentation skills.
• Ability to collaborate across teams and lead DevOps initiatives independently.

Preferred Qualifications:
• Experience with infrastructure as code tools such as Terraform or CloudFormation.
• Experience with GitHub Actions is a plus.
• Certifications in AWS (e.g., AWS DevOps Engineer, AWS SysOps Administrator) or Kubernetes (CKA/CKAD).
• Experience working in regulated environments (e.g., healthcare or fintech).
• Exposure to container security tools and cloud compliance scanners.

Experience: 5-10 Years
Working Mode: Hybrid
Job Type: Full-Time
Location: Kolkata

Benefits
• Health insurance
• Hybrid working mode
• Provident Fund
• Parental leave
• Yearly Bonus
• Gratuity
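For the responsibility of automating file transfers to S3-backed Glacier mentioned above, here is a minimal boto3 sketch; the bucket, key, and local path are placeholders, and credentials are assumed to come from the default AWS credential chain.

```python
# Minimal boto3 sketch: upload a file directly into an archival storage class,
# then confirm the storage class on the stored object. Bucket name, key, and
# local path are illustrative placeholders.
import boto3

s3 = boto3.client("s3")

s3.upload_file(
    Filename="/var/sftp/incoming/batch-2025-07-30.tar.gz",  # placeholder path
    Bucket="example-sftp-archive",                          # placeholder bucket
    Key="archives/batch-2025-07-30.tar.gz",
    ExtraArgs={"StorageClass": "GLACIER"},                  # archive on write
)

head = s3.head_object(
    Bucket="example-sftp-archive",
    Key="archives/batch-2025-07-30.tar.gz",
)
print(head.get("StorageClass"))
```

In practice this kind of transfer is often triggered from an SFTP post-upload hook or a scheduled job, with an S3 lifecycle rule as an alternative to setting the storage class at upload time.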

Posted 4 days ago

Apply

0 years

3 Lacs

Calcutta

Remote

What You'll Do
• Build AI/ML technology stacks from concept to production, including data pipelines, model training, and deployment.
• Develop and optimize Generative AI workflows, including prompt engineering, fine-tuning (LoRA, QLoRA), retrieval-augmented generation (RAG), and LLM-based applications.
• Work with Large Language Models (LLMs) such as Llama, Mistral, and GPT, ensuring efficient adaptation for various use cases.
• Design and implement AI-driven automation using agentic AI systems and orchestration frameworks like Autogen, LangGraph, and CrewAI.
• Leverage cloud AI infrastructure (AWS, Azure, GCP) for scalable deployment and performance tuning.
• Collaborate with cross-functional teams to deliver AI-driven solutions.

Job Types: Part-time, Contractual / Temporary
Contract length: 2 months
Pay: From ₹25,000.00 per month
Expected hours: 40 per week
Schedule: Day shift
Work Location: Remote
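As a brief illustration of the LoRA fine-tuning mentioned above, here is a minimal peft configuration sketch; the base model and hyperparameters are illustrative assumptions, and a full run would add a dataset, a training loop, and quantization for QLoRA.

```python
# Minimal LoRA setup sketch: wrap a causal LM with low-rank adapters using the
# peft library so only a small set of adapter weights is trained. Model name
# and hyperparameters are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model_name = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # small model for illustration
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
model = AutoModelForCausalLM.from_pretrained(base_model_name)

lora_config = LoraConfig(
    r=8,                                   # rank of the low-rank update matrices
    lora_alpha=16,                         # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections, a common choice
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()         # only the adapter weights are trainable
```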

Posted 4 days ago

Apply

1.0 years

3 - 5 Lacs

Jaipur

On-site

Job Location: Jaipur / Prayagraj
Position: DevOps Engineer
Qualification: BE / BTech / MCA / BCA
Job Type: Full-Time (On-Site)
Work Experience: 1 to 5 years as DevOps Engineer
Expected Start Date: 16th August 2025
Compensation: Best in the Industry
Other Benefits: Yearly Bonus + Health Insurance + Provident Fund

Skillsets Required:
• CI/CD tools: Jenkins
• Containerization: Docker
• Container orchestration: Kubernetes and OpenShift
• Cloud platforms: Private Cloud
• Operating System: Linux
• Language: Python
• DevSecOps practices
• Strong problem-solving and automation skills

Job Type: Full-time
Pay: ₹300,000.00 - ₹550,000.00 per year
Benefits: Health insurance; Provident Fund
Schedule: Day shift; Morning shift
Supplemental Pay: Yearly bonus
Ability to commute/relocate: Jaipur, Rajasthan: Reliably commute or planning to relocate before starting work (Preferred)
Willingness to travel: 25% (Preferred)
Work Location: In person
Application Deadline: 07/08/2025
Expected Start Date: 15/08/2025

Posted 4 days ago

Apply

4.0 - 8.0 years

10 - 12 Lacs

Jaipur

On-site

Job Summary
As a Go Lang Developer, you will be responsible for designing, developing, and maintaining efficient, reusable, and reliable Go code. You will work closely with cross-functional teams to create scalable back-end solutions, including APIs and microservices, ensuring that they are robust and secure.

Experience: 4 to 8 years, or 2-6 years if B.E. / B.Tech from premier institutes (e.g., IITs / NITs). Having worked with a start-up or product-based company is preferred.

Experience Required
Must have:
• Minimum 2 years of working experience in Go Lang development.
• Proven experience in developing RESTful APIs and microservices.
• Experience with concurrency and writing highly scalable, high-performance applications.
• Proficiency in database design and working with both SQL and NoSQL databases.
Desired to have:
• Experience with containerization (Docker) and orchestration (Kubernetes).
• Familiarity with cloud platforms like AWS, GCP, or Azure.
• Familiarity with CI/CD pipelines and DevOps practices.

Specific Skills
Must have:
• Strong proficiency in Go Lang and a good understanding of its paradigms.
• Familiarity with version control tools like Git.
• Strong understanding of software development principles, including SOLID principles and design patterns.
• Good understanding of network protocols (HTTP, TCP/IP, WebSockets).
Desired to have:
• Knowledge of front-end technologies such as JavaScript, HTML, React JS and CSS.
• Experience with testing frameworks and writing unit/integration tests.
• Strong analytical and problem-solving skills.
• Excellent teamwork skills.

Job Description
• Develop and maintain server-side applications using Go Lang.
• Design and implement scalable, secure, and maintainable RESTful APIs and microservices.
• Collaborate with front-end developers to integrate user-facing elements with server-side logic.
• Optimize applications for performance, reliability, and scalability.
• Write clean, efficient, and reusable code that adheres to best practices.
• Troubleshoot and debug applications, addressing issues proactively.
• Participate in code reviews to maintain code quality and share knowledge within the team.
• Work closely with DevOps teams to ensure smooth deployment and continuous integration of services.
• Maintain comprehensive documentation for all services and code written.
• Stay up-to-date with industry trends and best practices, continuously enhancing skills and knowledge.

This role requires a proactive individual who is passionate about technology and has a strong foundation in Go Lang development, along with the ability to work collaboratively in a dynamic and fast-paced environment.

Perks: Lucrative incentives
Apply Now: bit.ly/KDKHR

Job Types: Full-time, Permanent
Pay: ₹1,000,000.00 - ₹1,200,000.00 per year
Benefits: Paid sick time; Paid time off; Provident Fund
Ability to commute/relocate: Jaipur, Rajasthan: Reliably commute or willing to relocate with an employer-provided relocation package (Preferred)
Experience: Golang development: 2 years (Preferred)
Location: Jaipur, Rajasthan (Preferred)
Work Location: In person

Posted 4 days ago

Apply

8.0 years

0 Lacs

Bengaluru East, Karnataka, India

On-site

Req ID: 322811

NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Full Stack AI Developer to join our team in Bangalore, Karnataka (IN-KA), India.

• 3-8 years of strong hands-on experience in software development, with a focus on AI/ML and Generative AI
• Hands-on with Generative AI technologies, with at least one of the following experiences:
o Working with Large Language Models (LLMs) such as GPT, LLaMA, Claude, etc.
o Building intelligent systems using LangGraph, Agentic AI frameworks, or similar orchestration tools
o Implementing Retrieval-Augmented Generation (RAG), prompt engineering, and knowledge augmentation techniques
• Proficiency in Python, including experience with data processing, API integration, and automation scripting
• Demonstrated experience in end-to-end SDLC (Software Development Life Cycle): requirement gathering, design, development, testing, deployment, and support
• Proficient in CI/CD pipelines and version control systems like Git
• Experience with containerization technologies such as Docker, and orchestration using Kubernetes
• Strong problem-solving and debugging skills, with an ability to write clean, efficient, and maintainable code
• Excellent verbal and written communication skills, with the ability to collaborate effectively across technical and business teams

About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com

NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here. If you'd like more information on your EEO rights under the law, please click here. For Pay Transparency information, please click here.

Posted 4 days ago

Apply

2.0 years

0 Lacs

Andhra Pradesh

On-site

We are seeking an experienced and innovative Generative AI Developer to join our AWAC team. In this role, you will lead the design and development of GenAI and Agentic AI applications using state-of-the-art LLMs and AWS-native services. You will work on both R&D-focused proofs of concept and production-grade implementations, collaborating with cross-functional teams to bring intelligent, scalable solutions to life.

Key Responsibilities
• Design, develop, and deploy Generative AI and Agentic AI applications using LLMs such as Claude, Cohere, Titan, and others.
• Lead the development of proof-of-concept (PoC) solutions to explore new use cases and validate AI-driven innovations.
• Architect and implement retrieval-augmented generation (RAG) pipelines using LangChain and vector databases like OpenSearch.
• Integrate with AWS services including the Bedrock API, SageMaker, SageMaker JumpStart, Lambda, EKS/ECS, Amazon Connect, and Amazon Q.
• Apply few-shot, one-shot, and zero-shot learning techniques to fine-tune and prompt LLMs effectively.
• Collaborate with data scientists, ML engineers, and business stakeholders to translate complex requirements into scalable AI solutions.
• Implement CI/CD pipelines and infrastructure as code using Terraform, and follow DevOps best practices.
• Optimize performance, cost, and reliability of AI applications in production environments.
• Document architecture, workflows, and best practices to support knowledge sharing and onboarding.

Required Skills & Technologies
• Experience in Python development, with at least 2 years in AI/ML or GenAI projects.
• Strong hands-on experience with LLMs and Generative AI frameworks.
• Proficiency in LangChain, vector DBs (e.g., OpenSearch), and prompt engineering.
• Deep understanding of the AWS AI/ML ecosystem: Bedrock, SageMaker, Lambda, EKS/ECS.
• Experience with serverless architectures, containerization, and cloud-native development.
• Familiarity with DevOps tools: Git, CI/CD, Terraform.
• Strong debugging, performance tuning, and problem-solving skills.

Preferred Qualifications
• Experience with Amazon Q, Amazon Connect, or Amazon Titan.
• Familiarity with Claude, Cohere, or other foundation models.
• Bachelor's or Master's degree in Computer Science, AI/ML, or a related field.
• Experience in building agentic workflows and multi-agent orchestration is a plus.

About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
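As a rough sketch of calling an LLM on Amazon Bedrock like the posting above describes, the snippet below sends one prompt via the bedrock-runtime invoke_model API; the region, model ID, and request schema follow AWS's published Anthropic examples but should be treated as illustrative and verified against current Bedrock documentation.

```python
# Minimal Amazon Bedrock sketch: send a single prompt to an Anthropic Claude
# model through the bedrock-runtime API. Region, model ID, and request schema
# are illustrative assumptions; check the current Bedrock docs before use.
import json
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")  # region is an assumption

body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [
        {"role": "user", "content": [{"type": "text", "text": "Summarize what a RAG pipeline does."}]}
    ],
}

response = client.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # illustrative model ID
    body=json.dumps(body),
    contentType="application/json",
    accept="application/json",
)

result = json.loads(response["body"].read())
print(result["content"][0]["text"])
```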

Posted 4 days ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Introduction
IBM Infrastructure is a catalyst that makes the world work better because our clients demand it. Heterogeneous environments, the explosion of data, digital automation, and cybersecurity threats require hybrid cloud infrastructure that only IBM can provide. Your ability to be creative, a forward-thinker and to focus on innovation that matters is all supported by our growth-minded culture as we continue to drive career development across our teams. Collaboration is key to IBM Infrastructure's success, as we bring together different business units and teams that balance their priorities in a way that best serves our clients' needs. IBM's product and technology landscape includes Research, Software, and Infrastructure. Entering this domain positions you at the heart of IBM, where growth and innovation thrive.

IBM Cloud Core Platform Services is a growing, agile, dynamic organization building and operating leading-edge, highly available, and distributed cloud services in IBM Cloud. We're looking for experienced cloud software engineers to join us. This technical role is focused on designing, developing and deploying cloud services, automating wide ranges of tasks, problem-solving, and interfacing with other teams to solve complex problems. You will be part of a strong, agile, and modern team culture driven to create world-class cloud services, delivering an industry-leading user experience for our customers. As an integral part of the development team, you will get an opportunity to contribute to the cloud services architecture and design while helping us mentor the next generation of cloud engineers.

Your Role and Responsibilities
• Becoming an expert and major contributor for designs and implementation efforts of the IBM Cloud Platform Services ecosystem
• Developing highly available, distributed cloud services, with emphasis on security, scalability and user experience, using technologies like Golang, Java, Node.js, Cloudant, Redis, Docker, Kubernetes, Istio and more
• Reading open specifications and RFC documents and converting them to design docs and implementation
• Identifying opportunities and acting on improving existing tools, frameworks and workflows
• Documenting and sharing your experience with team members, mentoring others

Preferred Education
Bachelor's Degree

Required Technical and Professional Expertise
• A minimum of a bachelor's degree in Computer Science, Software Engineering or equivalent
• At least 3 years of hands-on development experience building applications with one or more of the following: Java, Node.js, Golang, NoSQL DB, Redis, distributed caches, containers, etc.
• At least 3 years of experience building and operating highly secured, distributed cloud services with one or more of the following: IBM Cloud, AWS, Azure, Docker, container orchestration, performance testing, DevOps, etc.
• At least 1 year of experience in web technologies: HTTP, REST, JSON, HTML, JavaScript, etc.
• Solid understanding of micro-service architecture and modern cloud programming practices.
• Strong ability to design a clean, developer-friendly API.
• Passionate about constant, continuous learning and applying new technologies as well as mentoring others.
• Keen troubleshooting skills and strong verbal/written communication skills.
Preferred Technical and Professional Experience
• Bachelor's degree in Computer Science, Software Engineering or equivalent
• Knowledge of the IBM Cloud platform or another as-a-service platform and its architecture
• Experience as a technical lead managing a team of engineers in driving development of highly scalable distributed systems
• Proficiency with one or more project management tools – Jira, Git, Aha, etc.

Posted 4 days ago

Apply

1.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Introduction
Ready to grow your career in the Cloud? Want to feel that you are making a difference? This is your chance to become an integral part of a dynamic team of talented professionals developing and deploying innovative, industry-leading, cloud-based platform services. IBM Cloud Platform Core Services is a growing, agile, dynamic organization building and operating leading-edge, highly available, and distributed cloud services in IBM Cloud. We're looking for experienced cloud software engineers to join us. This technical role is focused on designing, developing and deploying cloud services, automating wide ranges of tasks, problem-solving, and interfacing with other teams to solve complex problems. You will be part of a strong, agile, and modern team culture driven to create world-class cloud services, delivering an industry-leading user experience for our customers. As an integral part of the development team, you will get an opportunity to contribute to the cloud services architecture and design while helping us mentor the next generation of cloud engineers.

Your Role and Responsibilities
• Becoming an expert and major contributor for designs and implementation efforts of the IBM Cloud Platform Services ecosystem
• Developing highly available, distributed cloud services, with emphasis on security, scalability and user experience, using technologies like Java, Node.js, Golang, Cloudant, Redis, Docker, Kubernetes, Istio and more
• Identifying opportunities and acting on improving existing tools, frameworks and workflows
• Documenting and sharing your experience with team members, mentoring others

Preferred Education
Bachelor's Degree

Required Technical and Professional Expertise
• A minimum of a bachelor's degree in Computer Science, Software Engineering or equivalent
• 1+ years of hands-on development experience building applications with one or more of the following: Java, Node.js, Golang, NoSQL DB, Redis, distributed caches, containers, etc.
• 1+ years of experience in web technologies: HTTP, REST, JSON, HTML, JavaScript, etc.
• Solid understanding of micro-service architecture and modern cloud programming practices.
• Strong ability to design a clean, developer-friendly API.
• Passionate about constant, continuous learning and applying new technologies as well as mentoring others.
• Keen troubleshooting skills and strong verbal/written communication skills.

Preferred Technical and Professional Experience
• Bachelor's degree in Computer Science, Software Engineering or equivalent
• Understanding of cybersecurity and cryptography principles, certifications, and compliance.
• Experience remotely supporting customer engagements to help drive adoption.
• Experience building and operating highly secured, distributed cloud services with one or more of the following: IBM Cloud, AWS, Azure, Docker, container orchestration, performance testing, DevOps, etc.

Posted 4 days ago

Apply

3.0 years

0 Lacs

Kanayannur, Kerala, India

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY GDS – Data and Analytics (D&A) – SSIS – Senior
We're looking for Informatica or SSIS Engineers with a cloud background (AWS, Azure).

Primary skills:
• Has played key roles in multiple large global transformation programs on business process management
• Experience in database querying using SQL
• Experience working on building/integrating data into a data warehouse
• Experience in data profiling and reconciliation
• Informatica PowerCenter / IBM DataStage / SSIS development
• Strong proficiency in SQL/PLSQL
• Good experience in performance tuning ETL workflows and suggesting improvements
• Developed expertise in complex data management or application integration solutions and deployment in areas of data migration, data integration, application integration or data quality
• Experience in data processing, orchestration, parallelization, transformations and ETL fundamentals
• Leverages a variety of programming languages and data crawling/processing tools to ensure data reliability, quality and efficiency (optional)
• Experience with cloud data-related tools (Microsoft Azure, Amazon S3 or data lakes)
• Knowledge of cloud infrastructure; knowledge of Talend Cloud is an added advantage
• Knowledge of data modelling principles
• Knowledge of Autosys scheduling
• Good experience in database technologies
• Good knowledge of Unix systems

Responsibilities:
• Work as a team member and contribute to various technical streams of data integration projects
• Provide product- and design-level technical best practices
• Interface and communicate with the onsite coordinators
• Complete assigned tasks on time and report status regularly to the lead
• Build a quality culture
• Use an issue-based approach to deliver growth, market and portfolio strategy engagements for corporates
• Strong communication, presentation and team-building skills and experience in producing high-quality reports, papers, and presentations
• Experience in executing and managing research and analysis of companies and markets, preferably from a commercial due diligence standpoint

Qualification:
• BE/BTech/MCA (must) with industry experience of 3-7 years
• Experience in Talend jobs, joblets and custom components
• Should have knowledge of error handling and performance tuning in Talend
• Experience in big data technologies such as Sqoop, Impala, Hive, YARN, Spark, etc.
• Informatica PowerCenter / IBM DataStage / SSIS development
• Strong proficiency in SQL/PLSQL
• Good experience in performance tuning ETL workflows and suggesting improvements
• Experience with at least 3-4 clients on short-duration projects of 6-8+ months, or with at least 2 clients on projects of 1-2 years or longer
• People with commercial acumen, technical experience and enthusiasm to learn new things in this fast-moving environment

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate.
Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
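As a simple, hedged illustration of the extract-transform-load pattern this role centres on, here is a small Python sketch using pandas and SQLAlchemy; connection strings and table names are placeholders, and production pipelines in this role would normally be built in SSIS or Informatica rather than pandas.

```python
# Minimal ETL sketch: read from a source table, apply a rollup transformation,
# and load the result into a warehouse table. Connection strings, table names,
# and columns are illustrative placeholders.
import pandas as pd
from sqlalchemy import create_engine

source = create_engine("postgresql+psycopg2://user:pass@source-db/sales")   # placeholder
warehouse = create_engine("postgresql+psycopg2://user:pass@warehouse/dw")   # placeholder

# Extract
orders = pd.read_sql("SELECT order_id, customer_id, amount, order_date FROM orders", source)

# Transform: daily revenue per customer, a reconciliation-friendly rollup
daily = (
    orders.assign(order_date=pd.to_datetime(orders["order_date"]).dt.date)
          .groupby(["customer_id", "order_date"], as_index=False)["amount"].sum()
          .rename(columns={"amount": "daily_revenue"})
)

# Load
daily.to_sql("fct_daily_revenue", warehouse, if_exists="replace", index=False)
```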

Posted 4 days ago

Apply

6.0 years

0 Lacs

Kanayannur, Kerala, India

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Data Integration Specialist – Senior

The opportunity
We are seeking a talented and experienced Integration Specialist with 3-6 years of experience to join our growing Digital Integration team. The ideal candidate will play a pivotal role in designing, building, and deploying scalable and secure solutions that support business transformation, system integration, and automation initiatives across the enterprise.

Your Key Responsibilities
• Work with clients to assess existing integration landscapes and recommend modernization strategies using MuleSoft.
• Translate business requirements into technical designs, reusable APIs, and integration patterns.
• Develop, deploy, and manage MuleSoft APIs and integrations on Anypoint Platform (CloudHub, Runtime Fabric, Hybrid).
• Collaborate with business and IT stakeholders to define integration standards, SLAs, and governance models.
• Implement error handling, logging, monitoring, and alerting using Anypoint Monitoring and third-party tools.
• Maintain integration artifacts and documentation, including RAML specifications, flow diagrams, and interface contracts.
• Ensure performance tuning, scalability, and security best practices are followed across integration solutions.
• Support CI/CD pipelines, version control, and DevOps processes for MuleSoft assets using platforms like Azure DevOps or GitLab.
• Collaborate with cross-functional teams (Salesforce, SAP, Data, Cloud, etc.) to deliver end-to-end connected solutions.
• Stay current with MuleSoft platform capabilities and industry integration trends to recommend improvements and innovations.
• Troubleshoot integration issues and perform root cause analysis in production and non-production environments.
• Contribute to internal knowledge-sharing, technical mentoring, and process optimization.
• Strong SQL, data integration and data handling skills.
• Exposure to AI models and Python, and to using them in data cleaning/standardization.

To qualify for the role, you must have
• 3-6 years of hands-on experience with MuleSoft Anypoint Platform and Anypoint Studio.
• Strong experience with API-led connectivity and reusable API design (System, Process, Experience layers).
• Proficiency in DataWeave transformations, flow orchestration, and integration best practices.
• Experience with API lifecycle management including design, development, publishing, governance, and monitoring.
• Solid understanding of integration patterns (synchronous, asynchronous, event-driven, batch).
• Hands-on experience with security policies, OAuth, JWT, client ID enforcement, and TLS.
• Experience working with cloud platforms (Azure, AWS, or GCP) in the context of integration projects.
• Knowledge of performance tuning, capacity planning, and error handling in MuleSoft integrations.
• Experience in DevOps practices including CI/CD pipelines, Git branching strategies, and automated deployments.
• Experience with data intelligence cloud platforms like Snowflake, Azure, and Databricks.

Ideally, you'll also have
• MuleSoft Certified Developer or Integration Architect certification.
• Exposure to monitoring and logging tools (e.g., Splunk, Elastic, Anypoint Monitoring).
Strong communication and interpersonal skills to work with technical and non-technical stakeholders. Ability to document integration requirements, user stories, and API contracts clearly and concisely. Experience in agile environments and comfort working across multiple concurrent projects. Ability to mentor junior developers and contribute to reusable component libraries and coding standards. What Working At EY Offers At EY, we’re dedicated to helping our clients, from start–ups to Fortune 500 companies — and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer: Support, coaching and feedback from some of the most engaging colleagues around Opportunities to develop new skills and progress your career. The freedom and flexibility to handle your role in a way that’s right for you. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
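
For illustration only, a minimal Python sketch of the kind of client-ID-enforced API call mentioned in this listing. It assumes the requests library, a hypothetical CloudHub URL, and the common client_id/client_secret header names used by Anypoint's client ID enforcement policy; confirm the exact header names against the policy actually applied.

import requests

# Hypothetical MuleSoft-hosted Experience API endpoint (illustrative only).
API_URL = "https://example-org.cloudhub.io/api/v1/customers"

headers = {
    "client_id": "YOUR_CLIENT_ID",          # assumption: policy reads a client_id header
    "client_secret": "YOUR_CLIENT_SECRET",  # assumption: policy reads a client_secret header
    "Accept": "application/json",
}

# TLS is handled by requests via HTTPS; timeouts keep failed calls from hanging.
response = requests.get(API_URL, headers=headers, timeout=30)
response.raise_for_status()  # surface HTTP errors for logging and alerting
print(response.json())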

Posted 4 days ago

Apply

15.0 years

0 Lacs

Kochi, Kerala, India

On-site

Introduction
Joining the IBM Technology Expert Labs teams means you’ll have a career delivering world-class services for our clients. As the ultimate expert in IBM products, you’ll bring together all the necessary technology and services to help customers solve their most challenging problems. Working in IBM Technology Expert Labs means accelerating the time to value confidently and ensuring speed and insight while our clients focus on what they do best—running and growing their business. Excellent onboarding and an industry-leading learning culture will set you up for a positive impact, while advancing your career. Our culture is collaborative and experiential. As part of a team, you will be surrounded by bright minds and keen co-creators—always willing to help and be helped—as you apply passion to work that will positively impact the world around us.

Your Role And Responsibilities
As a Delivery Consultant, you will work closely with IBM clients and partners to design, deliver, and optimize IBM Technology solutions that align with your clients’ goals. In this role, you will apply your technical expertise to ensure world-class delivery while leveraging your consultative skills, such as problem-solving, issue- and hypothesis-based methodologies, communication, and service orientation. As a member of IBM Technology Expert Labs, a team that is client-focused, courageous, pragmatic, and technical, you’ll collaborate with clients to optimize and trailblaze new solutions that address real business challenges. If you are passionate about success with both your career and solving clients’ business challenges, this role is for you. To help achieve this win-win outcome, a ‘day-in-the-life’ of this opportunity may include, but not be limited to:
Solving Client Challenges Effectively: Understanding clients’ main challenges and developing solutions that help them reach true business value by working through the phases of design, development, integration, implementation, migration, and product support with a sense of urgency.
Agile Planning and Execution: Creating and executing agile plans where you are responsible for installing and provisioning, testing, migrating to production, and day-two operations.
Technical Solution Workshops: Conducting and participating in technical solution workshops.
Building Effective Relationships: Developing successful relationships at all levels—from engineers to CxOs—with experience of navigating challenging debate to reach healthy resolutions.
Self-Motivated Problem Solver: Demonstrating a natural bias towards self-motivation, curiosity, and initiative, in addition to navigating data and people to find answers and present solutions.
Collaboration and Communication: Strong collaboration and communication skills as you work across the client, partner, and IBM team.

Preferred Education
Bachelor's Degree

Required Technical And Professional Expertise
In-depth knowledge of the IBM Data & AI portfolio.
15+ years of experience in software services.
10+ years of experience in the planning, design, and delivery of one or more products from the IBM Data Integration and IBM Data Intelligence product platforms.
Experience in designing and implementing solutions on IBM Cloud Pak for Data, IBM DataStage Nextgen, and Orchestration Pipelines.
10+ years’ experience with ETL and database technologies.
Experience in architectural planning and implementation for the upgrade/migration of these specific products.
Experience in designing and implementing Data Quality solutions.
Experience with installation and administration of these products.
Excellent understanding of cloud concepts and infrastructure.
Excellent verbal and written communication skills are essential.

Preferred Technical And Professional Experience
Experience with any of the DataStage, Informatica, SAS, or Talend products.
Experience with any of IKC, IGC, Axon.
Experience with programming languages like Java/Python.
Experience with AWS, Azure, Google, or IBM cloud platforms.
Experience with Red Hat OpenShift.
Good-to-have knowledge: Apache Spark, shell scripting, GitHub, JIRA.

Posted 4 days ago

Apply

8.0 years

0 Lacs

Kanayannur, Kerala, India

On-site

At EY, we’re all in to shape your future with confidence. We’ll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world.

EY-Consulting - Data and Analytics – Manager - Data Integration Architect – Medidata Platform Integration
EY's Consulting Services is a unique, industry-focused business unit that provides a broad range of integrated services that leverage deep industry experience with strong functional and technical capabilities and product knowledge. EY’s financial services practice provides integrated Consulting services to financial institutions and other capital markets participants, including commercial banks, retail banks, investment banks, broker-dealers & asset management firms, and insurance firms from leading Fortune 500 companies. Within EY’s Consulting Practice, the Data and Analytics team solves big, complex issues and capitalizes on opportunities to deliver better working outcomes that help expand and safeguard businesses, now and in the future. This way we help create a compelling business case for embedding the right analytical practice at the heart of clients’ decision-making.

The opportunity
We’re looking for an experienced Data Integration Architect with 8+ years in clinical or life sciences domains to lead the integration of Medidata platforms into enterprise clinical trial systems. This role offers the chance to design scalable, compliant data integration solutions, collaborate across global R&D systems, and contribute to data-driven innovation in the healthcare and life sciences space. You will play a key role in aligning integration efforts with organizational architecture and compliance standards while engaging with stakeholders to ensure successful project delivery.

Your Key Responsibilities
Design and implement scalable integration solutions for large-scale clinical trial systems involving Medidata platforms.
Ensure integration solutions comply with regulatory standards such as GxP and CSV.
Establish and maintain seamless system-to-system data exchange using middleware platforms (e.g., Apache Kafka, Informatica) or direct API interactions (see the sketch after this listing).
Collaborate with cross-functional business and IT teams to gather integration requirements and translate them into technical specifications.
Align integration strategies with enterprise architecture and data governance frameworks.
Provide support to program management through data analysis, integration status reporting, and risk assessment contributions.
Interface with global stakeholders to ensure smooth integration delivery and resolve technical challenges.
Mentor junior team members and contribute to knowledge sharing and internal learning initiatives.
Participate in architectural reviews and provide recommendations for continuous improvement and innovation in integration approaches.
Support business development efforts by contributing to solution proposals, proof of concepts (POCs), and client presentations.

Skills And Attributes For Success
Use a solution-driven approach to design and implement compliant integration strategies for clinical data platforms like Medidata.
Strong communication, stakeholder engagement, and documentation skills, with experience presenting complex integration concepts clearly.
Proven ability to manage system-to-system data flows using APIs or middleware, ensuring alignment with enterprise architecture and regulatory standards.

To qualify for the role, you must have
Experience: Minimum 8 years in data integration or architecture roles, with a strong preference for experience in clinical research or life sciences domains.
Education: Must be a graduate, preferably BE/B.Tech/BCA/B.Sc IT.
Technical Skills: Hands-on expertise in one or more integration platforms such as Apache Kafka, Informatica, or similar middleware technologies; experience in implementing API-based integrations.
Domain Knowledge: In-depth understanding of clinical trial data workflows, integration strategies, and regulatory frameworks including GxP and CSV compliance.
Soft Skills: Strong analytical thinking, effective communication, and stakeholder management skills with the ability to collaborate across business and technical teams.
Additional Attributes: Ability to work independently in a fast-paced environment, lead integration initiatives, and contribute to solution design and architecture discussions.

Ideally, you’ll also have
Hands-on experience with ETL tools and clinical data pipeline orchestration frameworks.
Familiarity with broader clinical R&D platforms such as Oracle Clinical, RAVE, or other EDC systems.
Prior experience leading small integration teams and working directly with cross-functional stakeholders in regulated environments.

What We Look For
A team of people with commercial acumen, technical experience, and enthusiasm to learn new things in this fast-moving environment.
An opportunity to be part of a market-leading, multi-disciplinary team of 1,400+ professionals, in the only integrated global transaction business worldwide.
Opportunities to work with EY Consulting practices globally with leading businesses across a range of industries.

What working at EY offers
At EY, we’re dedicated to helping our clients, from start-ups to Fortune 500 companies — and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer:
Support, coaching and feedback from some of the most engaging colleagues around.
Opportunities to develop new skills and progress your career.
The freedom and flexibility to handle your role in a way that’s right for you.

About EY
As a global leader in assurance, tax, transaction and Consulting services, we’re using the finance products, expertise and systems we’ve developed to build a better working world. That starts with a culture that believes in giving you the training, opportunities and creative freedom to make things better. Whenever you join, however long you stay, the exceptional EY experience lasts a lifetime. And with a commitment to hiring and developing the most passionate people, we’ll make our ambition to be the best employer by 2020 a reality. If you can confidently demonstrate that you meet the criteria above, please contact us as soon as possible. Join us in building a better working world.
Apply now

EY | Building a better working world
EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets. Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow. EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.
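
As a rough illustration of the system-to-system exchange described above, here is a minimal Python sketch that publishes a clinical-trial event to a Kafka topic. It assumes the kafka-python package; the broker address, topic name, and payload fields are hypothetical.

import json
from kafka import KafkaProducer  # assumption: kafka-python is the chosen client

# Hypothetical broker address and serializer for JSON payloads.
producer = KafkaProducer(
    bootstrap_servers=["broker.example.internal:9092"],
    value_serializer=lambda record: json.dumps(record).encode("utf-8"),
)

# Illustrative payload; a real integration would follow the agreed interface
# contract and mask or strip PII before publishing.
record = {"study_id": "STUDY-001", "site_id": "SITE-42", "status": "ENROLLED"}

producer.send("clinical-trial-events", value=record)  # hypothetical topic name
producer.flush()  # block until the broker has acknowledged the message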

Posted 4 days ago

Apply

0 years

0 Lacs

Andhra Pradesh, India

On-site

Design and develop robust ETL pipelines using Python, PySpark, and GCP services.
Build and optimize data models and queries in BigQuery for analytics and reporting.
Ingest, transform, and load structured and semi-structured data from various sources.
Collaborate with data analysts, scientists, and business teams to understand data requirements.
Ensure data quality, integrity, and security across cloud-based data platforms.
Monitor and troubleshoot data workflows and performance issues.
Automate data validation and transformation processes using scripting and orchestration tools.

Required Skills & Qualifications
Hands-on experience with Google Cloud Platform (GCP), especially BigQuery.
Strong programming skills in Python and/or PySpark.
Experience in designing and implementing ETL workflows and data pipelines.
Proficiency in SQL and data modeling for analytics.
Familiarity with GCP services such as Cloud Storage, Dataflow, Pub/Sub, and Composer.
Understanding of data governance, security, and compliance in cloud environments.
Experience with version control (Git) and agile development practices.
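
For illustration, a minimal PySpark-to-BigQuery sketch of the kind of pipeline this role describes. It assumes the GCS connector and the spark-bigquery connector are configured on the cluster; the bucket, dataset, table, and column names are hypothetical.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Read semi-structured source data from Cloud Storage (hypothetical path).
orders = spark.read.json("gs://example-raw-bucket/orders/*.json")

# Light transformation: drop malformed rows and derive a partition-friendly date column.
clean = (
    orders.dropna(subset=["order_id", "order_ts"])
          .withColumn("order_date", F.to_date("order_ts"))
)

# Write to BigQuery via the spark-bigquery connector (names are illustrative).
(clean.write.format("bigquery")
      .option("table", "example_dataset.orders_clean")
      .option("temporaryGcsBucket", "example-temp-bucket")
      .mode("append")
      .save())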

Posted 4 days ago

Apply

8.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

At EY, we’re all in to shape your future with confidence. We’ll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world.

EY-Consulting - Data and Analytics – Manager - Data Integration Architect – Medidata Platform Integration
EY's Consulting Services is a unique, industry-focused business unit that provides a broad range of integrated services that leverage deep industry experience with strong functional and technical capabilities and product knowledge. EY’s financial services practice provides integrated Consulting services to financial institutions and other capital markets participants, including commercial banks, retail banks, investment banks, broker-dealers & asset management firms, and insurance firms from leading Fortune 500 companies. Within EY’s Consulting Practice, the Data and Analytics team solves big, complex issues and capitalizes on opportunities to deliver better working outcomes that help expand and safeguard businesses, now and in the future. This way we help create a compelling business case for embedding the right analytical practice at the heart of clients’ decision-making.

The opportunity
We’re looking for an experienced Data Integration Architect with 8+ years in clinical or life sciences domains to lead the integration of Medidata platforms into enterprise clinical trial systems. This role offers the chance to design scalable, compliant data integration solutions, collaborate across global R&D systems, and contribute to data-driven innovation in the healthcare and life sciences space. You will play a key role in aligning integration efforts with organizational architecture and compliance standards while engaging with stakeholders to ensure successful project delivery.

Your Key Responsibilities
Design and implement scalable integration solutions for large-scale clinical trial systems involving Medidata platforms.
Ensure integration solutions comply with regulatory standards such as GxP and CSV.
Establish and maintain seamless system-to-system data exchange using middleware platforms (e.g., Apache Kafka, Informatica) or direct API interactions.
Collaborate with cross-functional business and IT teams to gather integration requirements and translate them into technical specifications.
Align integration strategies with enterprise architecture and data governance frameworks.
Provide support to program management through data analysis, integration status reporting, and risk assessment contributions.
Interface with global stakeholders to ensure smooth integration delivery and resolve technical challenges.
Mentor junior team members and contribute to knowledge sharing and internal learning initiatives.
Participate in architectural reviews and provide recommendations for continuous improvement and innovation in integration approaches.
Support business development efforts by contributing to solution proposals, proof of concepts (POCs), and client presentations.

Skills And Attributes For Success
Use a solution-driven approach to design and implement compliant integration strategies for clinical data platforms like Medidata.
Strong communication, stakeholder engagement, and documentation skills, with experience presenting complex integration concepts clearly.
Proven ability to manage system-to-system data flows using APIs or middleware, ensuring alignment with enterprise architecture and regulatory standards.

To qualify for the role, you must have
Experience: Minimum 8 years in data integration or architecture roles, with a strong preference for experience in clinical research or life sciences domains.
Education: Must be a graduate, preferably BE/B.Tech/BCA/B.Sc IT.
Technical Skills: Hands-on expertise in one or more integration platforms such as Apache Kafka, Informatica, or similar middleware technologies; experience in implementing API-based integrations.
Domain Knowledge: In-depth understanding of clinical trial data workflows, integration strategies, and regulatory frameworks including GxP and CSV compliance.
Soft Skills: Strong analytical thinking, effective communication, and stakeholder management skills with the ability to collaborate across business and technical teams.
Additional Attributes: Ability to work independently in a fast-paced environment, lead integration initiatives, and contribute to solution design and architecture discussions.

Ideally, you’ll also have
Hands-on experience with ETL tools and clinical data pipeline orchestration frameworks.
Familiarity with broader clinical R&D platforms such as Oracle Clinical, RAVE, or other EDC systems.
Prior experience leading small integration teams and working directly with cross-functional stakeholders in regulated environments.

What We Look For
A team of people with commercial acumen, technical experience, and enthusiasm to learn new things in this fast-moving environment.
An opportunity to be part of a market-leading, multi-disciplinary team of 1,400+ professionals, in the only integrated global transaction business worldwide.
Opportunities to work with EY Consulting practices globally with leading businesses across a range of industries.

What working at EY offers
At EY, we’re dedicated to helping our clients, from start-ups to Fortune 500 companies — and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer:
Support, coaching and feedback from some of the most engaging colleagues around.
Opportunities to develop new skills and progress your career.
The freedom and flexibility to handle your role in a way that’s right for you.

About EY
As a global leader in assurance, tax, transaction and Consulting services, we’re using the finance products, expertise and systems we’ve developed to build a better working world. That starts with a culture that believes in giving you the training, opportunities and creative freedom to make things better. Whenever you join, however long you stay, the exceptional EY experience lasts a lifetime. And with a commitment to hiring and developing the most passionate people, we’ll make our ambition to be the best employer by 2020 a reality. If you can confidently demonstrate that you meet the criteria above, please contact us as soon as possible. Join us in building a better working world.
Apply now

EY | Building a better working world
EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets. Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow. EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.

Posted 4 days ago

Apply

2.0 years

0 Lacs

Andhra Pradesh, India

On-site

We are seeking an experienced and innovative Generative AI Developer to join our AWAC team. In this role, you will lead the design and development of GenAI and Agentic AI applications using state-of-the-art LLMs and AWS-native services. You will work on both R&D-focused proof-of-concepts and production-grade implementations, collaborating with cross-functional teams to bring intelligent, scalable solutions to life.

Key Responsibilities
Design, develop, and deploy Generative AI and Agentic AI applications using LLMs such as Claude, Cohere, Titan, and others.
Lead the development of proof-of-concept (PoC) solutions to explore new use cases and validate AI-driven innovations.
Architect and implement retrieval-augmented generation (RAG) pipelines using LangChain and vector databases like OpenSearch.
Integrate with AWS services including the Bedrock API, SageMaker, SageMaker JumpStart, Lambda, EKS/ECS, Amazon Connect, and Amazon Q.
Apply few-shot, one-shot, and zero-shot learning techniques to fine-tune and prompt LLMs effectively.
Collaborate with data scientists, ML engineers, and business stakeholders to translate complex requirements into scalable AI solutions.
Implement CI/CD pipelines and infrastructure as code using Terraform, and follow DevOps best practices.
Optimize performance, cost, and reliability of AI applications in production environments.
Document architecture, workflows, and best practices to support knowledge sharing and onboarding.

Required Skills & Technologies
Experience in Python development, with at least 2 years in AI/ML or GenAI projects.
Strong hands-on experience with LLMs and Generative AI frameworks.
Proficiency in LangChain, vector DBs (e.g., OpenSearch), and prompt engineering.
Deep understanding of the AWS AI/ML ecosystem: Bedrock, SageMaker, Lambda, EKS/ECS.
Experience with serverless architectures, containerization, and cloud-native development.
Familiarity with DevOps tools: Git, CI/CD, Terraform.
Strong debugging, performance tuning, and problem-solving skills.

Preferred Qualifications
Experience with Amazon Q, Amazon Connect, or Amazon Titan.
Familiarity with Claude, Cohere, or other foundation models.
Bachelor's or Master's degree in Computer Science, AI/ML, or a related field.
Experience in building agentic workflows and multi-agent orchestration is a plus.
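
A rough sketch of the retrieval-augmented generation flow this listing describes, using boto3's Bedrock runtime for generation and a placeholder retrieval step. The region, model ID, request schema, and the retrieve() stub are assumptions; a real pipeline would query a vector store such as OpenSearch for relevant context.

import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")  # region is an assumption

def retrieve(question: str) -> list[str]:
    # Placeholder: a real pipeline would embed the question and fetch the
    # top-k most similar chunks from a vector index (e.g., OpenSearch).
    return ["Policy documents are retained for seven years."]

def answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = f"Use only this context to answer.\n\nContext:\n{context}\n\nQuestion: {question}"
    body = {
        "anthropic_version": "bedrock-2023-05-31",  # assumption: Anthropic messages schema
        "max_tokens": 300,
        "messages": [{"role": "user", "content": prompt}],
    }
    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # illustrative model ID
        body=json.dumps(body),
    )
    payload = json.loads(response["body"].read())
    return payload["content"][0]["text"]

print(answer("How long are policy documents retained?"))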

Posted 4 days ago

Apply

6.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Data Integration Specialist – Senior

The opportunity
We are seeking a talented and experienced Integration Specialist with 3–6 years of experience to join our growing Digital Integration team. The ideal candidate will play a pivotal role in designing, building, and deploying scalable and secure solutions that support business transformation, system integration, and automation initiatives across the enterprise.

Your Key Responsibilities
Work with clients to assess existing integration landscapes and recommend modernization strategies using MuleSoft.
Translate business requirements into technical designs, reusable APIs, and integration patterns.
Develop, deploy, and manage MuleSoft APIs and integrations on Anypoint Platform (CloudHub, Runtime Fabric, Hybrid).
Collaborate with business and IT stakeholders to define integration standards, SLAs, and governance models.
Implement error handling, logging, monitoring, and alerting using Anypoint Monitoring and third-party tools.
Maintain integration artifacts and documentation, including RAML specifications, flow diagrams, and interface contracts.
Ensure performance tuning, scalability, and security best practices are followed across integration solutions.
Support CI/CD pipelines, version control, and DevOps processes for MuleSoft assets using platforms like Azure DevOps or GitLab.
Collaborate with cross-functional teams (Salesforce, SAP, Data, Cloud, etc.) to deliver end-to-end connected solutions.
Stay current with MuleSoft platform capabilities and industry integration trends to recommend improvements and innovations.
Troubleshoot integration issues and perform root cause analysis in production and non-production environments.
Contribute to internal knowledge-sharing, technical mentoring, and process optimization.
Strong SQL, data integration, and data handling skills.
Exposure to AI models and Python, and to using them in data cleaning and standardization.

To qualify for the role, you must have
3–6 years of hands-on experience with MuleSoft Anypoint Platform and Anypoint Studio.
Strong experience with API-led connectivity and reusable API design (System, Process, Experience layers).
Proficiency in DataWeave transformations, flow orchestration, and integration best practices.
Experience with API lifecycle management, including design, development, publishing, governance, and monitoring.
Solid understanding of integration patterns (synchronous, asynchronous, event-driven, batch).
Hands-on experience with security policies, OAuth, JWT, client ID enforcement, and TLS.
Experience working with cloud platforms (Azure, AWS, or GCP) in the context of integration projects.
Knowledge of performance tuning, capacity planning, and error handling in MuleSoft integrations.
Experience in DevOps practices, including CI/CD pipelines, Git branching strategies, and automated deployments.
Experience with data intelligence cloud platforms such as Snowflake, Azure, and Databricks.

Ideally, you’ll also have
MuleSoft Certified Developer or Integration Architect certification.
Exposure to monitoring and logging tools (e.g., Splunk, Elastic, Anypoint Monitoring).
Strong communication and interpersonal skills to work with technical and non-technical stakeholders.
Ability to document integration requirements, user stories, and API contracts clearly and concisely.
Experience in agile environments and comfort working across multiple concurrent projects.
Ability to mentor junior developers and contribute to reusable component libraries and coding standards.

What Working At EY Offers
At EY, we’re dedicated to helping our clients, from start-ups to Fortune 500 companies — and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer:
Support, coaching and feedback from some of the most engaging colleagues around.
Opportunities to develop new skills and progress your career.
The freedom and flexibility to handle your role in a way that’s right for you.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 4 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies