
42 CodePipeline Jobs

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 - 5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Operations
Management Level: Associate

Job Description & Summary: At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
Job Description & Summary: A career within our Cloud & Infrastructure team will provide you with the opportunity to help clients design, build, and operate modern, scalable, and secure cloud platforms. As an AWS DevOps / SRE Engineer, you'll be part of a collaborative team responsible for enabling infrastructure automation, operational reliability, and cloud best practices, while supporting mission-critical workloads across AWS environments.

Responsibilities:
- Design, implement, and manage scalable, secure, and highly available infrastructure on AWS.
- Build and maintain CI/CD pipelines using tools such as CodePipeline, GitHub Actions, Jenkins, or similar.
- Implement Infrastructure as Code (IaC) using Terraform or CloudFormation.
- Monitor system performance, uptime, and reliability using CloudWatch and other observability tools.
- Handle incident response, troubleshooting, and root cause analysis across cloud environments.
- Collaborate with cross-functional teams including developers, architects, and security engineers.
- Support ITSM workflows through systems like ServiceNow or Jira for incident and change management.
- Follow cloud security and networking best practices across VPC, subnets, SGs, NACLs, firewalls, DNS, and load balancers.
- Contribute to automation of operational tasks using scripting languages such as Bash, PowerShell, or Python.
- Stay current with emerging cloud technologies relevant to DevOps and platform engineering.

Mandatory skill sets:
- Minimum 3 years of IT experience, with at least 3+ years of hands-on experience in AWS cloud operations.
- Strong expertise in AWS core services (EC2, S3, RDS, IAM, VPC, Lambda, CloudWatch, etc.).
- Proficiency in scripting and automation (Bash, PowerShell, Python).
- Solid experience in Infrastructure as Code (Terraform preferred).
- Sound knowledge of networking and security fundamentals in cloud environments.
- Experience with DevOps tools and CI/CD processes.
- Experience with ITSM platforms like ServiceNow or Jira.
- Excellent troubleshooting, analytical, and communication skills.

Preferred skill sets:
- Familiarity with Azure or GCP cloud platforms.
- Certifications in AWS or Azure (e.g., SysOps Admin, Solutions Architect, Azure Admin).
- Knowledge of containerization tools (Docker, ECS, EKS).

Years of experience required: 3 to 4 years (Associate level)
Education qualification: Bachelor's degree in Computer Science, Information Technology, or a related field.
Degrees/Field of Study required: Bachelor of Engineering, MBA (Master of Business Administration)
Required Skills: AWS DevOps
Optional Skills: Accepting Feedback, Active Listening, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis, Intellectual Curiosity, Java (Programming Language), Market Development + 11 more
Desired Languages: Not Specified
Travel Requirements: Not Specified
Available for Work Visa Sponsorship: No
Government Clearance Required: No
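The monitoring and scripting duties above (CloudWatch observability plus Python automation) can be sketched briefly. The snippet below builds the parameter set for a CloudWatch CPU alarm; the alarm name, instance ID, and SNS topic are hypothetical placeholders, not details from the posting, and the live boto3 call is shown only in a comment since it needs AWS credentials.

```python
def cpu_alarm_params(instance_id, threshold=80.0, sns_topic_arn=None):
    """Build keyword arguments for CloudWatch's put_metric_alarm call."""
    params = {
        "AlarmName": f"high-cpu-{instance_id}",
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Statistic": "Average",
        "Period": 300,           # 5-minute datapoints
        "EvaluationPeriods": 3,  # alarm only after 15 minutes above threshold
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanThreshold",
    }
    if sns_topic_arn:
        params["AlarmActions"] = [sns_topic_arn]
    return params

# Live usage (requires AWS credentials):
#   import boto3
#   boto3.client("cloudwatch").put_metric_alarm(**cpu_alarm_params("i-0123456789abcdef0"))
```

Keeping the parameter construction separate from the API call makes the alarm definition unit-testable without touching AWS.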

Posted 16 hours ago

Apply

4.0 - 6.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Role Summary: We are seeking a highly skilled and customer-focused AWS Senior Solutions Engineer & Technical Account Manager (TAM) to join our AWS Technical Support team. This hybrid role blends deep technical expertise with strategic client engagement, enabling the successful candidate to lead complex cloud projects, provide advanced support, and act as a trusted advisor to enterprise customers. You will be responsible for delivering high-quality AWS solutions, managing technical relationships, and ensuring customer success through proactive guidance, architectural best practices, and operational excellence.

Key Duties and Responsibilities: Working in close collaboration with the Global Support Manager and wider in-country AWS teams, the core responsibilities of the role include, but are not limited to, the following:

Technical Leadership & Project Delivery
- Lead the design, implementation, and optimization of AWS-based solutions across compute, storage, networking, and application services.
- Deliver Infrastructure as Code (IaC) using tools such as Terraform, CloudFormation, and the Serverless Framework.
- Conduct AWS Well-Architected Framework Reviews (WAFR) and Optimization and Licensing Assessments (OLA), and support Migration Acceleration Program (MAP) engagements.
- Ensure successful handover to operations through documentation, runbooks, and knowledge transfer.

Customer Engagement & Account Management
- Serve as the primary technical point of contact for assigned AWS customers, building strong relationships with stakeholders.
- Understand customer goals and challenges, and align AWS solutions to drive business outcomes.
- Provide proactive guidance on architecture, cost optimization, security, and operational best practices.
- Lead customer workshops, technical reviews, and roadmap planning sessions.

Advanced Support & Troubleshooting
- Provide expert-level support for AWS services including EC2, S3, RDS, Lambda, VPC, IAM, and CloudWatch.
- Troubleshoot complex infrastructure and application issues, ensuring minimal downtime and rapid resolution.
- Participate in on-call rotations and manage escalations for critical incidents.
- Conduct root cause analysis and implement long-term solutions to recurring issues.

Pre-Sales & Solution Scoping
- Support pre-sales activities including scoping calls, workshops, and Statement of Work (SoW) development.
- Identify opportunities for service expansion and collaborate with sales and delivery teams.
- Contribute to the development of AWS service offerings and go-to-market strategies.

Mentorship & Continuous Improvement
- Mentor junior engineers and consultants, fostering a culture of learning and technical excellence.
- Stay current with AWS innovations and share knowledge across the team.
- Maintain and grow AWS certifications and contribute to internal best practices.

Qualifications and Experience

Certifications
- AWS Certified Solutions Architect Professional (required)
- AWS Certified DevOps Engineer / SysOps Administrator (preferred)
- Additional AWS Specialty certifications (e.g., Security, Networking, ML) are a plus

Technical Skills
- 4+ years in AWS-focused engineering, consulting, or support roles
- Strong experience with: Compute (EC2, Lambda, ECS, EKS); Networking (VPC, Transit Gateway, Direct Connect); Storage (S3, EBS, Glacier); Databases (RDS, DynamoDB, Redshift); Monitoring (CloudWatch, Prometheus, Datadog); CI/CD (CodePipeline, Jenkins, GitHub Actions); Security and IAM best practices; GenAI and ML services on AWS

Soft Skills
- Excellent communication and stakeholder management skills
- Strong analytical and problem-solving abilities
- Customer-first mindset with a proactive approach to issue resolution
- Ability to lead cross-functional teams and manage technical risks
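The expert-level EC2 troubleshooting described above often starts with triaging status checks across a fleet. Here is a minimal sketch: a pure function that splits `describe_instance_status` results into healthy and impaired instances, with the live boto3 call shown only in a comment. The function name and the two-way split are illustrative choices, not part of any AWS API.

```python
def triage_instance_statuses(statuses):
    """Split describe_instance_status entries into healthy and impaired instance IDs.

    An instance is healthy only if both the system status check (AWS host level)
    and the instance status check (guest OS level) report "ok".
    """
    healthy, impaired = [], []
    for s in statuses:
        ok = (s["SystemStatus"]["Status"] == "ok"
              and s["InstanceStatus"]["Status"] == "ok")
        (healthy if ok else impaired).append(s["InstanceId"])
    return healthy, impaired

# Live usage (requires AWS credentials):
#   import boto3
#   page = boto3.client("ec2").describe_instance_status(IncludeAllInstances=True)
#   healthy, impaired = triage_instance_statuses(page["InstanceStatuses"])
```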

Posted 1 day ago

Apply

2.0 - 5.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Inviting applications for the role of Senior Principal Consultant - ML Engineer! In this role, you will lead the automation and orchestration of our machine learning infrastructure and CI/CD pipelines on public cloud (preferably AWS). This role is essential for enabling scalable, secure, and reproducible deployments of both classical AI/ML models and Generative AI solutions in production environments.

Responsibilities (not limited to):
- Develop and maintain CI/CD pipelines for AI/GenAI models on AWS using GitHub Actions and CodePipeline.
- Automate infrastructure provisioning using IaC (Terraform, Bicep, etc.) on any cloud platform (Azure or AWS).
- Package and deploy AI/GenAI models on SageMaker, Lambda, and API Gateway.
- Write Python scripts for automation, deployment, and monitoring.
- Engage in the design, development and maintenance of data pipelines for various AI use cases.
- Actively contribute to key deliverables as part of an agile development team.
- Set up model monitoring, logging, and alerting (e.g., drift, latency, failures).
- Ensure model governance, versioning, and traceability across environments.
- Collaborate with others to source, analyse, test and deploy data processes.
- Experience in a GenAI project.

Qualifications we seek in you!

Minimum Qualifications:
- Experience with MLOps practices.
- Degree/qualification in Computer Science or a related field, or equivalent work experience.
- Experience developing, testing, and deploying data pipelines.
- Strong Python programming skills.
- Hands-on experience deploying 2-3 AI/GenAI models in AWS.
- Familiarity with LLM APIs (e.g., OpenAI, Bedrock) and vector databases.
- Clear and effective communication skills to interact with team members, stakeholders and end users.

Preferred Qualifications/Skills:
- Experience with Docker-based deployments.
- Exposure to model monitoring tools (Evidently, CloudWatch).
- Familiarity with RAG stacks or fine-tuning LLMs.
- Understanding of GitOps practices.
- Knowledge of governance and compliance policies, standards, and procedures.
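The "package and deploy models on SageMaker, Lambda, and API Gateway" responsibility above typically means a thin Lambda proxy in front of a SageMaker endpoint. A minimal sketch follows; the endpoint name is a hypothetical placeholder, and the `client` parameter is an injection seam I added so the handler can be exercised with a stub instead of a live `sagemaker-runtime` client.

```python
import json

ENDPOINT_NAME = "genai-summarizer"  # hypothetical SageMaker endpoint name


def handler(event, context=None, client=None):
    """API Gateway (proxy) -> Lambda -> SageMaker endpoint, returning the raw prediction."""
    if client is None:  # inside Lambda, fall back to the real runtime client
        import boto3
        client = boto3.client("sagemaker-runtime")
    payload = json.loads(event.get("body") or "{}")
    response = client.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType="application/json",
        Body=json.dumps(payload),
    )
    # invoke_endpoint returns the prediction as a streaming body
    return {"statusCode": 200, "body": response["Body"].read().decode("utf-8")}
```

Injecting the client keeps the handler testable offline, which matters for the reproducible-deployment goal the posting emphasises.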

Posted 5 days ago

Apply

0.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Position Description

Landing Zone:
- Support for Azure Landing Zone issues, including incident and service request handling
- Monitor Azure Management Groups, Policies, RBAC, Blueprints, and cost controls
- Perform compliance checks, remediate policy drifts, ensure tagging and naming standards
- Assist in onboarding new subscriptions and workloads using approved templates
- Implement and monitor Azure Policies, Initiatives, and Security Baselines
- Support automation using PowerShell, Azure CLI, Bicep/ARM templates
- Working knowledge of AWS Control Tower, Organizations, Account Factory
- AWS CLI, CloudFormation, Terraform, AWS Config, CloudWatch, IAM
- Experience managing IAM roles, policies, federated identities
- Basic experience with Git, CodePipeline, Jenkins, or other CI/CD tools
- Familiarity with AWS Cost Explorer, Budgets, and tagging enforcement
- Understanding of incident, change, and problem management practices

Azure Arc:
- Support for Azure Arc-enabled servers, Kubernetes clusters, SQL Servers, and custom locations
- Support onboarding of on-prem or multi-cloud resources into Azure Arc (agent installation, registration)
- Monitor Arc-enabled resource health via Azure Monitor and Log Analytics, and troubleshoot connectivity issues
- Apply and validate Azure Policy, Defender for Cloud recommendations, and compliance baselines to Arc resources
- Use PowerShell, Azure CLI, or automation tools to deploy Arc agents and manage configurations
- Patch Arc-enabled servers via Update Management or integration with third-party tools
- Maintain desired state configurations via Azure Automation or guest configuration policies

Together, as owners, let's turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you'll reach your full potential because you are invited to be an owner from day 1 as we work together to bring our Dream to life.
That's why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company's strategy and direction. Your work creates value. You'll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You'll shape your career by joining a company built to grow and last. You'll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team, one of the largest IT and business consulting services firms in the world.
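The Landing Zone duties above include compliance checks against tagging and naming standards. That check is cloud-agnostic logic, so here is a small sketch that works on resource dictionaries from either Azure Resource Graph or AWS tag listings. The required tag keys are an assumed example standard, not taken from the posting.

```python
# Assumed tagging standard for illustration; real required keys come from policy.
REQUIRED_TAGS = {"owner", "cost-center", "environment"}


def tag_violations(resources, required=REQUIRED_TAGS):
    """Map each non-compliant resource ID to the sorted list of tag keys it is missing.

    Tag keys are compared case-insensitively, since Azure and AWS consoles
    often show mixed-case keys for the same logical tag.
    """
    violations = {}
    for res in resources:
        present = {key.lower() for key in res.get("tags", {})}
        missing = sorted(required - present)
        if missing:
            violations[res["id"]] = missing
    return violations
```

A remediation script would then loop over the returned mapping and either apply default tags or raise incidents, depending on the drift policy.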

Posted 6 days ago

Apply

0.0 - 4.0 years

0 Lacs

Kolkata, West Bengal

On-site

As a Server Systems Administrator, you will be responsible for installing, configuring, and maintaining server hardware and software systems to ensure optimal performance and availability. Your duties will include managing system backups, restorations, and disaster recovery processes, while providing user support and training. You will also develop and manage automation scripts and system tools, overseeing capacity planning to support business servers. Furthermore, you will be expected to integrate essential software, troubleshoot issues across multiple technologies, and provide backup support for business servers. Your role will involve evaluating system requirements, supporting design and development activities, and administering advanced processes. Additionally, you will build and maintain infrastructure to meet business requirements, executing system troubleshooting as needed. Monitoring daily system operations, ensuring optimal server resource availability, and managing Linux server tasks will be essential aspects of your job. You will also assist with virtual machine setup, deployment, configuration, and backups, as well as provide technical support for installation, patching, configuration, and infrastructure updates. It will be your responsibility to maintain and monitor patch releases to ensure systems remain compliant with NISI standards, optimize resource performance, and provide application support to ensure high levels of customer service. To excel in this role, you should have a basic knowledge of system administration principles, standards, and procedures. Experience with AWS services such as CloudFormation, Infrastructure as Code (IaC), OpsWorks, CodeDeploy, CodePipeline, and CodeCommit will be advantageous. Strong familiarity with CLI and API technologies, as well as hands-on experience with Azure and AWS platform development and deployment, is required. 
Expertise in designing, developing, and managing AWS-based solutions is essential, along with the ability to manage proof-of-concepts and evaluate cloud computing models (Public, Private, IaaS, PaaS, SaaS). This is a full-time, permanent position suitable for fresher candidates. The work location is in person.

Posted 6 days ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Data Engineer, you will be responsible for developing and enhancing data-processing, orchestration, monitoring, and more by leveraging popular open-source software, AWS, and GitLab automation. You will collaborate with product and technology teams to design and validate the capabilities of the data platform. Additionally, you will identify, design, and implement process improvements including automating manual processes, optimizing for usability, and re-designing for greater scalability. Providing technical support and usage guidance to the users of our platform's services will also be a key part of your role. You will drive the creation and refinement of metrics, monitoring, and alerting mechanisms to give us the visibility we need into our production services. To qualify for this position, you should have experience building and optimizing data pipelines in a distributed environment, supporting and working with cross-functional teams, and proficiency working in a Linux environment. A minimum of 5 years of advanced working knowledge of SQL, Python, and PySpark is required. Knowledge of Palantir and experience using tools such as Git/Bitbucket, Jenkins/CodeBuild, CodePipeline, and platform monitoring and alerting tools will be beneficial for this role.
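The SQL-plus-Python pipeline work described above boils down to loading raw records and aggregating them for downstream consumers. As a toy stand-in for a distributed warehouse, the sketch below uses Python's built-in sqlite3; the table and column names are invented for illustration.

```python
import sqlite3


def daily_active_users(rows):
    """Load raw (day, user_id) events and count distinct users per day."""
    con = sqlite3.connect(":memory:")  # in-memory DB stands in for the warehouse
    con.execute("CREATE TABLE events (day TEXT, user_id TEXT)")
    con.executemany("INSERT INTO events VALUES (?, ?)", rows)
    cur = con.execute(
        "SELECT day, COUNT(DISTINCT user_id) FROM events GROUP BY day ORDER BY day"
    )
    return cur.fetchall()


print(daily_active_users([("2024-01-01", "a"), ("2024-01-01", "a"), ("2024-01-02", "b")]))
# → [('2024-01-01', 1), ('2024-01-02', 1)]
```

The same `GROUP BY` / `COUNT(DISTINCT ...)` shape translates directly to PySpark (`groupBy` plus `countDistinct`) when the data no longer fits one machine.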

Posted 1 week ago

Apply

5.0 - 8.0 years

14 - 16 Lacs

Bengaluru, Karnataka, India

On-site

We are looking for a highly motivated SDET to join our team and ensure the quality and reliability of applications that incorporate AI/ML components. This role demands strong skills in automated testing using scripting languages like Python and TypeScript, and hands-on experience with cloud infrastructure (preferably AWS). Experience with solutions built using AI/ML/GenAI is a key advantage.

Duties & Responsibilities
- Design and implement automated test suites for web and backend systems using any scripting language, preferably Python or TypeScript.
- Develop AI/ML solutions using Python at an expert level.
- Work closely with development, ML, and DevOps teams to test features powered by AI/ML models.
- Validate ML model predictions, data inputs/outputs, and model integration with application logic.
- Build and maintain test infrastructure on AWS, including test environments, CI/CD pipelines, and test execution in cloud-based containers.
- Participate in code reviews and pair programming for quality gatekeeping.

Skills Required
- 5-8 years of experience in a Development, QE Automation, or SDET role.
- Strong hands-on experience using AI/ML models and Python or TypeScript for test automation.
- Strong experience in prompt engineering.
- Experience working in AWS environments, including EC2, S3, Lambda, or CloudWatch.
- Experience working with AWS Bedrock and Claude LLMs or SLMs.
- Familiarity with AI/ML model integration and understanding of the basics of model behavior, metrics, and quality concerns.
- Experience with REST API testing, JSON, and HTTP protocols.
- Working knowledge of CI/CD pipelines (e.g., GitHub Actions, Jenkins, CodePipeline).
- Understanding of software testing principles and test-driven development.
- Excellent communication and collaboration skills.
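Validating ML model predictions through a REST API, as the duties above describe, usually means asserting on the shape and ranges of the JSON response rather than on exact values. A minimal sketch, assuming a hypothetical `{"label": ..., "score": ...}` response schema:

```python
def validate_prediction(resp, labels=("positive", "negative", "neutral")):
    """Return a list of schema violations for a model-serving JSON response.

    An empty list means the response passes: a known label and a score in [0, 1].
    """
    errors = []
    if resp.get("label") not in labels:
        errors.append(f"unexpected label: {resp.get('label')!r}")
    score = resp.get("score")
    if not isinstance(score, (int, float)) or not 0.0 <= score <= 1.0:
        errors.append(f"score out of range: {score!r}")
    return errors
```

In a pytest suite this would sit behind a parametrized test that calls the deployed endpoint and asserts `validate_prediction(response.json()) == []`, keeping the schema rules in one place.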

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a skilled professional, you have hands-on experience in developing and maintaining Java-based applications. You are proficient in application development using Java and AWS microservices, with a solid understanding of microservices architecture. Your expertise includes working with AWS services such as ECS, ELB, S3, CloudWatch, App Mesh, AWS CodeBuild, and CodePipeline. Your excellent written and verbal communication skills enable you to effectively collaborate with team members and stakeholders. Additionally, you have experience in SAFe agile development practices, ensuring efficient and high-quality project delivery.

Posted 1 week ago

Apply

4.0 - 8.0 years

0 - 0 Lacs

Maharashtra

On-site

The candidate should have hands-on experience in developing and managing AWS infrastructure and DevOps setup, along with the ability to delegate and distribute tasks effectively within the team. Responsibilities include deploying, automating, maintaining, and managing an AWS production system, ensuring reliability, security, and scalability. The role involves resolving problems across multiple application domains and platforms using system troubleshooting and problem-solving techniques. Additionally, the candidate will be responsible for automating different operational processes by designing, maintaining, and managing tools, providing primary operational support and engineering for cloud issues and application deployments, and leading the organization's platform security efforts by collaborating with the engineering team. They will also need to maintain and improve existing policies, standards, and guidelines for IaC and CI/CD that teams can follow, work closely with the Product Owner and development teams for continuous improvement, and analyze and troubleshoot infrastructure issues while developing tools and systems for task automation. The required tech stack for handling the daily operational workload and managing the team includes Cloud: AWS, with services such as CloudFront, S3, EMR, VPC, VPN, EKS, EC2, CloudWatch, Kinesis, Redshift, Organizations, IAM, Lambda, CodeCommit, CodeBuild, ECR, CodePipeline, Secrets Manager, SNS, and Route53, as well as DevOps tools like SonarQube, FluxCD, Terraform, Prisma Cloud, Kong, and Site24x7. This position may require occasional travel to support cloud initiatives and attend conferences or training sessions. The role typically involves working various shifts to support customers in a 24/7 roster-based model within an office environment.
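Cost and lifecycle management of S3 buckets is a recurring task in the kind of AWS production operations this posting describes. The sketch below builds a lifecycle rule that tiers objects down and eventually expires them; the transition days and the bucket in the commented call are assumed examples, not values from the posting.

```python
def lifecycle_rule(prefix, ia_days=30, glacier_days=90, expire_days=365):
    """Build one S3 lifecycle rule: Standard -> Standard-IA -> Glacier -> expire."""
    return {
        "ID": f"tiering-{prefix.strip('/') or 'all'}",
        "Filter": {"Prefix": prefix},
        "Status": "Enabled",
        "Transitions": [
            {"Days": ia_days, "StorageClass": "STANDARD_IA"},
            {"Days": glacier_days, "StorageClass": "GLACIER"},
        ],
        "Expiration": {"Days": expire_days},
    }

# Live usage (requires AWS credentials; bucket name is a placeholder):
#   import boto3
#   boto3.client("s3").put_bucket_lifecycle_configuration(
#       Bucket="example-logs-bucket",
#       LifecycleConfiguration={"Rules": [lifecycle_rule("logs/")]},
#   )
```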

Posted 2 weeks ago

Apply

12.0 - 16.0 years

0 Lacs

Kochi, Kerala

On-site

As a Technical Lead at Cavli Wireless, you will be responsible for leading the design, development, and deployment of scalable cloud-based solutions. You will collaborate with cross-functional teams to ensure the seamless integration of cloud technologies in support of our IoT products and services. Your key responsibilities will include spearheading the design and implementation of cloud infrastructure and application architectures, ensuring they are scalable, secure, and highly available. In this role, you will provide technical leadership by offering guidance and mentorship to development teams, fostering a culture of continuous improvement and adherence to best practices. You will conduct thorough code reviews, debugging sessions, and knowledge-sharing initiatives to maintain high-quality code standards. Additionally, you will collaborate with stakeholders to gather and translate business requirements into technical specifications and actionable tasks. As a Technical Lead, you will define project scope, timelines, and deliverables in coordination with stakeholders to ensure alignment with business objectives. You will advocate for and implement industry best practices in coding, testing, and deployment processes while utilizing code versioning tools like GitHub to manage and track code changes effectively. The ideal candidate for this role should possess expertise in Angular framework for frontend technologies and proficiency in Node.js for backend technologies. Strong command over TypeScript and JavaScript, along with a working knowledge of Python, is required. Extensive experience with AWS services such as EC2, Lambda, S3, RDS, VPC, IoT Core, API Gateway, DynamoDB, and proficiency in using DevOps tools like CodePipeline, CodeDeploy, and CloudFormation are essential. A Bachelor's degree in Computer Science, Information Technology, or a related field (B.Tech/MCA) is required for this position. 
As a leader, you are expected to lead by example, demonstrating technical excellence and a proactive approach to problem-solving. You will mentor junior developers, provide guidance on technical challenges and career development, and foster a collaborative and inclusive team environment by encouraging open communication and knowledge sharing.

Posted 2 weeks ago

Apply

10.0 - 12.0 years

28 Lacs

Hyderabad, Telangana, India

On-site

Job Description: Lead the development team to deliver on-budget, high-value complex projects. Drive the technical direction of a team, project or product area. Take technical responsibility for all stages and/or iterations in a software development project, providing method-specific technical advice to project stakeholders. Specify and ensure that the design and development of technology solutions properly fulfills all our requirements, achieves desired objectives and fulfills return-on-investment goals. Lead the development team to ensure disciplines are followed, project schedules, risks and issues are managed, and project stakeholders receive regular communications. Establish a successful team culture, helping team members grow their skillsets and careers. Actively contribute to the team's productivity by engaging in hands-on coding and delivering high-quality, maintainable code as part of ongoing development efforts. Qualifications: You should have 10+ years of working experience in a software development environment, with the last 5 years in a team leader position. Experience with cloud development on the Amazon Web Services (AWS) platform with services including Lambda, EC2, S3, Glue, Kubernetes, Fargate, AWS Batch and Aurora DB. Ability to comprehend and implement detailed project specifications, adapt to multiple technologies, and work on multiple projects simultaneously. Proficiency in developing mobile applications for both Android and iOS platforms using Kotlin and Swift. Proficiency in Java full-stack development, including the Spring Boot framework and Kafka. Experience with Continuous Integration/Continuous Delivery (CI/CD) processes and practices (CodeCommit, CodeDeploy, CodePipeline/Harness/Jenkins/GitHub Actions, CLI, Bitbucket/Git, etc.). Ability to mentor and motivate team members. You will be reporting to a Director. Additional Information: Our uniqueness is that we truly celebrate yours. Experian's culture and people are important differentiators.
We take our people agenda very seriously and focus on what truly matters: DEI, work/life balance, development, authenticity, engagement, collaboration, wellness, reward & recognition, volunteering... the list goes on. Experian's strong people-first approach is award-winning: Great Place To Work in 24 countries, FORTUNE Best Companies to Work For, and Glassdoor Best Places to Work (globally, 4.4 stars), to name a few. Check out Experian Life on social or our Careers Site to understand why. Experian is proud to be an Equal Opportunity and Affirmative Action employer. Innovation is a critical part of Experian's DNA and practices, and our diverse workforce drives our success. Everyone can succeed at Experian and bring their whole self to work, irrespective of their gender, ethnicity, religion, color, sexuality, physical ability or age. If you have a disability or special need that requires accommodation, please let us know at the earliest opportunity. Experian Careers - Creating a better tomorrow together. Benefits: Experian cares for employees' work-life balance, health, safety and wellbeing. In support of this endeavor, we offer best-in-class family well-being benefits, enhanced medical benefits and paid time off. #LI-Onsite

Posted 2 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Senior Data Engineer with over 6 years of experience, you will be responsible for developing and enhancing data-processing, orchestration, monitoring, and more by utilizing popular open-source software, AWS, and GitLab automation. You will collaborate closely with product and technology teams to design and validate the capabilities of the data platform. Your role will involve identifying, designing, and implementing process improvements, such as automating manual processes, optimizing for usability, and re-designing for greater scalability. Additionally, you will provide technical support and usage guidance to the users of our platform services. Your key responsibilities will also include driving the creation and refinement of metrics, monitoring, and alerting mechanisms to provide the necessary visibility into our production services. To excel in this role, you should have experience in building and optimizing data pipelines in a distributed environment, as well as supporting and working with cross-functional teams. Proficiency in working in a Linux environment is essential, along with 4+ years of advanced working knowledge of SQL, Python, and PySpark. Knowledge of Palantir is preferred. Moreover, you should have experience using tools such as Git/Bitbucket, Jenkins/CodeBuild, and CodePipeline. Experience with platform monitoring and alerting tools will be beneficial in fulfilling the requirements of this role. If you are ready to take on this challenging position and meet the qualifications mentioned above, please share your resume with Sunny Tiwari at stiwari@enexusglobal.com. We look forward to potentially having you as part of our team. Sunny Tiwari, 510-925-0380

Posted 2 weeks ago

Apply

0.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Inviting applications for the role of Assistant Manager - Data Engineer - AWS, Python, UI and Web Engineer-Agentic AI! In this role, you%27ll be part of Genpact%27s transformation under GenpactNext , as we lead the shift to Agentic AI Solutions-domain-specific, autonomous systems that redefine how we deliver value to clients. You%27ll help drive the adoption of innovations like the Genpact AP Suite in finance and accounting, with more Agentic AI products set to expand across service lines. Responsibilities . Design and develop scalable RESTful APIs using Node.js and Python , enabling seamless data access and integration across distributed systems. . Build and manage backend services that interact with SQL databases and cloud storage (e.g., Amazon S3 , RDS , Redshift ) to support analytics and application workflows. . Develop serverless applications and automation scripts using AWS Lambda , Python , and Node.js , orchestrated via AWS Step Functions and integrated with AWS services. . Implement Infrastructure as Code ( IaC ) using AWS CloudFormation to provision and manage resources such as EC2 , Lambda , and VPCs for scalable, secure environments. . Refactor and migrate legacy systems to cloud-native architectures on AWS , leveraging services like EC2 , DynamoDB , Lambda , and RDS for improved scalability and maintainability. . Write efficient and reusable SQL queries to support application logic and reporting needs, ensuring high performance through indexing and query optimization. . Monitor and optimize API and backend service performance, using logging, profiling tools, and cloud-native monitoring (e.g., CloudWatch , X-Ray ) to ensure reliability and uptime. . Apply authentication and authorization best practices using IAM roles , API Gateway , and JWTs to secure API endpoints and data access. . Participate in CI/CD workflows using Git , CodePipeline , and CodeDeploy to automate testing, deployment, and version control for Node.js and Python services. . 
- Collaborate in architectural reviews and sprint planning, translating business requirements into scalable, cloud-based backend solutions.
- Ensure robustness and resilience of systems through backup strategies, disaster recovery planning, and failover mechanisms using AWS-native features.
- Lead and contribute to unit testing, integration testing, and code reviews to maintain high code quality and functional integrity across backend services and APIs.

Qualifications we seek in you!
Minimum Qualifications
- Experience in designing and implementing data pipelines, building data applications, and data migration on AWS
- Strong experience implementing data lakes using AWS services such as Glue, Lambda, Step Functions, and Redshift
- Strong experience in Node.js, Python, and SQL
- Proven expertise in AWS services such as S3, Lambda, Glue, EMR, and Redshift
- Advanced programming skills in Python for data processing and automation
- Hands-on experience with Apache Spark for large-scale data processing
- Proficiency in SQL for data querying and transformation
- Strong understanding of security principles and best practices for cloud-based environments
- Experience with monitoring tools and implementing proactive measures to ensure system availability and performance
- Excellent problem-solving skills and the ability to troubleshoot complex issues in a distributed, cloud-based environment
- Strong communication and collaboration skills to work effectively with cross-functional teams

Preferred Qualifications/Skills
- Bachelor's/Master's degree in Computer Science, Electronics, Electrical, or equivalent
- AWS Data Engineering & Cloud certifications
- Experience with multiple data integration technologies and cloud platforms
- Knowledge of Change & Incident Management processes
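As an illustrative sketch of the serverless work this listing describes (a Python Lambda function fronted by API Gateway), the handler below validates a request and returns an API Gateway-style response. The event shape and field names are assumptions for illustration, not part of the posting:

```python
import json

def handler(event, context):
    """Minimal AWS Lambda handler sketch: validates an incoming record
    and returns a response API Gateway can serialize.
    The 'record'/'id' field names are illustrative assumptions."""
    record = event.get("record", {})
    if "id" not in record:
        return {"statusCode": 400,
                "body": json.dumps({"error": "missing id"})}
    # In a real pipeline the record would be written to S3/RDS here,
    # typically as one state in a Step Functions workflow.
    return {"statusCode": 200,
            "body": json.dumps({"processed": record["id"]})}
```

In practice a function like this would be one state in a Step Functions state machine, with CloudWatch capturing its logs and metrics.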

Posted 2 weeks ago

Apply

2.0 - 6.0 years

1 - 4 Lacs

hyderabad, bengaluru, delhi / ncr

Work from Office

Work Timings: 2 PM to 11 PM

Job Description
AWS Services: Lambda, API Gateway, S3, DynamoDB, Step Functions, SQS, AppSync, CloudWatch Logs, X-Ray, EventBridge, Amazon Pinpoint, Cognito, KMS
Infrastructure as Code (IaC): AWS CDK, CodePipeline (planned)
Serverless Architecture & Event-Driven Design
Cloud Monitoring & Observability: CloudWatch Logs, X-Ray, Custom Metrics
Security & Compliance: IAM roles and permissions boundaries, PHI/PII tagging, Cognito, KMS, HIPAA standards, isolation patterns, access control
Cost Optimization: S3 lifecycle policies, serverless tiers, service selection (e.g., Pinpoint vs. SES)
Scalability & Resilience: Auto Scaling, DLQs, retry/backoff, circuit breakers
CI/CD Pipeline Concepts
Documentation & Workflow Design
Cross-Functional Collaboration and AWS Best Practices

Skills: X-Ray, cloud monitoring, AWS, CI/CD pipelines, documentation, security & compliance, cost optimization, AWS services, infrastructure, Infrastructure as Code (IaC), cross-functional collaboration, observability, serverless architecture, APIs, event-driven design, scalability, workflow design, AWS best practices, resilience

Location: Remote - Bengaluru, Hyderabad, Delhi/NCR, Chennai, Pune, Kolkata, Ahmedabad, Mumbai
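The retry/backoff pattern named under Scalability & Resilience can be sketched in a few lines. The function below is a generic illustration (function and parameter names are ours, not from the posting): it retries a flaky call with exponential backoff plus jitter, and after the final failure re-raises so the caller, or SQS, can route the message to a DLQ:

```python
import random
import time

def call_with_backoff(fn, max_attempts=5, base_delay=0.1, sleep=time.sleep):
    """Retry a flaky call with exponential backoff and jitter.
    `sleep` is injectable so tests can skip real waiting."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # exhausted: let the caller (or SQS) dead-letter it
            # delay doubles each attempt, plus random jitter to avoid
            # synchronized retry storms across concurrent consumers
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            sleep(delay)
```

Circuit breakers build on the same idea but stop calling the dependency entirely once a failure threshold is crossed.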

Posted 2 weeks ago

Apply

12.0 - 16.0 years

0 Lacs

karnataka

On-site

The Network Engineering and Planning - Senior Lead plays a critical role in building and leading high-performing teams, ensuring the Telstra network delivers a superior network experience for our consumer and business customers. You combine your extensive telecommunications network expertise with strong people skills to lead and develop your team so they can deliver a great customer experience. Drawing on deep telecommunications network expertise and experience, you provide specialist analysis, design, development, and network support in network-specific technology domains, including developing platform-specific technology roadmaps and making technology investment recommendations. As a Senior Software Engineer, you lead specialist analysis, design, development, and deployment in these domains to deliver products and infrastructure to defined service level standards. This Senior Developer is a critical contributor to ensuring that the Telstra network delivers a superior network experience for our consumer and business customers. The measurable deliverables and outcomes of this role will be the design and delivery of quality engineering solutions for Telstra's Strategic Inventory and autonomous network. This role involves 70% individual contribution and 30% people management. Your role contributes to the business strategy by effectively and efficiently supporting a diverse set of technical projects with specific outcomes via programs of work in technology delivery, as well as providing expert input and recommendations on future-looking technical projects.
Key Accountabilities:
- Design, build and maintain applications/tools to manage and expose domain resources and services.
- Lead the development of data transformation frameworks and toolsets on Network Inventory data models to improve accuracy and data quality.
- Actively keep across changes in TMF standards and API specifications, especially around Resource and Service Inventory best practices.
- Actively participate in design and architecture development to meet business requirements.
- Participate in technology trials and proof-of-concept initiatives for automation and orchestration platforms within Fixed Networks.
- Perform tasks from detailed design to solution development when required.
- Provide a strategic view on deployment and implementation tasks, and set up a development framework.
- Develop software solutions to capture performance monitoring and customer experience monitoring.
- Authentically engage with a multi-stakeholder landscape to translate customer needs into leading-edge software application products, services or systems that meet Telstra's time, cost, and quality standards.
- Collaborate with team members and key stakeholders, seeking support, direction and buy-in to gather deep insights about the challenges and opportunities of our software application technologies and platforms.
- Significantly contribute to continuous improvement initiatives of our systems and processes, to help define "best practice" corporate testing standards and drive improved outcomes (e.g. productivity, customer experience and/or profitability).
- Focus on continuing professional learning and development related to technology and methodology that benefits the individual as well as effectiveness in the role.
- Work using Agile methodologies in line with Telstra's strategic direction for increasing efficiency and maximizing workforce utility.

Key Activities for this role:
- Identify and develop requirements for CRUD functions on the workflow automation platform.
- Design and develop UI and workflows for CRUD functions on Telstra's inventory (physical, logical and service layers).
- Develop workflow process automation and integrate various systems by leveraging BPMN-compliant approaches.
- Develop UI/UX leveraging the React framework.
- Engage various workgroups to gather detailed capability requirements.
- Scope and design complex workflow functions on strategic inventory.
- Coordinate the development and delivery of CRUD functions.
- Develop a high-performing cohort of engineers with network application development skillsets.

Essential Technical Skills:
- Minimum 12 years of overall experience
- Experience in people management
- Experience leading a team of junior developers
- Proficient in Docker
- Proficient with Camunda
- Proficient in microservices design and development
- Proficient with TMF-compliant API design and development (minimum 4 years of hands-on experience)
- gRPC event-triggered automation development
- Experience working with the React framework
- Hands-on experience developing workflow automation processes and capabilities
- Database experience with NoSQL or RDBMS and proficiency writing complex SQL queries
- Can implement CI/CD pipelines using tools such as Bamboo, Git (Bitbucket), CodePipeline, etc.
- Experience with Telstra's New Ways of Working methodology, practices, and mindset: Agile, DevOps, Lean, Human-Centered Design; Jira, Confluence
- Advanced oral and written communication skills that can adapt to different teams, individuals, and personality types, as well as the ability to author, review and evaluate technical and non-technical documentation
- Bachelor's degree (or higher) in Engineering / Computer Science / Information Technology / Telecommunications

Highly desirable skills:
- Experience in React.js or Angular.js
- Advanced programming logic
- Experience in network management tools and systems
- Scripting skills in Python, Java or another programming language

Potential career pathway: Principal Chapter Lead or Principal Architect role
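As a rough illustration of the CRUD-on-inventory work described above, here is a minimal in-memory resource store loosely shaped like a TMF resource inventory entry (an id plus free-form attributes). It is a sketch of the operation set, not a TMF-compliant API; class and method names are our own:

```python
import uuid

class ResourceInventory:
    """Toy in-memory CRUD store for network inventory resources.
    A real implementation would sit behind a REST/gRPC service
    backed by an RDBMS or NoSQL store."""

    def __init__(self):
        self._items = {}

    def create(self, attrs):
        """Create a resource; the server assigns the id."""
        rid = str(uuid.uuid4())
        self._items[rid] = dict(attrs, id=rid)
        return self._items[rid]

    def read(self, rid):
        """Return the resource, or None if it does not exist."""
        return self._items.get(rid)

    def update(self, rid, attrs):
        """Merge new attributes into an existing resource."""
        if rid not in self._items:
            raise KeyError(rid)
        self._items[rid].update(attrs)
        return self._items[rid]

    def delete(self, rid):
        """Remove the resource; return True if it existed."""
        return self._items.pop(rid, None) is not None
```

In the role described, workflows (e.g., Camunda/BPMN processes) would invoke operations like these across the physical, logical and service inventory layers.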

Posted 3 weeks ago

Apply

2.0 - 5.0 years

0 Lacs

kolkata, west bengal, india

On-site

Ready to shape the future of work? At Genpact, we don't just adapt to change, we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to breakthrough solutions, we tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today.

Inviting applications for the role of Principal Consultant - MLOps Engineer! In this role, you will lead the automation and orchestration of our machine learning infrastructure and CI/CD pipelines on public cloud (preferably AWS). This role is essential for enabling scalable, secure, and reproducible deployments of both classical AI/ML models and Generative AI solutions in production environments.

Responsibilities
- Develop and maintain CI/CD pipelines for AI/GenAI models on AWS using GitHub Actions and CodePipeline (not limited to these)
- Automate infrastructure provisioning using IaC (Terraform, Bicep, etc.) on any cloud platform (Azure or AWS)
- Package and deploy AI/GenAI models on SageMaker, Lambda, and API Gateway
- Write Python scripts for automation, deployment, and monitoring
- Engage in the design, development and maintenance of data pipelines for various AI use cases
- Actively contribute to key deliverables as part of an agile development team
- Set up model monitoring, logging, and alerting (e.g., drift, latency, failures)
- Ensure model governance, versioning, and traceability across environments
- Collaborate with others to source, analyse, test and deploy data processes
- Experience in GenAI projects

Qualifications we seek in you!
Minimum Qualifications
- Experience with MLOps practices
- Degree/qualification in Computer Science or a related field, or equivalent work experience
- Experience developing, testing, and deploying data pipelines
- Strong Python programming skills
- Hands-on experience deploying 2-3 AI/GenAI models in AWS
- Familiarity with LLM APIs (e.g., OpenAI, Bedrock) and vector databases
- Clear and effective communication skills to interact with team members, stakeholders and end users

Preferred Qualifications/Skills
- Experience with Docker-based deployments
- Exposure to model monitoring tools (Evidently, CloudWatch)
- Familiarity with RAG stacks or fine-tuning LLMs
- Understanding of GitOps practices
- Knowledge of governance and compliance policies, standards, and procedures

Why join Genpact
- Be a transformation leader: work at the cutting edge of AI, automation, and digital innovation
- Make an impact: drive change for global enterprises and solve business challenges that matter
- Accelerate your career: get hands-on experience, mentorship, and continuous learning opportunities
- Work with the best: join 140,000+ bold thinkers and problem-solvers who push boundaries every day
- Thrive in a values-driven culture: our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress

Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: up. Let's build tomorrow together.
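The model-monitoring responsibility above (drift, latency, failures) can be illustrated with a toy drift check. Real deployments would use PSI or KS tests through tools like Evidently or CloudWatch metrics; the function and threshold below are arbitrary assumptions for illustration:

```python
def mean_shift_drift(baseline, live, threshold=0.5):
    """Flag drift when the live feature mean moves more than
    `threshold` baseline standard deviations from the baseline mean.
    A deliberately simple stand-in for proper statistical tests."""
    n = len(baseline)
    mean_b = sum(baseline) / n
    var_b = sum((x - mean_b) ** 2 for x in baseline) / n
    std_b = var_b ** 0.5 or 1.0  # guard against a constant baseline
    mean_l = sum(live) / len(live)
    return abs(mean_l - mean_b) / std_b > threshold
```

An alert pipeline would run a check like this on a schedule and publish the result as a custom metric that triggers retraining or paging.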
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Posted 3 weeks ago

Apply

0.0 years

0 Lacs

bengaluru, karnataka, india

Remote

Req ID: 337682

NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Cloud Engineer (AWS) to join our team in Bangalore, Karnataka (IN-KA), India (IN).

Job Duties:
- Design and implement AWS infrastructure to support ETL and database migration workloads.
- Set up and optimize AWS Glue, Amazon RDS (PostgreSQL), and related services.
- Ensure secure, scalable, and cost-effective architecture for data migration and processing.
- Collaborate with Data Engineers to ensure smooth integration between ETL tools and AWS infrastructure.
- Automate deployments using AWS CloudFormation/Terraform.
- Integrate AWS workloads with existing Unix systems.
- Implement monitoring, logging, and alerting solutions (CloudWatch, AWS Config).

Minimum Skills Required:
- Proven experience with AWS (Glue, RDS PostgreSQL, S3, IAM, CloudFormation/Terraform).
- Strong Unix/Linux administration skills.
- Experience with networking, IAM policies, and security best practices in AWS.
- Familiarity with ETL pipelines and data migration processes.
- Experience with CI/CD pipelines (CodePipeline, Jenkins, or similar).

About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world.
NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com. Whenever possible, we hire locally to NTT DATA offices or client sites. This ensures we can provide timely and effective support tailored to each client's needs. While many positions offer remote or hybrid work options, these arrangements are subject to change based on client requirements. For employees near an NTT DATA office or client site, in-office attendance may be required for meetings or events, depending on business needs. At NTT DATA, we are committed to staying flexible and meeting the evolving needs of both our clients and employees. NTT DATA recruiters will never ask for payment or banking information and will only use @nttdata.com and @talent.nttdataservices.com email addresses. If you are requested to provide payment or disclose banking information, please submit a contact us form, https://us.nttdata.com/en/contact-us . NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us . This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here. If you'd like more information on your EEO rights under the law, please click here. For Pay Transparency information, please click here.

Posted 3 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

hyderabad, telangana, india

On-site

Years of experience: 6-10 years
Location: Chennai/Coimbatore/Bangalore/Hyderabad

Requirements:
1. Cloud (Mandatory): Proven technical experience with AWS, scripting, and automation
   - Hands-on knowledge of services and implementations such as Landing Zone, Control Tower, Transit Gateway, CloudFront, IAM, VPC, EC2, S3, Lambda, Load Balancers, Auto Scaling, etc.
   - Experience in scripting languages such as Python, Bash, Ruby, Groovy, Java, JavaScript
2. Automation (Mandatory): Hands-on experience with Infrastructure as Code (IaC) automation and Configuration Management tools such as Terraform, CloudFormation, Azure ARM, Bicep, Ansible, Chef, or Puppet
3. CI/CD (Mandatory): Hands-on experience setting up or developing CI/CD pipelines using tools such as (not limited to) Jenkins, AWS CodeCommit, CodeBuild, CodePipeline, CodeDeploy, GitLab CI, Azure DevOps
4. Containers & Orchestration (Good to have): Hands-on experience provisioning and managing containers and orchestration solutions such as Docker & Docker Swarm, Kubernetes (private/public cloud platforms), OpenShift, Helm charts

Certification Expectations:
1. Cloud (Mandatory, any of): AWS Certified SysOps Administrator - Associate, AWS Certified Solutions Architect - Associate, AWS Certified Developer - Associate, or any AWS Professional/Specialty certification(s)
2. Automation (Optional, any of): Red Hat Certified Specialist in Ansible Automation, HashiCorp Certified: Terraform Associate
3. CI/CD (Optional): Certified Jenkins Engineer
4. Containers & Orchestration (Optional, any of): CKA (Certified Kubernetes Administrator), Red Hat Certified Specialist in OpenShift Administration

Responsibilities:
- Lead architecture and design discussions with architects and clients.
- Apply technology best practices and AWS frameworks such as the Well-Architected Framework
- Implement solutions with an emphasis on cloud security, cost optimization, and automation
- Independently handle customer engagements and new deals
- Manage teams and drive results
- Initiate proactive meetings with leads and extended teams to highlight any gaps, delays or other challenges
- Act as a subject matter expert in technology
- Train and mentor the team in functional and technical skills
- Decide and provide adequate help on the career progression of people
- Handle assets development
- Support the application team: work with application development teams to design, implement and, where necessary, automate infrastructure on cloud platforms
- Continuous improvement: certain engagements will require you to support and maintain existing cloud environments with an emphasis on continuously innovating through automation and enhancing stability/availability through monitoring and improving the security posture
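As a small example of the IaC skills listed above, the function below assembles a minimal CloudFormation template for an S3 bucket as a plain dict. The logical ID and the versioning choice are illustrative; in practice the rendered template would be deployed through CodePipeline or the AWS CLI rather than built by hand like this:

```python
def s3_bucket_template(bucket_logical_id, versioned=True):
    """Build a minimal CloudFormation template (as a dict) declaring
    one S3 bucket. Serialize with json.dumps() to get the template
    body you would pass to a CloudFormation deploy step."""
    props = {}
    if versioned:
        # Enable object versioning, a common baseline for data buckets
        props["VersioningConfiguration"] = {"Status": "Enabled"}
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            bucket_logical_id: {
                "Type": "AWS::S3::Bucket",
                "Properties": props,
            }
        },
    }
```

Terraform or CDK would express the same resource declaratively; the point is that infrastructure is data that can be reviewed, versioned, and deployed through a pipeline.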

Posted 4 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

punjab

On-site

As a Software Engineer specializing in ML applications, you will be responsible for leveraging your expertise in Ansible, Python, and Unix to develop and maintain efficient and scalable solutions. Your experience in roles such as Software Engineer, DevOps, SRE, or System Engineer in a cloud environment will be crucial in ensuring the smooth operation of ML applications. Practical working experience in AWS using native tools will be essential for the successful implementation of ML projects. Proficiency in container technology, including Docker and Kubernetes/EKS, will enable you to design robust and resilient ML solutions. Additionally, your programming skills in Go, Python, TypeScript, and Bash will be valuable assets in crafting effective applications. Familiarity with Infrastructure as Code tools such as CloudFormation, CDK, and Terraform will be vital in automating and managing the infrastructure required for ML applications. Experience with CI/CD tools like Jenkins, Bitbucket, Bamboo, CodeBuild, CodePipeline, Docker Registry, and ArgoCD will be necessary for implementing efficient deployment pipelines. An understanding of DevOps and SRE best practices, including logging, monitoring, and security, will ensure that the ML applications you develop are reliable, secure, and performant. Strong communication skills will be essential for effectively conveying your ideas and collaborating with team members to achieve project goals.

Posted 4 weeks ago

Apply

8.0 - 10.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote

Overview

Working at Atlassian
Atlassians can choose where they work: in an office, from home, or a combination of the two. That way, Atlassians have more control over supporting their family, personal goals, and other priorities. We can hire people in any country where we have a legal entity. Interviews and onboarding are conducted virtually, as part of being a distributed-first company. Our office is in Bengaluru, but we offer flexibility for eligible candidates to work remotely across India. Whatever your preference, working from home, an office, or in between, you can choose the place that's best for your work and your lifestyle.

Responsibilities
As a Principal Software Engineer, you will be a technical leader and hands-on contributor, designing and optimizing high-scale, distributed storage systems. You will play a pivotal role in shaping the architecture, performance, and reliability of backend storage solutions that power critical applications at scale. Your primary responsibilities will include designing, implementing, and optimizing backend storage services that support high throughput, low latency, and fault tolerance. You will work closely with senior engineers, architects, and cross-functional teams to drive scalability, availability, and efficiency improvements in large-scale storage solutions. You will also lead technical deep dives, architecture reviews, and root cause analyses to resolve complex production issues related to storage performance, consistency, and durability. As a thought leader, you will drive best practices in distributed system design, security, and cloud cost optimization. You will also mentor senior engineers, contribute to technical roadmaps, and help shape the long-term storage strategy. Your expertise in storage consistency models, data partitioning, indexing, and caching strategies will be instrumental in improving system performance and reliability.
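The data-partitioning expertise this role calls for often rests on consistent hashing. The sketch below is a generic illustration (class name and virtual-node count are our choices, not Atlassian's design): keys map to the nearest node clockwise on a hash ring, so adding or removing a node remaps only a fraction of keys:

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Toy consistent-hash ring for sharding keys across nodes.
    Each node is placed at many virtual points to even out load."""

    def __init__(self, nodes, vnodes=64):
        self._ring = []  # sorted (hash, node) points on the ring
        for node in nodes:
            for v in range(vnodes):
                h = self._hash(f"{node}#{v}")
                bisect.insort(self._ring, (h, node))

    @staticmethod
    def _hash(key):
        # md5 used only for an even, deterministic spread (not security)
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def node_for(self, key):
        """Return the owning node: first ring point at or after the
        key's hash, wrapping around to the start of the ring."""
        h = self._hash(key)
        i = bisect.bisect(self._ring, (h, ""))
        return self._ring[i % len(self._ring)][1]
```

The same idea underlies partition assignment in many distributed stores; replication then places copies on the next distinct nodes around the ring.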
Additionally, you will collaborate with Site Reliability Engineers (SREs) to implement management interfaces, observability and monitoring, ensuring high availability and compliance with industry standards. You will advocate for automation, Infrastructure-as-Code (IaC), DevOps best practices, Kubernetes (EKS), and CI/CD pipelines to enable scalable deployments and operational excellence.

Qualifications

Basic Requirements
- Bachelor's or Master's degree in Computer Science, Software Engineering, or a related technical field
- 8+ years of experience in backend software development, focusing on distributed systems and storage solutions
- 5+ years of experience working with AWS relational database services (RDS and Aurora) or the equivalent in GCP
- Strong expertise in system design, architecture, and scalability for large-scale storage solutions
- Proficiency in at least one major backend programming language (Kotlin, Java, Go, Rust, or Python)
- Experience designing and implementing highly available, fault-tolerant, and cost-efficient storage architectures
- Deep understanding of distributed systems, replication strategies, backup, restore, sharding, and caching
- Knowledge of data security, encryption best practices, and compliance requirements (SOC 2, GDPR, HIPAA)
- Experience leading engineering teams, mentoring senior engineers, and driving technical roadmaps
- Proficiency with observability tools, performance monitoring, and troubleshooting at scale

Core Requirements

Expertise in Large-Scale Storage Systems
- Deep knowledge of AWS relational database services (RDS and Aurora), or the equivalent in GCP, and their performance characteristics
- Strong understanding of storage durability, consistency models, replication, and fault tolerance
- Experience implementing cost-optimized data retention strategies

Distributed Systems & Scalability
- Deep understanding of distributed storage architectures, the CAP theorem, and consistency models
- Expertise in partitioning, sharding, and replication strategies for low-latency, high-throughput storage
- Experience designing and implementing highly available, fault-tolerant distributed systems using consensus algorithms (Raft/Paxos)
- Hands-on experience with Postgres

High-Performance Backend Engineering
- Strong programming skills in Kotlin, Java, Go, Rust, or Python for backend storage development
- Experience building event-driven, microservices-based architectures using gRPC, REST, or WebSockets
- Expertise in data serialization formats (Parquet, Avro, ORC) for optimized storage access
- Experience implementing data compression, deduplication, and indexing strategies to improve storage efficiency

Cloud-Native & Infrastructure Automation
- Strong hands-on experience with cloud storage best practices
- Proficiency in Infrastructure as Code (IaC) using Terraform, AWS CDK, or CloudFormation
- Experience with Kubernetes (EKS), serverless architectures (Lambda, Fargate), and containerized storage workloads
- Expertise in CI/CD automation for storage services, leveraging GitHub Actions, CodePipeline, Jenkins, or ArgoCD

Performance Optimization & Observability
- Experience with benchmarking, profiling, and optimizing storage workloads
- Proficiency in performance monitoring tools (CloudWatch, Prometheus, OpenTelemetry, Grafana) for storage systems
- Strong debugging and troubleshooting skills for latency bottlenecks, memory leaks, and concurrency issues
- Experience designing observability strategies (tracing, metrics, structured logging) for large-scale storage systems

Security, Compliance, and Data Protection
- Deep knowledge of data security, encryption at rest/in transit, and IAM policies in AWS or the equivalent in GCP
- Experience implementing fine-grained access controls (IAM, KMS, STS, VPC security groups) for multi-tenant storage solutions
- Familiarity with compliance frameworks (SOC 2, GDPR, HIPAA, FedRAMP) and best practices for secure data storage
- Expertise in disaster recovery, backup strategies, and multi-region failover solutions

Leadership & Architectural Strategy
- Proven ability to design, document, and drive large-scale storage architectures from concept to production
- Experience leading technical design reviews, architecture discussions, and engineering best practices
- Strong ability to mentor senior and mid-level engineers, fostering growth in distributed storage expertise
- Ability to influence technical roadmaps, long-term vision, and cost optimization strategies for backend storage

Our Perks & Benefits
Atlassian offers a variety of perks and benefits to support you, your family and to help you engage with your local community. Our offerings include health coverage, paid volunteer days, wellness resources, and so much more. Visit go.atlassian.com/perksandbenefits to learn more.

About Atlassian
At Atlassian, we're motivated by a common goal: to unleash the potential of every team. Our software products help teams all over the planet and our solutions are designed for all types of work. Team collaboration through our tools makes what may be impossible alone, possible together. We believe that the unique contributions of all Atlassians create our success. To ensure that our products and culture continue to incorporate everyone's perspectives and experience, we never discriminate based on race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status. All your information will be kept confidential according to EEO guidelines. To provide you the best experience, we can support with accommodations or adjustments at any stage of the recruitment process. Simply inform our Recruitment team during your conversation with them. To learn more about our culture and hiring process, visit go.atlassian.com/crh .

Posted 1 month ago

Apply

4.0 - 8.0 years

0 Lacs

noida, uttar pradesh

On-site

As a Solution Architect in the Pre-Sales department, with 4-6 years of experience in cloud infrastructure deployment, migration, and managed services, your primary responsibility will be to design AWS Cloud Professional Services and AWS Cloud Managed Services solutions tailored to meet customer needs and requirements. You will engage with customers to analyze their requirements, ensuring cost-effective and technically sound solutions are provided. Your role will also involve developing technical and commercial proposals in response to various client inquiries such as Requests for Information (RFI), Requests for Quotation (RFQ), and Requests for Proposal (RFP). Additionally, you will prepare and deliver technical presentations to clients, highlighting the value and capabilities of AWS solutions. Collaborating closely with the sales team, you will work towards supporting their objectives and closing deals that align with business needs. Your ability to apply creative and analytical problem-solving skills to address complex challenges using AWS technology will be crucial. Furthermore, you should possess hands-on experience in planning, designing, and implementing AWS IaaS, PaaS, and SaaS services. Experience in executing end-to-end cloud migrations to AWS, including discovery, assessment, and implementation, is required. You must also be proficient in designing and deploying well-architected landing zones and disaster recovery environments on AWS. Excellent communication skills, both written and verbal, are essential for effectively articulating solutions to technical and non-technical stakeholders. Your organizational, time management, problem-solving, and analytical skills will play a vital role in driving consistent business performance and exceeding targets. 
Desirable skills include intermediate-level experience with AWS services like AppStream, Elastic Beanstalk, ECS, ElastiCache, and more, as well as IT orchestration and automation tools such as Ansible, Puppet, and Chef. Knowledge of Terraform, Azure DevOps, and AWS development services will be advantageous. In this role based in Noida, Uttar Pradesh, India, you will have the opportunity to collaborate with technical and non-technical teams across the organization, ensuring scalable, efficient, and secure solutions are delivered on the AWS platform.
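Designing the disaster recovery environments this role mentions usually involves an automated failover decision of some kind. As a minimal illustrative sketch (not tied to any specific AWS service; the region names and the health probe are assumptions), a priority-ordered region selection might look like:

```python
from typing import Callable, Iterable, Optional

def choose_active_region(
    probe: Callable[[str], bool],
    regions: Iterable[str],
) -> Optional[str]:
    """Return the first region whose health probe succeeds.

    `probe` is any callable returning True when the region's endpoint
    is healthy (e.g. a wrapper around an HTTP health check). Regions
    are tried in priority order, so the primary region should come
    first and DR regions after it.
    """
    for region in regions:
        try:
            if probe(region):
                return region
        except Exception:
            # Treat probe errors (timeouts, DNS failures) as unhealthy.
            continue
    return None  # No healthy region: escalate to the on-call engineer.
```

For example, `choose_active_region(lambda r: r == "eu-west-1", ["ap-south-1", "eu-west-1"])` falls through the unhealthy primary and selects the DR region.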

Posted 1 month ago

Apply

0.0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

At Roche you can show up as yourself, embraced for the unique qualities you bring. Our culture encourages personal expression, open dialogue, and genuine connections, where you are valued, accepted and respected for who you are, allowing you to thrive both personally and professionally. This is how we aim to prevent, stop and cure diseases and ensure everyone has access to healthcare today and for generations to come. Join Roche, where every voice matters. The Position Title: Senior Frontend Developer (ReactJS & AWS) The Opportunity We seek a skilled front-end developer with expertise in ReactJS, AWS, and CI/CD best practices to design, develop, and maintain high-performance, scalable, and reliable web applications using modern cloud-native technologies. Role Description: The person will build and optimize high-performance frontend applications using ReactJS, integrated with AWS services to support the application's requirements. Additionally, the role involves setting up and managing CI/CD pipelines on GitLab, collaborating with our cross-functional, internal teams, and contributing to the overall user experience.
- Design, develop, and maintain high-performance, scalable, and reliable web applications using ReactJS and Next.js
- Collaborate seamlessly with cross-functional teams (product management, design, engineering, data ops) to deliver exceptional user experiences
- Work closely with UX/UI designers to translate design systems, mockups or wireframes into functional, clean, and responsive code
- Implement robust and secure cloud-native architectures, leveraging AWS as our primary platform
- Optimize web applications for peak performance, scalability, and cost-efficiency
- Stay abreast of the latest trends and best practices in cloud-native development and web technologies
- Contribute to the development of reusable components and libraries to streamline development processes

Who You Are

- Programming & Web Development: Proficiency in JavaScript (ES6+), TypeScript, Python, React, Next.js, HTML5, and CSS3 for modern web development
- Cloud & Serverless Expertise: In-depth knowledge of cloud-native technologies, especially AWS, serverless frameworks (AWS Lambda or Google Cloud Functions), and AWS services like CodePipeline, CodeBuild, and CodeDeploy
- Containerization & Microservices: Familiarity with Docker, Kubernetes, serverless computing, and microservices architectures for scalable applications
- API & Database Proficiency: Experience with RESTful APIs, GraphQL, SQL, API design principles, and managing cloud security, including IAM and data encryption
- DevOps Practices: Expertise in CI/CD pipelines, infrastructure as code, and DevOps tools for automation and efficient development workflows
- Problem-Solving & Collaboration: Strong troubleshooting and debugging capabilities combined with excellent interpersonal and communication skills
- Industry Knowledge & Certifications: Experience in regulated industries (e.g., pharmaceuticals, finance) with a focus on data compliance, security, and AWS certifications

In exchange we provide you with

- Development opportunities: Roche is rich in learning resources. We provide constant development opportunities, free language courses & training, the possibility of international assignments, internal position changes and the chance to shape your own career.
- Excellent benefits & flexibility: competitive salary and cafeteria package, annual bonus, Private Medical Services, Employee Assistance Program, All You Can Move Sportpass, coaching/mentoring opportunity, buddy program, team buildings, holiday party. We also ensure flexibility to help you find your balance: home office is a common practice (2 office days/week on average, and we provide fully remote working conditions within Hungary). We create the opportunity for freedom in working, where your corporate and private life coexist in harmony.
- A global, inclusive community where we learn from each other. At Roche, we cooperate, debate, make decisions, celebrate successes and have fun as a team. That's what makes us Roche.

Please read the Data Privacy Notice for further information about how we handle your personal data related to the recruitment process: https://go.roche.com/dpn4candidates

Who we are

A healthier future drives us to innovate. Together, more than 100,000 employees across the globe are dedicated to advancing science, ensuring everyone has access to healthcare today and for generations to come. Our efforts result in more than 26 million people treated with our medicines and over 30 billion tests conducted using our Diagnostics products. We empower each other to explore new possibilities, foster creativity, and keep our ambitions high, so we can deliver life-changing healthcare solutions that make a global impact. Let's build a healthier future, together. Roche is an Equal Opportunity Employer.
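The serverless side of a role like this often starts with a small Lambda-style handler behind API Gateway. A minimal sketch in Python (the route and response shape are illustrative assumptions, not Roche's actual API; a real handler would use the API Gateway proxy event format):

```python
import json

def handler(event, context=None):
    """Minimal AWS Lambda-style handler for an HTTP event.

    Accepts a dict-shaped event (as API Gateway would deliver) and
    returns a proxy-integration-style response: statusCode, headers,
    and a JSON-encoded body.
    """
    # HTTP APIs use "rawPath"; REST APIs use "path". Support both.
    path = event.get("rawPath") or event.get("path", "/")
    if path == "/health":
        code, body = 200, {"status": "ok"}
    else:
        code, body = 404, {"error": f"no route for {path}"}
    return {
        "statusCode": code,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),
    }
```

Keeping the handler a pure function of the event makes it easy to unit-test locally before wiring it into a GitLab or CodePipeline deployment stage.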

Posted 1 month ago

Apply

9.0 - 13.0 years

0 Lacs

hyderabad, telangana

On-site

You will be leading data engineering activities on moderate to complex data and analytics-centric problems that have a broad impact and require in-depth analysis to achieve desired results. Your responsibilities will include assembling, enhancing, maintaining, and optimizing current data, enabling cost savings, and meeting project or enterprise maturity objectives. Your role will require an advanced working knowledge of SQL, Python, and PySpark. You should also have experience using tools like Git/Bitbucket, Jenkins/CodeBuild, and CodePipeline, as well as familiarity with platform monitoring and alerting tools. Collaboration with Subject Matter Experts (SMEs) is essential for designing and developing Foundry front-end applications with the ontology (data model) and the data pipelines supporting these applications. You will be responsible for implementing data transformations to derive new datasets or create Foundry Ontology Objects necessary for business applications. Additionally, you will implement operational applications using Foundry tools such as Workshop, Map, and/or Slate. Active participation in agile/scrum ceremonies (stand-ups, planning, retrospectives, etc.) is expected. Documentation is a crucial part of this role: you will create and maintain documentation describing the data catalog and data objects. As applications grow in usage and requirements change, you will be responsible for maintaining these applications. A continuous improvement mindset is encouraged, and you will be expected to engage in after-action reviews and share learnings. Strong communication skills, especially in explaining technical concepts to non-technical business leaders, will be essential for success in this role.
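The data transformations described here, deriving new datasets from raw ones, can be sketched in plain Python. The record shape below is an assumption for illustration; at scale the same derivation would typically be expressed as a PySpark groupBy/agg inside a Foundry pipeline:

```python
from collections import defaultdict

def derive_totals(rows):
    """Derive an aggregated dataset from raw event rows.

    Each input row is a dict like {"account": "a1", "amount": 10.0}.
    The output is one row per account with the summed amount, sorted
    by account so the derived dataset is deterministic.
    """
    totals = defaultdict(float)
    for row in rows:
        totals[row["account"]] += row["amount"]
    return [{"account": k, "total": v} for k, v in sorted(totals.items())]
```

The equivalent PySpark would be roughly `df.groupBy("account").agg(F.sum("amount").alias("total"))`; keeping a pure-Python version of the logic also makes unit testing in CI (Jenkins/CodeBuild) straightforward.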

Posted 1 month ago

Apply

2.0 - 6.0 years

0 Lacs

punjab

On-site

You are a highly skilled front-end React.js developer with a minimum of 2 years of experience, proficient in using Redux, Hooks, Webpack, etc. Additionally, you are highly proficient in Node.js/Express and REST API development with a minimum of 2 years of experience. You have experience working with containers and container management, specifically Docker. In addition, you have RDBMS experience and the ability to write efficient SQL queries, having worked with databases such as Oracle SQL, PostgreSQL, MySQL, or SQL Server. You also have experience with cloud-native design and microservices-based architecture patterns, as well as familiarity with NoSQL databases like MongoDB and DocumentDB. You are familiar with AWS services such as ECS, EKS, ECR, EC2, and S3, and can build custom Dockerfiles. You can implement CI/CD pipelines using tools like Bamboo, Git (Bitbucket), and CodePipeline. You are proficient in working in Linux environments and competent at scripting. Furthermore, you have experience with other programming languages like Python and Java. Communication is one of your strengths, with good written and verbal skills. If you have any questions, feel free to ask.
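Writing efficient, injection-safe SQL from application code comes down to parameterized queries. A small sketch using Python's stdlib sqlite3 (the schema is illustrative; the same placeholder discipline applies with PostgreSQL, MySQL, or SQL Server drivers, though the placeholder token varies by driver):

```python
import sqlite3

# In-memory database with a toy schema for demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)", [("asha",), ("ravi",)])

def find_user(conn, name):
    # Placeholders let the driver escape values safely; never build
    # SQL by interpolating user input with f-strings or + concatenation.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (name,))
    return cur.fetchone()
```

`find_user(conn, "asha")` returns the matching row as a tuple, and a malicious input like `"x' OR '1'='1"` is treated as a literal string rather than SQL.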

Posted 1 month ago

Apply

2.0 - 5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Inviting applications for the role of Principal Consultant - MLOps Engineer! In this role, you will lead the automation and orchestration of our machine learning infrastructure and CI/CD pipelines on public cloud (preferably AWS). This role is essential for enabling scalable, secure, and reproducible deployments of both classical AI/ML models and Generative AI solutions in production environments.

Responsibilities

- Develop and maintain CI/CD pipelines for AI/GenAI models on AWS using GitHub Actions and CodePipeline (not limited to these)
- Automate infrastructure provisioning using IaC (e.g., Terraform, Bicep)
- Work on either major cloud platform - Azure or AWS
- Package and deploy AI/GenAI models on SageMaker, Lambda, and API Gateway
- Write Python scripts for automation, deployment, and monitoring
- Engage in the design, development and maintenance of data pipelines for various AI use cases
- Contribute actively to key deliverables as part of an agile development team
- Set up model monitoring, logging, and alerting (e.g., drift, latency, failures)
- Ensure model governance, versioning, and traceability across environments
- Collaborate with others to source, analyse, test and deploy data processes
- Experience in a GenAI project

Qualifications we seek in you!

Minimum Qualifications
- Experience with MLOps practices
- Degree/qualification in Computer Science or a related field, or equivalent work experience
- Experience developing, testing, and deploying data pipelines
- Strong Python programming skills
- Hands-on experience deploying 2-3 AI/GenAI models in AWS
- Familiarity with LLM APIs (e.g., OpenAI, Bedrock) and vector databases
- Clear and effective communication skills to interact with team members, stakeholders and end users

Preferred Qualifications/Skills
- Experience with Docker-based deployments
- Exposure to model monitoring tools (Evidently, CloudWatch)
- Familiarity with RAG stacks or fine-tuning LLMs
- Understanding of GitOps practices
- Knowledge of governance and compliance policies, standards, and procedures
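The drift monitoring responsibility above is often implemented with a simple distribution-shift metric. A sketch of the Population Stability Index in plain Python (the 0.2 alert threshold is a common rule of thumb, not something specified by this role; tune it per model):

```python
import math

def population_stability_index(expected, actual, eps=1e-6):
    """Population Stability Index between two binned distributions.

    `expected` and `actual` are lists of bin proportions, each summing
    to ~1.0 (e.g. score-bucket frequencies at training time vs. in
    production). Larger PSI means the live distribution has drifted
    further from the training baseline.
    """
    psi = 0.0
    for e, a in zip(expected, actual):
        # Clamp to eps so empty bins don't produce log(0) or /0.
        e, a = max(e, eps), max(a, eps)
        psi += (a - e) * math.log(a / e)
    return psi

def drift_alert(expected, actual, threshold=0.2):
    """True when drift exceeds the (assumed) alerting threshold."""
    return population_stability_index(expected, actual) > threshold
```

In a pipeline this check would run on a schedule against recent inference logs, publishing the PSI value as a custom metric so CloudWatch (or Evidently) can alarm on it.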

Posted 2 months ago

Apply
Page 1 of 2

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies