
17543 Terraform Jobs - Page 32

JobPe aggregates results for easy access, but you apply directly on the original job portal.

8.0 years

4 - 8 Lacs

Hyderābād

On-site

Position Title: Lead Product Security Engineer
Reports To: Principal Security Architect

As our Lead Product Security Engineer you'll own threat modeling, secure-by-design guidance, and hands-on engineering for an industry-leading SaaS platform that powers automotive retail for millions of users. You'll work autonomously, partner closely with our Application Security (AppSec) scanning team, and influence product teams across the company, from design through incident response.

Working hours: Late-shift schedule with ~4 hours daily overlap with US Mountain Time (e.g., 1 p.m. – 10 p.m. IST). Some flexibility is expected; we value outcomes over clock-watching.

Key Responsibilities:

1. Leadership & Strategy
- Champion security culture and coach teams on secure product design
- Lead the development and implementation of CDK's product security strategy
- Design and implement technology and processes supporting CDK's product security strategy
- Partner effectively across security, technology, and business teams
- Provide technical security leadership to product teams
- Develop effective product security metrics and use them to drive improvements

2. Product Security Standards
- Guide the development and continuous improvement of product security standards and guidelines in alignment with risk and compliance requirements
- Drive accurate measurement and reporting of CDK's compliance with product security standards
- Drive adoption of product security standards across product, technology, and infrastructure teams

3. Product Security Architecture and Engineering
- Lead and evolve product threat-modeling practices (STRIDE, PASTA, attack trees, etc.)
- Guide development of secure product architecture practices across technology teams
- Develop repeatable engineering and automation patterns to enable "secure by default" design
- Solve challenging product and application security problems

4. Security Operations
- Work with the CDK Security Operations team to identify and enable detection for advanced application security problems
- Drive good development practices in orchestration and automation of macro response workflows
- Be a force multiplier in rare product security incident scenarios

5. Data-Driven Security
- Help wrangle and correlate security data from multiple tools; prototype metrics, dashboards, or ML models that reveal real risk trends
- Advise on data quality, cleansing, and correlation strategies

Required Qualifications:

Education: Bachelor's degree in Computer Science or Information Security, or equivalent experience

Experience:
- 8+ years overall in software/security engineering, including 5+ years focused on product or application security in complex SaaS or e-commerce environments
- Demonstrated ownership of threat modeling for modern cloud architectures (microservices, serverless, containers)
- Proven ability to drive security architecture and standards autonomously
- Hands-on experience with at least one major public cloud and IaC (Terraform, CloudFormation, ARM, etc.)
- Excellent written and verbal communication skills; able to translate deep technical issues into business-focused recommendations

Nice-to-have:
- Prior work with data-privacy or data-protection regulations (GDPR, CCPA, DPDP India, etc.)
- Data science/analytics chops: experience cleaning, correlating, or modeling large security datasets
- Strong software-engineering background, especially in Python (automation, data pipelines, small tools)
- Familiarity with secure SDLC and AppSec scanning pipelines (SAST, DAST, SCA, container security)
- Experience mentoring or leading distributed teams

Why join us?
- Impact at scale: your work secures a platform that processes billions of dollars in automotive transactions yearly
- Autonomy & ownership: we hire experts and trust them to deliver
- Global collaboration: work with top engineers across India and North America, shaping security practices company-wide
- Growth: influence adjacent initiatives in data security, metrics, and architecture alongside our Principal Security Architect

At CDK, we believe inclusion and diversity are essential in inspiring meaningful connections to our people, customers and communities. We are open, curious and encourage different views, so that everyone can be their best selves and make an impact. CDK is an Equal Opportunity Employer committed to creating an inclusive workforce where everyone is valued. Qualified applicants will receive consideration for employment without regard to race, color, ancestry, national origin, gender, sexual orientation, gender identity, gender expression, marital status, creed or religion, age, disability (including pregnancy), results of genetic testing, service in the military, veteran status or any other category protected by law. Applicants for employment in the US must be authorized to work in the US. CDK may offer employer visa sponsorship to applicants.
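The threat-modeling practices this listing calls out (STRIDE, attack trees) lend themselves to small automation. As a rough illustration of the STRIDE-per-element idea, the sketch below (hypothetical, not CDK's tooling) maps data-flow-diagram element types to the threat categories conventionally checked for them:

```python
# Minimal STRIDE-per-element checklist generator -- an illustrative sketch,
# not any company's actual tooling. Element names below are invented.

STRIDE = {
    "S": "Spoofing", "T": "Tampering", "R": "Repudiation",
    "I": "Information disclosure", "D": "Denial of service",
    "E": "Elevation of privilege",
}

# Classic STRIDE-per-element applicability: which categories apply to
# each data-flow-diagram element type.
APPLICABLE = {
    "external_entity": "SR",
    "process": "STRIDE",
    "data_store": "TRID",
    "data_flow": "TID",
}

def threats_for(element_type: str) -> list:
    """Return the applicable threat category names for a DFD element type."""
    return [STRIDE[c] for c in APPLICABLE[element_type]]

def model(elements: dict) -> dict:
    """Build a per-element threat checklist from {name: element_type}."""
    return {name: threats_for(etype) for name, etype in elements.items()}

checklist = model({
    "browser": "external_entity",
    "api_gateway": "process",
    "orders_db": "data_store",
})
```

A checklist like this is only a starting point; real threat modeling layers system-specific reasoning on top of the per-element defaults.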

Posted 6 days ago


0 years

7 - 9 Lacs

Hyderābād

On-site

Job description

Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Senior Software Engineer.

In this role, you will:
- Apply experience of the software delivery life cycle, along with a passion for testing and delivery of quality products and automation
- Bring exposure and awareness to other disciplines (Analysis, Testing, Operational Support) within Network Operations
- Work with the Agile methodology and associated tool sets: Jira, Confluence, ServiceNow, etc.
- Demonstrate conceptual knowledge of networking technologies and protocols
- Work as part of a team to ensure team success in the form of value-add deliverables
- Stay customer and service focused, representing Connectivity (Telecoms) globally

Requirements

To be successful in this role, you should meet the following requirements:
- Awareness of networking (physical and virtual) concepts such as routing & switching, firewalls, subnetting, and VPNs
- Strong experience working with Web/API technologies
- Strong experience in Python and Python testing best practices; BASH scripting experience is a plus
- Strong experience with automation/testing tools: Ansible, Terraform, GitHub (version control), Postman, Jenkins
- Awareness of cloud technologies, including AWS, GCP and Azure
- Working experience with Linux OS
- Understanding of networking concepts such as firewalls, routing & switching, NAT, ports, subnetting, and VPNs

You'll achieve more when you join HSBC. www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued and respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by HSBC Software Development India.
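The networking requirements above mention subnetting alongside strong Python skills; Python's standard-library ipaddress module makes the subnet arithmetic concrete. A small illustrative sketch (addresses invented):

```python
# Subnetting walkthrough with the standard-library ipaddress module --
# an illustrative sketch of the networking concepts the role calls for.
import ipaddress

net = ipaddress.ip_network("10.20.0.0/22")

# Split the /22 into four /24s, e.g. one per environment.
subnets = list(net.subnets(new_prefix=24))

# Membership and capacity checks are one-liners.
host = ipaddress.ip_address("10.20.2.17")
containing = next(s for s in subnets if host in s)
usable_hosts = containing.num_addresses - 2  # minus network and broadcast
```

Here `subnets` holds 10.20.0.0/24 through 10.20.3.0/24, the host lands in 10.20.2.0/24, and a /24 leaves 254 usable addresses.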

Posted 6 days ago


10.0 years

2 - 5 Lacs

Hyderābād

On-site

Lead Cloud Network Engineer, Assurant-GCC, India

Reporting to the VP of Cloud Planning, Infrastructure and Cloud Services (ICS), the Lead Cloud Network Engineer is responsible for leading the design and build of public cloud networking connectivity and automation. The scope of the role requires broad and in-depth knowledge of multiple networking technologies, including SDN in public cloud, security, and automation. This is one of the key roles for the successful delivery of the Enterprise Cloud as a Service. The role holder will implement the overall design and build of public cloud connectivity from Assurant DCs, the Internet, and customer locations for the Enterprise Cloud Platform delivery, and also owns the implementation, design, and delivery of all network services within the public cloud and the automation of these services, working with automation and principal network engineers. This position will be based at our Hyderabad, India location.

What will be my duties and responsibilities in this job?

Design, engineer, and build patterns and a catalog of enterprise and cloud networking (85%):
- LAN, WAN, DNS, EVPN, VXLAN, and BGP-based DC network designs with public cloud connectivity from Assurant and customer DCs
- Owns and maintains network engineering design and the catalog of the multi-cloud network and DC network with fully automated "underlay" and "overlay" designs
- Implements the designs and provides network as a product and service, including product support
- Works with the DevSecOps team to define the requirements for catalog automation for network infrastructure and policy as code, to automate the full lifecycle of the design patterns
- Works with Information Security engineers to ensure public cloud networking patterns are aligned to enterprise security requirements and policies
- Owns the detailed design patterns for tools and automation of IaaS and PaaS offerings; works with cross-functional teams of ICS engineers
- Ensures all cloud network infrastructure engineering patterns are designed with the appropriate level of proactive monitoring, resilience, and dynamic failover to meet internal and client SLA requirements
- Works with the cloud network operations team to ensure the smooth transition of new network patterns, automation, and monitoring patterns into production
- Firewall design and implementation

Stakeholder engagement and public cloud automation roadmap adoption (15%):
- Actively performs research to remain up to date on new public cloud network capabilities and how these align to Assurant Enterprise Cloud strategy and opportunities
- Works with enterprise architecture, principal cloud and security engineers, and business unit IT teams to understand future needs around new public cloud automation and integration
- Ensures new engineered network capabilities meet all cost, reliability, and scale requirements, as well as any physical hosting restrictions

Responsibility for work of others:
Works with a team of principal engineers and lead engineers from ICS, enterprise, and application architects. Cloud network engineering scope will cross all components of network services, integrating with security and other infrastructure services, i.e., compute, storage, databases, and middleware.

Financial responsibility:
Develops a catalog of cloud network and integration services that can be adopted across the enterprise, keeping cost effectiveness as a consideration. Enables affordable public cloud adoption and maintains availability through secure network design along with the automation of network services.

What are the requirements needed for this position?
- Overall work experience: 10+ years
- Bachelor's degree in engineering or computer science
- 7+ years of experience in network infrastructure engineering in senior roles, with experience in engineering, architecture, security, and automation, especially in large public and private cloud environments
- 2+ years of experience in design and build of cloud-related networking solutions, i.e., direct connectivity to cloud providers, firewalls, segmentation, and Zero Trust implementations
- Enterprise experience in design and delivery of network services as Infrastructure as Code
- Proven experience in designing and building micro-segmentation for security and compliance positioning
- Proven experience in designing and building data center network solutions for large-scale modern data centers and public and private clouds
- Demonstrated thought and technical leadership in the above areas
- Proven experience of working in newly established organizations following agile practices
- Experience in a technical leadership role delivering multi-$M infrastructure investment cases, and delivering on the value proposition of "better, faster and cheaper" through automation across the entire lifecycle
- Experience of engineering at high scale in at least one of the three major public clouds
- Experience of developing infrastructure as code and working in DevSecOps practices
- Experience in working with cross-functional enterprise Centers of Excellence, which include international resources

What other requirements would be helpful to have?
- Experience with and understanding of the deliverables and value proposition of public cloud engineering
- Experience with Ansible, Python, Terraform
- Experience delivering business-centric, integrated technology solutions at an international scale
- Experience in working with a matrix organization
- Ability to organize and lead diverse groups toward common goals through influence
- Tenure in operational roles across compute, storage, middleware, and network technologies
- Experience in DR and BCP solutions and engineering
- Strong understanding of corporate strategy (and how technology supports and enables it)
- Deep understanding of technology trends and a broad knowledge of technology products and vendors
- Experience influencing significant, positive change at different technical levels of an organization
- Experience establishing long-term, collaborative relationships at management and technical levels of an organization
- Experience in developing cost-benefit analysis around technology decisions
- Enterprise mindset: ability to identify opportunities that maximize benefit to the enterprise as a whole
- Systems and conceptual thinking: ability to capture the key elements of a system in a simple abstraction that empowers good decisions
- Critical and analytical thinking: ability to analyze multiple long- and short-term competing factors to arrive at high-quality recommendations
- Communication: ability to effectively communicate (written, verbal, and presentation skills) at multiple levels within the business and technology organizations
- Intellectual curiosity: ability to learn quickly and with enthusiasm

Preferred experience and knowledge:
- Master's degree in engineering or computer science with 5+ years of experience, or Bachelor's degree with 10+ years of working experience
- Experience in multiple Fortune 500 companies, in different industries
- Experience of working with outsource partners supporting business-critical infrastructure services
- Broad experience of multiple industries gained from working in engineering roles at different companies

Any posted application deadline that is blank on a United States role is a pipeline requisition, and we'll continue to collect applications on an ongoing basis.
Any posted pay range considers a wide range of compensation factors, including candidate background, experience, and work location, while also allowing for salary growth within the position.

Helping People Thrive in a Connected World
Connect with us. Bring us your best work and your brightest ideas. And we'll bring you a place where you can thrive. Learn more at jobs.assurant.com. For U.S. benefit information, visit myassurantbenefits.com. For benefit information outside the U.S., please speak with your recruiter.

What's the culture like at Assurant?
Our unique culture is a big reason why talented people choose Assurant. Named a Best/Great Place to Work in 13 countries and awarded the Fortune America's Most Innovative Companies recognition in 2023, we bring together top talent around the world. Although we have a wide variety of skills and experiences, we share common characteristics that are uniquely Assurant. A passion for service. An ability to innovate in practical ways. And a willingness to take chances. We call our culture The Assurant Way.

Company Overview
Assurant is a leading global business services company that supports, protects, and connects major consumer purchases. A Fortune 500 company with a presence in 21 countries, Assurant supports the advancement of the connected world by partnering with the world's leading brands to develop innovative solutions and deliver an enhanced customer experience through mobile device solutions, extended service contracts, vehicle protection services, renters insurance, lender-placed insurance products, and other specialty products.

Equal Opportunity Statement
Assurant is an Equal Employment Opportunity employer and does not use or consider race, color, religion, sex, national origin, age, disability, veteran status, sexual orientation, gender identity, or any other characteristic protected by federal, state, or local law in employment decisions.
Job Scam Alert Please be aware that during Assurant's application process, we will never ask for personal information such as your Social Security number, bank account details, or passwords. Learn more about what to look out for and how to report a scam here.
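Micro-segmentation and policy-as-code, both called out in this listing, can be reduced to a toy default-deny allow-list check. A hypothetical sketch (rule format and segment names invented, not Assurant's actual tooling):

```python
# Policy-as-code sketch for micro-segmentation -- hypothetical rule format.
# Each rule allows traffic from one segment to another on a given port;
# anything not explicitly allowed is denied by default.

ALLOW = {
    ("web", "app", 8443),
    ("app", "db", 5432),
}

def evaluate(flows):
    """Split observed (src_segment, dst_segment, port) flows into
    allowed and denied lists, preserving observation order."""
    allowed = [f for f in flows if f in ALLOW]
    denied = [f for f in flows if f not in ALLOW]
    return allowed, denied

observed = [
    ("web", "app", 8443),   # permitted tier-to-tier path
    ("web", "db", 5432),    # violation: web must not reach the DB directly
]
allowed, denied = evaluate(observed)
```

In practice such checks run in CI against rendered firewall or security-group definitions, which is what turns the segmentation policy into "Infrastructure as Code".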

Posted 6 days ago


8.0 years

0 Lacs

India

On-site

At Medtronic you can begin a life-long career of exploration and innovation, while helping champion healthcare access and equity for all. You'll lead with purpose, breaking down barriers to innovation in a more connected, compassionate world.

A Day in the Life
ET Nav DevOps for OS Maintenance Release

About Medtronic
Together, we can change healthcare worldwide. At Medtronic, we push the limits of what technology, therapies and services can do to help alleviate pain, restore health and extend life. We challenge ourselves and each other to make tomorrow better than yesterday. It is what makes this an exciting and rewarding place to be. We want to accelerate and advance our ability to create meaningful innovations, but we will only succeed with the right people on our team. Let's work together to address universal healthcare needs and improve patients' lives. Help us shape the future. Whatever your specialty or ambitions, you can make a difference at Medtronic, both in the lives of others and your career. Join us in our commitment to take healthcare Further, Together.

Job Description

Required Knowledge and Experience:
- B.E./B.Tech. in CS, IT, or EC Engineering (or M.E./M.Tech.)
- 8-11 years of experience in managing DevOps
- 3+ years of experience in customer-facing products and solution architecture
- Proven scripting skills (e.g., JavaScript, Python, Bash)
- Experience implementing and utilizing cloud monitoring and logging tools (e.g., CloudWatch)
- Working knowledge of deployment automation solutions / Infrastructure as Code (e.g., Terraform, CloudFormation, Puppet, Chef, Ansible)
- Hands-on experience designing and developing AWS services
- Experience building and maintaining large-scale, cloud- and container-based platforms (in IaaS and PaaS) using Docker, Kubernetes, Elastic Container Service, etc.
- Knowledge of DevOps CI/CD tooling (e.g., GitHub, GitLab, CodeDeploy, CircleCI, Jenkins/Travis, etc.)
- Familiarity with security automation and Secure DevOps (e.g., SAST)
- Ability to advocate and implement best practices and standard solutions
- Ability to manage your own learning and contribute to functional knowledge building
- Ability to work both independently and to help other team members

Preferred Qualifications:
- Experience in full-stack development (e.g., building modern JavaScript applications, writing and utilizing RESTful APIs, experience with database systems) is a plus
- Previous Medical Device domain experience
- Experience in Digital Health application development
- Experience implementing applications and data services built on best practices for security and compliance (HIPAA, SOC 2, etc.)
- Familiarity with healthcare-specific technologies and data formats such as HL7 & FHIR

Physical Job Requirements
The above statements are intended to describe the general nature and level of work being performed by employees assigned to this position, but they are not an exhaustive list of all the required responsibilities and skills of this position.

Benefits & Compensation
Medtronic offers a competitive salary and flexible benefits package. A commitment to our employees lives at the core of our values. We recognize their contributions. They share in the success they help to create. We offer a wide range of benefits, resources, and competitive compensation plans designed to support you at every career and life stage.

About Medtronic
We lead global healthcare technology and boldly attack the most challenging health problems facing humanity by searching out and finding solutions. Our Mission, to alleviate pain, restore health, and extend life, unites a global team of 95,000+ passionate people. We are engineers at heart, putting ambitious ideas to work to generate real solutions for real people. From the R&D lab, to the factory floor, to the conference room, every one of us experiments, creates, builds, improves and solves. We have the talent, diverse perspectives, and guts to engineer the extraordinary.
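Deployment automation of the kind this role describes often comes down to small CI/CD glue scripts. A hedged sketch of a post-deploy health gate with retries (the probe function is injected so the gate is testable without a live endpoint; all names are illustrative):

```python
# Post-deployment health gate -- a generic sketch of CI/CD glue scripting,
# not any vendor's tooling. A pipeline would pass a probe that hits the
# service's real health endpoint; here we inject a fake for illustration.
import time

def health_gate(probe, attempts=5, delay=0.0):
    """Call probe() up to `attempts` times, sleeping `delay` seconds
    between tries; return True on the first success, else False."""
    for _ in range(attempts):
        if probe():
            return True
        time.sleep(delay)
    return False

# Simulated service that becomes healthy on its third probe.
calls = {"n": 0}
def flaky_probe():
    calls["n"] += 1
    return calls["n"] >= 3

ok = health_gate(flaky_probe, attempts=5, delay=0.0)
```

A gate like this typically sits between the deploy step and traffic cutover, so a failed health check can trigger an automatic rollback.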

Posted 6 days ago


12.0 years

0 Lacs

India

On-site

At Medtronic you can begin a life-long career of exploration and innovation, while helping champion healthcare access and equity for all. You'll lead with purpose, breaking down barriers to innovation in a more connected, compassionate world.

A Day in the Life

About Medtronic
Together, we can change healthcare worldwide. At Medtronic, we push the limits of what technology, therapies and services can do to help alleviate pain, restore health and extend life. We challenge ourselves and each other to make tomorrow better than yesterday. It is what makes this an exciting and rewarding place to be. We want to accelerate and advance our ability to create meaningful innovations, but we will only succeed with the right people on our team. Let's work together to address universal healthcare needs and improve patients' lives. Help us shape the future. Whatever your specialty or ambitions, you can make a difference at Medtronic, both in the lives of others and your career. Join us in our commitment to take healthcare Further, Together.

Job Description

Responsibilities:
- Work closely with Tech Leads and developers of various teams to develop automation pipelines and CI/CD
- Act as a Tech Lead and mentor for the DevOps engineering team
- Manage and optimize the infrastructure and operations to support the security and reliability of APIs
- Configure the infrastructure using IaaS/PaaS products and own developed tools
- Develop self-serve tools and artifacts (e.g., containerized dev environments) for development teams to support the SDLC
- Strong oral and written communication skills
- Experience with configuration management tools
- Proficiency working in a team environment
- Demonstrated skills in writing engineering documents (specifications, project plans, etc.)

Required Knowledge and Experience:
- B.E./B.Tech. in CS, IT, or EC Engineering
- 12+ years of experience, including 5+ years of experience in DevOps and (high/low-level) customer-facing products
- Proven experience in solution architecture and in designing and managing a DevOps infrastructure setup in AWS and Azure DevOps
- Proven scripting skills (e.g., JavaScript, Python, Bash)
- Experience implementing and utilizing cloud monitoring and logging tools (e.g., CloudWatch)
- Working knowledge of deployment automation solutions / Infrastructure as Code (e.g., Terraform, CloudFormation, Puppet, Chef, Ansible)
- Hands-on experience designing and developing AWS services
- Experience building and maintaining large-scale, cloud- and container-based platforms (in IaaS and PaaS) using Docker, Kubernetes, Elastic Container Service, etc.
- Knowledge of DevOps CI/CD tooling (e.g., GitHub, GitLab, CodeDeploy, CircleCI, Jenkins/Travis, etc.)
- Familiarity with security automation and Secure DevOps (e.g., SAST, static application security testing)
- Experience as a DevOps engineer or an SRE on a cross-functional Agile team is preferred
- Ability to advocate and implement best practices and standard solutions
- Ability to manage your own learning and contribute to functional knowledge building
- Ability to work both independently and to help other team members

Principal Working Relationship
Reports to the Engineering Manager. The Principal DevOps Engineer frequently interacts with the Product Owner, Tech Lead, other developers, V&V engineers, and internal partners and stakeholders concerning estimations, design, implementation, or requirement clarifications, and works closely with global sites.

Preferred Qualifications:
- Experience in full-stack development (e.g., building modern JavaScript applications, writing and utilizing RESTful APIs, experience with database systems) is a plus
- Previous Medical Device domain experience
- Experience in Digital Health application development
- Experience implementing applications and data services built on best practices for security and compliance (HIPAA, SOC 2, etc.)
- Familiarity with healthcare-specific technologies and data formats such as HL7 & FHIR

Physical Job Requirements
The above statements are intended to describe the general nature and level of work being performed by employees assigned to this position, but they are not an exhaustive list of all the required responsibilities and skills of this position.

Benefits & Compensation
Medtronic offers a competitive salary and flexible benefits package. A commitment to our employees lives at the core of our values. We recognize their contributions. They share in the success they help to create. We offer a wide range of benefits, resources, and competitive compensation plans designed to support you at every career and life stage.

About Medtronic
We lead global healthcare technology and boldly attack the most challenging health problems facing humanity by searching out and finding solutions. Our Mission, to alleviate pain, restore health, and extend life, unites a global team of 95,000+ passionate people. We are engineers at heart, putting ambitious ideas to work to generate real solutions for real people. From the R&D lab, to the factory floor, to the conference room, every one of us experiments, creates, builds, improves and solves. We have the talent, diverse perspectives, and guts to engineer the extraordinary.

Posted 6 days ago


7.0 years

0 Lacs

Hyderābād

On-site

EPAM is a leading global provider of digital platform engineering and development services. We are committed to having a positive impact on our customers, our employees, and our communities. We embrace a dynamic and inclusive culture. Here you will collaborate with multi-national teams, contribute to a myriad of innovative projects that deliver the most creative and cutting-edge solutions, and have an opportunity to continuously learn and grow. No matter where you are located, you will join a dedicated, creative, and diverse community that will help you discover your fullest potential. We are seeking an experienced Lead Platform Engineer to join our Automation Engineering team. The ideal candidate will excel in cloud infrastructure automation, generative AI, and machine learning, with a strong foundation in DevOps practices and modern scripting tools. This role involves designing cutting-edge AI-driven solutions for AIOps while innovating cloud automation processes to optimize operational efficiency. 
Responsibilities
- Design and develop automated workflows for cloud infrastructure provisioning using IaC tools like Terraform
- Build frameworks to support deployment, configuration, and management across diverse cloud environments
- Develop and manage service catalog components, ensuring integration with platforms like Backstage
- Implement GenAI models to enhance service catalog functionality and code quality across automation pipelines
- Design and implement CI/CD pipelines and maintain CI pipeline code for cloud automation use cases
- Write scripts to support cloud deployment orchestration using Python, Bash, or other scripting languages
- Design and deploy generative AI models for AIOps applications such as anomaly detection and predictive maintenance
- Work with frameworks like LangChain or cloud platforms such as Bedrock, Vertex AI, and Azure AI to deploy RAG workflows
- Build and optimize vector databases and document sources using tools like OpenSearch, Amazon Kendra, or equivalent solutions
- Prepare and label data for generative AI models, ensuring scalability and integrity
- Create agentic workflows using frameworks like LangGraph or cloud GenAI platforms such as Bedrock Agents
- Integrate generative AI models with operational systems and AIOps platforms for enhanced automation
- Evaluate AI model performance and ensure continuous optimization over time
- Develop and maintain MLOps pipelines to monitor and mitigate model decay
- Collaborate with cross-functional teams to drive innovation and improve cloud automation processes
- Research and recommend new tools and best practices to enhance operational efficiency

Requirements
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- 7+ years of experience in cloud infrastructure automation, scripting, and DevOps
- Strong proficiency in IaC tools like Terraform, CloudFormation, or similar
- Expertise in Python, cloud AI frameworks such as LangChain, and generative AI workflows
- Demonstrated background in developing and deploying AI models such as RAG or transformers
- Proficiency in building vector databases and document sources using solutions like OpenSearch or Amazon Kendra
- Competency in preparing and labeling datasets for AI models and optimizing data inputs
- Familiarity with cloud platforms including AWS, Google Cloud, or Azure
- Capability to implement MLOps pipelines and monitor AI system performance

Nice to have
- Knowledge of agentic architectures such as ReAct and flow engineering techniques
- Background in using Bedrock Agents or LangGraph for workflow creation
- Understanding of integrating generative AI into legacy or complex operational systems

We offer
- Opportunity to work on technical challenges that may have impact across geographies
- Vast opportunities for self-development: online university, global knowledge sharing, learning opportunities through external certifications
- Opportunity to share your ideas on international platforms
- Sponsored Tech Talks & Hackathons
- Unlimited access to LinkedIn Learning solutions
- Possibility to relocate to any EPAM office for short- and long-term projects
- Focused individual development
- Benefit package: health benefits, retirement benefits, paid time off, flexible benefits
- Forums to explore passions beyond work (CSR, photography, painting, sports, etc.)
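RAG workflows and vector databases feature heavily in this listing; at their core sits a similarity search over embeddings. A minimal, self-contained sketch using cosine similarity over toy vectors (documents and embeddings are invented, standing in for a real store like OpenSearch or Kendra):

```python
# Minimal RAG-style retrieval step: cosine similarity over toy embedding
# vectors. A real system would call an embedding model and a vector store;
# everything here is invented for illustration.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "vector store": document text -> embedding.
store = {
    "terraform provisions cloud infrastructure": [0.9, 0.1, 0.0],
    "kubernetes schedules containers":           [0.1, 0.9, 0.0],
    "pytest runs unit tests":                    [0.0, 0.1, 0.9],
}

def retrieve(query_vec, k=1):
    """Return the k documents most similar to the query embedding."""
    ranked = sorted(store, key=lambda d: cosine(store[d], query_vec),
                    reverse=True)
    return ranked[:k]

top = retrieve([0.85, 0.15, 0.05], k=1)
```

The retrieved passages would then be packed into the LLM prompt; that prompt-assembly step is what frameworks like LangChain orchestrate around this core.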

Posted 6 days ago


3.0 - 8.0 years

0 Lacs

Hyderābād

On-site

EPAM is a leading global provider of digital platform engineering and development services. We are committed to having a positive impact on our customers, our employees, and our communities. We embrace a dynamic and inclusive culture. Here you will collaborate with multi-national teams, contribute to a myriad of innovative projects that deliver the most creative and cutting-edge solutions, and have an opportunity to continuously learn and grow. No matter where you are located, you will join a dedicated, creative, and diverse community that will help you discover your fullest potential.

We are seeking a skilled Platform Engineer to join our Automation Engineering team, bringing expertise in cloud infrastructure automation, DevOps, scripting, and advanced AI/ML practices. The role focuses on integrating generative AI into automation workflows, enhancing operational efficiency, and supporting cloud-first initiatives.

Responsibilities
Design cloud automation workflows using Infrastructure-as-Code tools such as Terraform or CloudFormation
Build scalable frameworks to manage infrastructure provisioning, deployment, and configuration across multiple cloud platforms
Create service catalog components compatible with automation platforms like Backstage
Integrate generative AI models to improve service catalog functionality, including automated code generation and validation
Architect CI/CD pipelines for automated build, test, and deployment processes
Maintain deployment automation scripts using technologies such as Python or Bash
Implement generative AI models (e.g., RAG, agent-based workflows) for AIOps use cases such as anomaly detection and root cause analysis
Employ AI/ML tools such as LangChain, Bedrock, Vertex AI, or Azure AI for advanced generative AI solutions
Develop vector databases and document sources using services like Amazon Kendra, OpenSearch, or custom solutions
Engineer data pipelines to stream real-time operational insights that support AI-driven automation
Build MLOps pipelines to deploy and monitor generative AI models, ensuring optimal performance and avoiding model decay
Select appropriate LLMs for specific AIOps use cases and integrate them effectively into workflows
Collaborate with cross-functional teams to design and refine automation and AI-driven processes
Research emerging tools and technologies to enhance operational efficiency and scalability

Requirements
Bachelor's or Master's degree in Computer Science, Engineering, or a related field
3-8 years of experience in cloud infrastructure automation, DevOps, and scripting
Proficiency with Infrastructure-as-Code tools such as Terraform or CloudFormation
Expertise in Python and generative AI patterns such as RAG and agent-based workflows
Knowledge of cloud-based AI services, including Bedrock, Vertex AI, or Azure AI
Familiarity with vector search services like Amazon Kendra, OpenSearch, or custom database solutions
Competency in data engineering tasks such as feature engineering, labeling, and real-time data streaming
Proven track record in creating and maintaining MLOps pipelines for AI/ML models in production environments

Nice to have
Background in flow-engineering tools such as LangGraph or platform-specific workflow orchestration tools
Understanding of comprehensive AIOps processes to refine cloud-based automation solutions

We offer
Opportunity to work on technical challenges that may have impact across geographies
Vast opportunities for self-development: online university, global knowledge sharing, and learning through external certifications
Opportunity to share your ideas on international platforms
Sponsored Tech Talks & Hackathons
Unlimited access to LinkedIn Learning solutions
Possibility to relocate to any EPAM office for short- and long-term projects
Focused individual development
Benefit package: health benefits, retirement benefits, paid time off, flexible benefits
Forums to explore passions beyond work (CSR, photography, painting, sports, etc.)
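The AIOps bullet above pairs generative AI with anomaly detection. As a purely illustrative sketch (nothing here is an EPAM tool or a specific cloud API), the statistical first pass of such a detector can be as simple as a z-score check over a metric series, with flagged points then handed to an LLM or RAG workflow for root cause analysis:

```python
from statistics import mean, stdev

def detect_anomalies(samples, threshold=2.0):
    """Return indices of samples whose z-score exceeds the threshold.

    A deliberately simple stand-in for the statistical layer of an
    AIOps anomaly detector; the threshold and metric are hypothetical.
    """
    mu, sigma = mean(samples), stdev(samples)
    if sigma == 0:
        return []
    return [i for i, x in enumerate(samples) if abs(x - mu) / sigma > threshold]

latency_ms = [101, 99, 102, 100, 98, 103, 250, 101]
print(detect_anomalies(latency_ms))  # [6] - the 250 ms spike
```

Real pipelines would stream metrics from CloudWatch or a similar source rather than a list, but the shape of the check is the same.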

Posted 6 days ago

Apply

4.0 - 8.0 years

0 Lacs

Hyderābād

On-site

Are you seeking an environment where you can drive innovation? Does the prospect of working with top engineering talent get you charged up? Apple is a place where extraordinary people gather to do their best work. Together we create products and experiences people once couldn’t have imagined - and now can’t imagine living without. Apple’s IS&T manages key infrastructure at Apple - how online orders are placed, the customer experience with technology in our retail stores, how much network capacity we need around the world, and much more. The SAP Global Systems team within IS&T runs the Operations and Financial transactional platform that powers all of Apple's functions like Sales, Manufacturing, Distribution, and Financials. Think platform-as-product! Our team delivers great developer experiences to our Program, Project, and Development teams through a curated set of tools, capabilities, and processes offered through our Internal Developer Platform. We automate infrastructure operations, support complex service abstractions, build flexible workflows, and curate a frictionless ecosystem that enables end-to-end collaboration to help drive productivity and engineering velocity. This is a tremendous opportunity for someone who has the skill to own initiatives and a passion to work on a highly coordinated global solution platform! Join us in crafting solutions that do not yet exist!

Description
As a Cloud Platform Engineer at Apple, you will be a key contributor to the design, development, and operation of our next-generation cloud platform. You will work alongside a team of dedicated engineers to build a highly scalable, reliable, and secure platform that empowers Apple's product teams to deliver extraordinary experiences. You will be responsible for driving innovation, adopting new technologies, and ensuring the platform meets the evolving needs of Apple's business.

RESPONSIBILITIES:
- Architect, design, and implement robust cloud-native solutions.
- Implement API-led and event-driven solutions across SAP and non-SAP cloud platforms.
- Design and implement standard processes for security concepts that are critical for cloud-native applications.
- Apply hands-on understanding of containerization and orchestration concepts to design and build scalable, resilient, modern event-driven and microservices-based systems.
- Collaborate with multi-functional teams to design and implement secure, robust application architectures for performance, scalability, and cost-efficiency.
- Use monitoring, logging, and alerting solutions to continuously assess and improve system reliability and performance.
- Drive automation to streamline manual processes and enhance productivity across the organization.
- Stay up to date with emerging technologies, industry trends, and standard processes in DevOps and cloud computing.

Minimum Qualifications
4 - 8 years of experience in the relevant field.
Bachelor's degree or equivalent experience in Computer Science, Engineering, or another relevant major.
Knowledge of working with public cloud providers such as AWS or GCP.
Understanding of cloud networking concepts such as VPCs/subnets, firewalls, and load balancers.
Experience with CI/CD and configuration management systems.
Familiarity with Kubernetes or Kyma Runtime.
Understanding of cloud security principles.

Preferred Qualifications
Strong expertise in cloud-native applications.
A strong sense of ownership.
Good critical thinking and interpersonal skills to work successfully across diverse business, technical, and multi-functional teams.
Understanding of SAP BTP.
Ability to understand complex landscape architectures.
Working knowledge of on-prem and cloud-based hybrid architectures and infrastructure concepts such as Regions, Availability Zones, VPCs/subnets, load balancers, and API gateways.
Strong understanding of common authentication schemes, certificates, secrets, and protocols.
Experience with IaC tools such as Terraform or CloudFormation.
Scripting and/or coding skills for automation, triaging, and troubleshooting, in languages such as Python, Go, or Java.
Certifications such as AWS Solutions Architect, DevOps Professional, GCP Professional Architect, or SAP BTP Certification are a plus.
Submit CV
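The responsibilities above emphasize resilient event-driven and microservices-based systems. One building block of that resilience is retrying transient failures with exponential backoff; a minimal, generic sketch follows (the callable and error are hypothetical, not part of any Apple or SAP BTP API):

```python
import time

def with_backoff(call, retries=3, base_delay=0.01):
    """Retry a flaky zero-argument callable, doubling the delay
    between attempts; re-raise once retries are exhausted."""
    for attempt in range(retries + 1):
        try:
            return call()
        except Exception:
            if attempt == retries:
                raise
            time.sleep(base_delay * (2 ** attempt))

attempts = {"n": 0}
def flaky():
    # fails twice with a transient error, then succeeds
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

print(with_backoff(flaky))  # "ok" after two transient failures
```

Production services would add jitter and cap the total delay, but the control flow is the core idea.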

Posted 6 days ago

Apply

5.0 years

3 - 7 Lacs

Hyderābād

On-site

Company Profile :
LSEG (London Stock Exchange Group) is a world-leading financial markets infrastructure and data business. We are dedicated, open-access partners with a commitment to excellence in delivering services across Data & Analytics, Capital Markets, and Post Trade. Backed by three hundred years of experience, innovative technologies, and a team of over 23,000 people in 70 countries, our purpose is driving financial stability, empowering economies, and enabling customers to create sustainable growth. Working in partnership with Tata Consultancy Services (TCS), we are excited to expand our tech centres of excellence in India by building a new global centre, right here in the heart of Hyderabad.

Role Profile :
As a Sr AWS Developer, you will participate in all aspects of the software development lifecycle, including estimating, technical design, implementation, documentation, testing, deployment, and support of applications developed for our clients. Working in a team environment, you will collaborate with solution architects and developers on the interpretation/translation of wireframes and creative designs into functional requirements, and subsequently into technical designs.

Key Responsibilities :
Design, build, and maintain robust, scalable, and efficient ETL pipelines using Python and Spark (5+ years' experience).
Develop workflows leveraging AWS services such as Glue, Glue Data Catalog, Lambda, S3, EMR Serverless, and API Gateway.
Implement data quality frameworks and governance practices to ensure reliable data processing.
Optimize existing workflows and drive transformation of data from multiple sources.
Monitor system performance and ensure data reliability through proactive optimizations.
Contribute to technical discussions and deliver high-quality solutions.

Essential/ Must-Have Skills :
Hands-on experience with AWS Glue, Glue Data Catalog, Lambda, S3, EMR, EMR Serverless, API Gateway, SNS, SQS, CloudWatch, CloudFormation, and CloudFront.
Strong understanding of data quality frameworks, governance practices, and scalable architectures.
Practical knowledge of integrating different data sources and transforming them.
Agile methodology experience, including sprint planning and retrospectives.
Excellent interpersonal skills for articulating technical solutions to diverse team members.
Experience in additional programming languages such as Java or Node.js.
Experience with Java, Terraform, Ansible, Python, PySpark, etc.
Knowledge of tools such as Kafka, Datadog, GitLab, Jenkins, Docker, and Kubernetes.

Desirable Skills :
AWS Certified Developer or AWS Certified Solutions Architect.
Experience with serverless computing paradigms.

LSEG is a leading global financial markets infrastructure and data provider. Our purpose is driving financial stability, empowering economies and enabling customers to create sustainable growth. Our purpose is the foundation on which our culture is built. Our values of Integrity, Partnership, Excellence and Change underpin our purpose and set the standard for everything we do, every day. They go to the heart of who we are and guide our decision making and everyday actions. Working with us means that you will be part of a dynamic organisation of 25,000 people across 65 countries. However, we will value your individuality and enable you to bring your true self to work so you can help enrich our diverse workforce. You will be part of a collaborative and creative culture where we encourage new ideas and are committed to sustainability across our global business. You will experience the critical role we have in helping to re-engineer the financial ecosystem to support and drive sustainable economic growth. Together, we are aiming to achieve this growth by accelerating the just transition to net zero, enabling growth of the green economy and creating inclusive economic opportunity. LSEG offers a range of tailored benefits and support, including healthcare, retirement planning, paid volunteering days and wellbeing initiatives. We are proud to be an equal opportunities employer. This means that we do not discriminate on the basis of anyone’s race, religion, colour, national origin, gender, sexual orientation, gender identity, gender expression, age, marital status, veteran status, pregnancy or disability, or any other basis protected under applicable law. In conformance with applicable law, we can reasonably accommodate applicants' and employees' religious practices and beliefs, as well as mental health or physical disability needs. Please take a moment to read this privacy notice carefully, as it describes what personal information London Stock Exchange Group (LSEG) (we) may hold about you, what it’s used for, how it’s obtained, your rights, and how to contact us as a data subject. If you are submitting as a Recruitment Agency Partner, it is essential and your responsibility to ensure that candidates applying to LSEG are aware of this privacy notice.
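The ETL responsibilities above center on filtering, normalising, and deriving fields with a data-quality gate. In production this logic would live in a PySpark job on Glue or EMR; the sketch below uses plain Python so it is self-contained, and every field name is hypothetical:

```python
def transform(records):
    """Minimal ETL transform step: drop rows failing a quality gate,
    normalise the symbol, and cast the price to a float."""
    out = []
    for r in records:
        if r.get("price") is None:   # data-quality gate: reject null prices
            continue
        out.append({
            "symbol": r["symbol"].upper(),
            "price": float(r["price"]),
        })
    return out

raw = [
    {"symbol": "lseg", "price": "110.5"},
    {"symbol": "aapl", "price": None},   # filtered out
]
print(transform(raw))  # [{'symbol': 'LSEG', 'price': 110.5}]
```

The same filter/map shape maps directly onto Spark's `filter` and `map`/`select` operations when scaled out.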

Posted 6 days ago

Apply

3.0 years

1 - 3 Lacs

Hyderābād

On-site

Join us to lead technology support in a dynamic environment, enhancing your career with growth opportunities.

Job Summary
As a Technology Support Lead at JPMorgan Chase within the Consumer & Community Banking division, you will play a pivotal leadership role in maintaining the operational stability, availability, and performance of our production services. Your responsibilities will include identifying, troubleshooting, and resolving issues to guarantee a seamless user experience.

Job Responsibilities
Provide end-to-end application and infrastructure service delivery for successful business operations.
Execute policies and procedures ensuring engineering and operational stability.
Monitor production environments for anomalies and address issues using standard observability tools.
Escalate and communicate issues and solutions to business and technology stakeholders.
Lead incident, problem, and change management in support of full-stack technology systems.

Required Qualifications, Capabilities, and Skills
Formal training or certification in software engineering concepts and 3+ years of applied experience.
Proficiency on the AWS Cloud Platform, with system design, application development, testing, and operational stability.
Hands-on experience with infrastructure-as-code tools, such as Terraform and Helm charts.
Experience in designing, deploying, and managing Kubernetes clusters across various environments.
Minimum 4+ years of experience with Kubernetes, Terraform, Python, and shell scripting technologies.
Experience with Continuous Integration and Delivery tools like Jenkins.

Preferred Qualifications, Capabilities, and Skills
Ability to lead by example and guide the team with technical expertise.
Ability to identify risks/issues for the project and manage them accordingly.
Experience with PostgreSQL, AWS RDS, Aurora, or Teradata preferred.
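The incident-management responsibility above boils down to tracking resolution times against severity-based targets. A toy sketch of that check (the SLA minutes and ticket shape are made up for illustration, not a JPMorgan or ServiceNow schema):

```python
# Hypothetical resolution-time targets in minutes, by severity.
SEVERITY_SLA_MIN = {"sev1": 60, "sev2": 240, "sev3": 1440}

def breached(incidents):
    """Return IDs of incidents whose resolution time exceeded
    the SLA target for their severity."""
    return [i["id"] for i in incidents
            if i["resolved_min"] > SEVERITY_SLA_MIN[i["severity"]]]

tickets = [
    {"id": "INC1", "severity": "sev1", "resolved_min": 45},   # within SLA
    {"id": "INC2", "severity": "sev2", "resolved_min": 300},  # breached
]
print(breached(tickets))  # ['INC2']
```

In practice the IDs would feed a report or an escalation workflow rather than a print statement.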

Posted 6 days ago

Apply

6.0 years

29 Lacs

Hyderābād

On-site

Requirement:

1. Cloud (Mandatory):
Proven technical experience with AWS or Azure, including scripting, migration, and automation.
Hands-on knowledge of services and implementations such as Landing Zone, centralized networking (AWS Transit Gateway / Azure Virtual WAN), serverless (AWS Lambda / Azure Functions), EC2 / Virtual Machines, S3 / Blob Storage, VPC / Virtual Network, IAM, SCPs / Azure Policies, monitoring (CloudWatch / Azure Monitor), SecOps, FinOps, etc.
Experience with migration strategies and tools such as AWS MGN, Database Migration Service, and Azure Migrate.
Experience in scripting languages such as Python, Bash, Ruby, Groovy, Java, or JavaScript.

2. Automation (Mandatory):
Hands-on experience with Infrastructure-as-Code (IaC) automation and configuration management tools such as Terraform, CloudFormation, Azure ARM, Bicep, Ansible, Chef, or Puppet.

3. CI/CD (Mandatory):
Hands-on experience in setting up or developing CI/CD pipelines using tools such as (but not limited to) GitHub Actions, GitLab CI, Azure DevOps, Jenkins, or AWS CodePipeline.

4. Containers & Orchestration (Good to have):
Hands-on experience in provisioning and managing containers and orchestration solutions such as Docker and Docker Swarm, Kubernetes (private/public cloud platforms), OpenShift, and Helm charts.

Certification Expectations
1. Cloud (Mandatory, any of): AWS Certified SysOps Administrator - Associate; AWS Certified Solutions Architect - Associate; AWS Certified Developer - Associate; any AWS Professional/Specialty certification(s).
2. Automation (Optional, any of): Red Hat Certified Specialist in Ansible Automation; HashiCorp Certified: Terraform Associate.
3. CI/CD (Optional): GitLab Certified CI/CD Associate; GitHub Actions Certification.
4. Containers & Orchestration (Optional, any of): CKA (Certified Kubernetes Administrator); Red Hat Certified Specialist in OpenShift Administration.

Responsibilities:
Lead architecture and design discussions with architects and clients.
Apply technology best practices and AWS frameworks such as the Well-Architected Framework.
Implement solutions with an emphasis on cloud security, cost optimization, and automation.
Manage customer engagements and lead teams to deliver high-quality solutions on time.
Identify work opportunities and collaborate with leadership to grow accounts.
Own project delivery to ensure successful outcomes and positive customer experiences.
Initiate proactive meetings with leads and extended teams to highlight any gaps, delays, or other challenges.
Act as a subject matter expert in technology; train and mentor the team in functional and technical skills, and guide team members on their career progression.
Support application teams - work with application development teams to design, implement, and where necessary automate infrastructure on cloud platforms.
Drive continuous improvement - certain engagements will require you to support and maintain existing cloud environments, with an emphasis on continuously innovating through automation and enhancing stability and availability through monitoring and improving the security posture.
Drive internal practice development initiatives to promote growth and innovation within the team.
Contribute to internal assets such as technical documentation, blogs, and reusable code components.

Job Types: Full-time, Permanent
Pay: Up to ₹2,900,000.00 per year
Experience: total: 6 years (Required)
Work Location: In person
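The FinOps and cost-optimization emphasis above is often enforced through mandatory resource tagging. A minimal guardrail sketch (the tag policy and resource shape are invented for illustration; real enforcement would use SCPs, Azure Policy, or a Terraform check):

```python
# Hypothetical mandatory-tag policy for cost allocation.
REQUIRED_TAGS = {"owner", "cost-center", "environment"}

def untagged(resources):
    """Return IDs of resources missing any mandatory tag."""
    return [r["id"] for r in resources
            if REQUIRED_TAGS - set(r.get("tags", {}))]

fleet = [
    {"id": "i-1", "tags": {"owner": "a", "cost-center": "42", "environment": "prod"}},
    {"id": "i-2", "tags": {"owner": "b"}},  # missing cost-center, environment
]
print(untagged(fleet))  # ['i-2']
```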

Posted 6 days ago

Apply

0 years

0 Lacs

Hyderābād

On-site

We are Progress (Nasdaq: PRGS) - a trusted provider of software that enables our customers to develop, deploy, and manage responsible, AI-powered applications and experiences with agility and ease. We’re proud to have a diverse, global team where we value the individual and enrich our culture by considering varied perspectives, because we believe people power progress. Join us as a Site Reliability Engineer in our Product Operations division in Hyderabad and help us do what we do best: propelling business forward.

In this role, you will work on:
Data Security and Compliance: Protect systems from data breaches, prioritizing data security. Ensure compliance with PCI-DSS, HIPAA, SOC 2, and other compliance policies, standards, and procedures. Participate in quarterly, bi-yearly, and yearly audit compliance activities.
Infrastructure and Security Services: Build and maintain reliable infrastructure and security services for highly available and scalable systems, utilizing native Azure/AWS/GCP infrastructure services and other industry-leading tools.
System Administration and Automation: Perform basic system administration tasks such as configuring servers, setting up HA/DR, automating routine tasks, and backup/restore procedures. Implement automation to minimize manual work and achieve security and compliance objectives.
Automation and Tooling: Develop and maintain automation frameworks, tools, and processes to streamline operations and improve efficiency. Champion the adoption of infrastructure as code (IaC) principles for configuration management and deployment automation.
Performance Optimization: Analyze system performance and identify opportunities for optimization and efficiency improvements. Implement performance tuning strategies to enhance system reliability and scalability.
Monitoring and Observability: Design and implement comprehensive monitoring and observability solutions to proactively identify and address system issues. Utilize advanced monitoring tools and techniques to gain insights into system behavior and performance.
Incident Management and Postmortems: Participate in incident management processes, ensuring timely resolution of incidents and minimizing impact on users. Conduct postmortem reviews to identify root causes and implement preventive measures to mitigate future incidents.
Capacity Planning and Forecasting: Perform capacity planning and forecasting to anticipate resource requirements and ensure adequate scalability. Develop strategies for optimizing resource utilization and cost-effectiveness.
On-call Support and Troubleshooting: Serve on the on-call team, acting as an escalation contact for service incidents. Troubleshoot and resolve issues related to application development, deployment, and operations. Work with Technical Support to troubleshoot customer issues.
Collaboration and Agile Support: Work collaboratively with agile software development teams, providing support to developers, QA, and technical support. Collaborate with other team members during planned scheduled maintenance windows.
Customer Account Provisioning: Provision new customer accounts, including handling complex orders in coordination with Progress Sales/Professional Services. Collaborate with other engineering teams to support services before they go live through activities such as system design consulting, developing software platforms and frameworks, capacity planning, and launch reviews.
High-Availability Deployments: Implement automated high-availability deployments, ensuring system reliability and uptime.
End-to-End Solution Understanding: Become proficient in understanding how each software component, system design, and configuration are linked to form an end-to-end solution.

Your background:
Experience: Proven experience as a Site Reliability Engineer (or in a similar position) in a production capacity. You understand what it means to operate infrastructure as code and have experience developing services and automation to do so; Chef knowledge would be a plus. You have a great ability to debug and optimize code and to automate routine tasks to eliminate toil. You have a systematic problem-solving approach, coupled with strong communication skills and a sense of ownership, initiative, grit, and drive. You have designed and implemented applications and systems that scale, are resilient to failure, and are observable.
Technical Expertise: Strong understanding of Windows, Linux, automation tools (Terraform, Ansible, Chef, or Puppet), Azure/AWS services (ECS, EKS, S3, and more), and scripting languages (Shell, Python, PowerShell, or others). Knowledge of databases (Azure SQL, Postgres/RDS, graph databases), service meshes (Linkerd or Envoy), API gateways, authentication services, third-party integrations, and more. Proficient in managing containerized environments using Kubernetes, Docker, and Rancher, along with other related tools and technologies.
Security Knowledge: Familiarity with security concepts, including cloud authentication, authorization, web attacks, and environment security. Experience with network concepts, including TCP/IP, HTTP, and TLS.
Cloud Experience: Experience with cloud-hosted apps/services (Azure/AWS preferred) and translating business requirements into securely implemented capabilities in the cloud.
Education: Bachelor’s degree in Computer Science, Information Systems, or a related field.
Compliance and Communication: Proven ability to adhere to policies, standards, and procedures related to change control and operational best practices. Strong written and verbal communication skills for both technical and non-technical audiences.
Flexible and Proactive: Willingness to be flexible in responding to customer issues and ability to identify product/deployment improvements for future mitigation. You are interested in designing, analyzing, and troubleshooting large-scale distributed systems.
Regulatory Compliance: Experience with PCI, HIPAA, and SOC 2 compliance.
Must be willing to work in a US time zone (4:30 p.m. to 1:30 a.m. IST).

If this sounds like you and fits your experience and career goals, we’d be happy to chat. What we offer in return is the opportunity to experience a great company culture with wonderful colleagues to learn from and collaborate with, and also to enjoy:
Compensation: Competitive remuneration package; Employee Stock Purchase Plan enrolment.
Vacation, Family, and Health: 30 days of earned leave; an extra day off for your birthday; various other leaves such as marriage leave, casual leave, maternity leave, and paternity leave; premium group medical insurance for employees and five dependents, personal accident insurance coverage, and life insurance coverage; professional development reimbursement; interest subsidy on loans - either vehicle or personal.
Apply now! #LI-SR1 #LI-Hybrid
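The capacity planning and forecasting duty above can be illustrated with the simplest possible projection: a least-squares linear fit over historical usage. This is a sketch only; real SRE forecasting would also model seasonality and non-linear growth, and the metric here is hypothetical:

```python
def forecast(usage, periods_ahead):
    """Project future resource usage by fitting a least-squares
    line to the history and extrapolating periods_ahead steps."""
    n = len(usage)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(usage) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, usage)) / \
            sum((x - x_mean) ** 2 for x in xs)
    intercept = y_mean - slope * x_mean
    return intercept + slope * (n - 1 + periods_ahead)

disk_gb = [100, 110, 120, 130]   # perfectly linear growth, for clarity
print(forecast(disk_gb, 3))      # 160.0 - projected usage three periods out
```

Comparing the projection against a capacity limit is then a one-line check that can page before the disk actually fills.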

Posted 6 days ago

Apply

4.0 years

0 Lacs

Gurgaon

On-site

Job Description:

POSITION RESPONSIBILITIES
Monitor the ServiceNow ticket queue and event monitoring tools (Zenoss) for incoming incidents and requests.
Perform initial investigation and/or troubleshooting of systems (Windows/Linux/AWS) and network issues to resolve them based on available SOPs.
Process all support incidents and task requests within SLA by following procedural requirements.
Escalate to secondary support teams in a timely manner, where necessary, to ensure timely resolution.
Thoroughly document steps taken to resolve or escalate incidents within ServiceNow tickets.
Participate in bridge calls to help resolve system outages and restore service to users and Guardian partners.
Identify and address repeating alert trends or non-actionable alerts to streamline and optimize services.
Suggest defects and product/infrastructure enhancements to improve stability and automation.
Perform incident management based on ITIL principles.
Participate in periodic skills enhancement sessions and training courses.
Prepare and deliver standard scheduled reports to support service trending and optimization.
Develop, document, and update standard operating procedures and knowledge base articles.

REPORTING RELATIONSHIPS
This position reports to the EOC Manager.

CANDIDATE QUALIFICATIONS
Functional Skills
The EOC team works primarily across four technology areas; candidates need expertise in one of them and working knowledge of the others:
Windows Server administration
Linux and Unix server administration
Network administration and telecom services
AWS DevOps

Working knowledge of the following industry-standard technologies is required for this role:
Server hardware (Cisco UCS, IBM P-Series)
Cloud technologies (Amazon Web Services (AWS) core services, Terraform, Security Groups, Jenkins)
Citrix
Microsoft Active Directory
Networking (TCP/IP, QIP (DNS), wireless, F5, Riverbed)
Security (anti-virus (Trend Micro, Symantec), SSL certificate management)

Strong experience working with ticketing tools such as ServiceNow; monitoring tools such as Zenoss; cloud monitoring tools (CloudWatch, CloudTrail); and AppDynamics (or a similar APM tool)
Strong problem-solving and troubleshooting skills
Keen analytical and structured approach to problem solving
Ability to follow instructions and Standard Operating Procedures (SOPs)
Excellent written and spoken English language skills with an ability to speak clearly
Outstanding customer service skills and dedication to customer satisfaction
Excellent documentation skills
Proven ability to work independently
Ability to work well in a team environment
Ability to accommodate flexible work schedules
Ability to triage outage bridge calls and drive calls to closure
Comfortable with “crisis” situations that require critical thinking, problem definition, and diagnosis skills
Ability to speak confidently with developers, engineers, and management

Leadership Behaviors
Takes ownership and accountability for actions and results
Takes action to resolve customer problems promptly and to ensure customer satisfaction
Demonstrates high standards of professionalism, integrity, and customer service

POSITION QUALIFICATIONS
Total of 4+ years of experience, including a minimum of 2 years in a 24x7 Network Operations Center and Service Management role
Strong Microsoft Word, Excel, and PowerPoint skills
Bachelor’s degree or similar required
A+, Network+, Security+, Microsoft, and Cisco certifications preferred
Flexibility to work 24x7x365 shifts on a rotational basis
Must be comfortable working in a highly critical, fast-paced environment with shifting priorities
The EOC operates 24x7x365 and requires onsite coverage. Shifts can vary across a 24-hour clock and may change periodically to vary work days.

Location: This position can be based in any of the following locations: Chennai, Gurgaon
Current Guardian Colleagues: Please apply through the internal Jobs Hub in Workday
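The responsibility of identifying repeating alert trends and non-actionable alerts amounts to counting alert signatures over a window and surfacing the noisy ones for tuning or suppression. A minimal sketch (the event shape is invented, not a Zenoss or ServiceNow schema):

```python
from collections import Counter

def repeating_alerts(events, min_count=3):
    """Group alerts by (host, check) signature and return the
    signatures that fired at least min_count times."""
    counts = Counter((e["host"], e["check"]) for e in events)
    return [sig for sig, n in counts.items() if n >= min_count]

events = [{"host": "db1", "check": "disk"}] * 3 + [{"host": "web1", "check": "cpu"}]
print(repeating_alerts(events))  # [('db1', 'disk')]
```

The flagged signatures would then be reviewed for threshold tuning, auto-remediation, or suppression rather than paging on-call repeatedly.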

Posted 6 days ago

Apply

7.0 years

0 Lacs

Gurgaon

On-site

EPAM is a leading global provider of digital platform engineering and development services. We are committed to having a positive impact on our customers, our employees, and our communities. We embrace a dynamic and inclusive culture. Here you will collaborate with multi-national teams, contribute to a myriad of innovative projects that deliver the most creative and cutting-edge solutions, and have an opportunity to continuously learn and grow. No matter where you are located, you will join a dedicated, creative, and diverse community that will help you discover your fullest potential.

We are seeking an experienced Lead Platform Engineer to join our Automation Engineering team. The ideal candidate will excel in cloud infrastructure automation, generative AI, and machine learning, with a strong foundation in DevOps practices and modern scripting tools. This role involves designing cutting-edge AI-driven solutions for AIOps while innovating cloud automation processes to optimize operational efficiency.

Responsibilities
Design and develop automated workflows for cloud infrastructure provisioning using IaC tools like Terraform
Build frameworks to support deployment, configuration, and management across diverse cloud environments
Develop and manage service catalog components, ensuring integration with platforms like Backstage
Implement GenAI models to enhance service catalog functionality and code quality across automation pipelines
Design and implement CI/CD pipelines and maintain CI pipeline code for cloud automation use cases
Write scripts to support cloud deployment orchestration using Python, Bash, or other scripting languages
Design and deploy generative AI models for AIOps applications such as anomaly detection and predictive maintenance
Work with frameworks like LangChain or cloud platforms such as Bedrock, Vertex AI, and Azure AI to deploy RAG workflows
Build and optimize vector databases and document sources using tools like OpenSearch, Amazon Kendra, or equivalent solutions
Prepare and label data for generative AI models, ensuring scalability and integrity
Create agentic workflows using frameworks like LangGraph or cloud GenAI platforms such as Bedrock Agents
Integrate generative AI models with operational systems and AIOps platforms for enhanced automation
Evaluate AI model performance and ensure continuous optimization over time
Develop and maintain MLOps pipelines to monitor and mitigate model decay
Collaborate with cross-functional teams to drive innovation and improve cloud automation processes
Research and recommend new tools and best practices to enhance operational efficiency

Requirements
Bachelor's or Master's degree in Computer Science, Engineering, or a related field
7+ years of experience in cloud infrastructure automation, scripting, and DevOps
Strong proficiency in IaC tools like Terraform, CloudFormation, or similar
Expertise in Python, cloud AI frameworks such as LangChain, and generative AI workflows
Demonstrated background in developing and deploying AI models such as RAG pipelines or transformers
Proficiency in building vector databases and document sources using solutions like OpenSearch or Amazon Kendra
Competency in preparing and labeling datasets for AI models and optimizing data inputs
Familiarity with cloud platforms including AWS, Google Cloud, or Azure
Capability to implement MLOps pipelines and monitor AI system performance

Nice to have
Knowledge of agentic architectures such as ReAct and flow-engineering techniques
Background in using Bedrock Agents or LangGraph for workflow creation
Understanding of integrating generative AI into legacy or complex operational systems

We offer
Opportunity to work on technical challenges that may have impact across geographies
Vast opportunities for self-development: online university, global knowledge sharing, and learning through external certifications
Opportunity to share your ideas on international platforms
Sponsored Tech Talks & Hackathons
Unlimited access to LinkedIn Learning solutions
Possibility to relocate to any EPAM office for short- and long-term projects
Focused individual development
Benefit package: health benefits, retirement benefits, paid time off, flexible benefits
Forums to explore passions beyond work (CSR, photography, painting, sports, etc.)
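The RAG and vector-database duties above rest on one core lookup: ranking document embeddings by cosine similarity to a query embedding, which is what a vector store such as OpenSearch performs inside a retrieval workflow. A self-contained sketch with tiny made-up embeddings (no real vector store or embedding model involved):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, docs, top_k=1):
    """Return IDs of the top_k documents most similar to the query."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    return [d["id"] for d in ranked[:top_k]]

docs = [
    {"id": "runbook", "vec": [1.0, 0.0]},
    {"id": "faq",     "vec": [0.0, 1.0]},
]
print(retrieve([0.9, 0.1], docs))  # ['runbook']
```

In a real RAG pipeline the retrieved documents would be stuffed into the LLM prompt as grounding context; production stores also use approximate nearest-neighbour indexes rather than a full sort.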

Posted 6 days ago

Apply

3.0 - 7.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. EY GDS – Data and Analytics (D&A) – Senior – Senior Data Scientist Role Overview: We are seeking a highly skilled and experienced Senior Data Scientist with a minimum of 3-7 years of experience in Data Science and Machine Learning, preferably with experience in NLP, Generative AI, LLMs, MLOps, Optimization techniques, and AI solution Architecture. In this role, you will play a key role in the development and implementation of AI solutions, leveraging your technical expertise. The ideal candidate should have a deep understanding of AI technologies and experience in designing and implementing cutting-edge AI models and systems. Additionally, expertise in data engineering, DevOps, and MLOps practices will be valuable in this role. Responsibilities: Contribute to the design and implementation of state-of-the-art AI solutions. Assist in the development and implementation of AI models and systems, leveraging techniques such as Large Language Models (LLMs) and generative AI. Collaborate with stakeholders to identify business opportunities and define AI project goals. Stay updated with the latest advancements in generative AI techniques, such as LLMs, and evaluate their potential applications in solving enterprise challenges. Utilize generative AI techniques, such as LLMs, to develop innovative solutions for enterprise industry use cases. Integrate with relevant APIs and libraries, such as Azure OpenAI GPT models and Hugging Face Transformers, to leverage pre-trained models and enhance generative AI capabilities.
Implement and optimize end-to-end pipelines for generative AI projects, ensuring seamless data processing and model deployment. Utilize vector databases, such as Redis, and NoSQL databases to efficiently handle large-scale generative AI datasets and outputs. Implement similarity search algorithms and techniques to enable efficient and accurate retrieval of relevant information from generative AI outputs. Collaborate with domain experts, stakeholders, and clients to understand specific business requirements and tailor generative AI solutions accordingly. Conduct research and evaluation of advanced AI techniques, including transfer learning, domain adaptation, and model compression, to enhance performance and efficiency. Establish evaluation metrics and methodologies to assess the quality, coherence, and relevance of generative AI outputs for enterprise industry use cases. Ensure compliance with data privacy, security, and ethical considerations in AI applications. Leverage data engineering skills to curate, clean, and preprocess large-scale datasets for generative AI applications. Requirements: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. A Ph.D. is a plus. Minimum 3-7 years of experience in Data Science and Machine Learning. In-depth knowledge of machine learning, deep learning, and generative AI techniques. Proficiency in programming languages such as Python, R, and frameworks like TensorFlow or PyTorch. Strong understanding of NLP techniques and frameworks such as BERT, GPT, or Transformer models. Familiarity with computer vision techniques for image recognition, object detection, or image generation. Experience with cloud platforms such as Azure, AWS, or GCP and deploying AI solutions in a cloud environment. Expertise in data engineering, including data curation, cleaning, and preprocessing. Knowledge of trusted AI practices, ensuring fairness, transparency, and accountability in AI models and systems. 
Strong collaboration with software engineering and operations teams to ensure seamless integration and deployment of AI models. Excellent problem-solving and analytical skills, with the ability to translate business requirements into technical solutions. Strong communication and interpersonal skills, with the ability to collaborate effectively with stakeholders at various levels. Understanding of data privacy, security, and ethical considerations in AI applications. Track record of driving innovation and staying updated with the latest AI research and advancements. Good to Have Skills: Apply trusted AI practices to ensure fairness, transparency, and accountability in AI models and systems. Utilize optimization tools and techniques, including MIP (Mixed Integer Programming). Drive DevOps and MLOps practices, covering continuous integration, deployment, and monitoring of AI models. Implement CI/CD pipelines for streamlined model deployment and scaling processes. Utilize tools such as Docker, Kubernetes, and Git to build and manage AI pipelines. Apply infrastructure as code (IaC) principles, employing tools like Terraform or CloudFormation. Implement monitoring and logging tools to ensure AI model performance and reliability. Collaborate seamlessly with software engineering and operations teams for efficient AI model integration and deployment. Familiarity with DevOps and MLOps practices, including continuous integration, deployment, and monitoring of AI models. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
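The role above calls for "similarity search algorithms" to retrieve relevant information from vector stores. As a hedged illustration only (the corpus, vectors, and function names below are invented for this sketch), brute-force cosine-similarity top-k retrieval looks like:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def top_k(query, corpus, k=3):
    """Return the ids of the k corpus vectors most similar to query.

    corpus: dict mapping doc_id -> embedding vector.
    """
    ranked = sorted(corpus.items(),
                    key=lambda item: cosine(query, item[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

# Toy 2-d "embeddings"; production systems delegate this to a
# vector database (e.g., Redis with a vector index) rather than
# scanning every document.
corpus = {"doc_a": [1.0, 0.0], "doc_b": [0.0, 1.0], "doc_c": [1.0, 1.0]}
```

The brute-force scan is O(n) per query; vector databases replace it with approximate nearest-neighbor indexes while keeping the same ranked-retrieval interface.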

Posted 6 days ago

Apply

3.0 - 8.0 years

0 Lacs

Gurgaon

On-site

EPAM is a leading global provider of digital platform engineering and development services. We are committed to having a positive impact on our customers, our employees, and our communities. We embrace a dynamic and inclusive culture. Here you will collaborate with multi-national teams, contribute to a myriad of innovative projects that deliver the most creative and cutting-edge solutions, and have an opportunity to continuously learn and grow. No matter where you are located, you will join a dedicated, creative, and diverse community that will help you discover your fullest potential. We are seeking a skilled Platform Engineer to join our Automation Engineering team, bringing expertise in cloud infrastructure automation, DevOps, scripting, and advanced AI/ML practices. The role focuses on integrating generative AI into automation workflows, enhancing operational efficiency, and supporting cloud-first initiatives. Responsibilities Design cloud automation workflows using Infrastructure-as-Code tools such as Terraform or CloudFormation Build scalable frameworks to manage infrastructure provisioning, deployment, and configuration across multiple cloud platforms Create service catalog components compatible with automation platforms like Backstage Integrate generative AI models to improve service catalog functionalities, including automated code generation and validation Architect CI/CD pipelines for automated build, test, and deployment processes Maintain deployment automation scripts utilizing technologies such as Python or Bash Implement generative AI models (e.g., RAG, agent-based workflows) for AIOps use cases like anomaly detection and root cause analysis Employ AI/ML tools such as LangChain, Bedrock, Vertex AI, or Azure AI for advanced generative AI solutions Develop vector databases and document sources using services like Amazon Kendra, OpenSearch, or custom solutions Engineer data pipelines to stream real-time operational insights that support AI-driven 
automation Build MLOps pipelines to deploy and monitor generative AI models, ensuring optimal performance and avoiding model decay Select appropriate LLM models for specific AIOps use cases and integrate them effectively into workflows Collaborate with cross-functional teams to design and refine automation and AI-driven processes Research emerging tools and technologies to enhance operational efficiency and scalability Requirements Bachelor's or Master's degree in Computer Science, Engineering, or related field 3-8 years of experience in cloud infrastructure automation, DevOps, and scripting Proficiency with Infrastructure-as-Code tools such as Terraform or CloudFormation Expertise in Python and generative AI frameworks like RAG and agent-based workflows Knowledge of cloud-based AI services, including Bedrock, Vertex AI, or Azure AI Familiarity with vector databases like Amazon Kendra, OpenSearch, or custom database solutions Competency in data engineering tasks such as feature engineering, labeling, and real-time data streaming Proven track record in creating and maintaining MLOps pipelines for AI/ML models in production environments Nice to have Background in flow engineering tools such as LangGraph or platform-specific workflow orchestration tools Understanding of comprehensive AIOps processes to refine cloud-based automation solutions We offer Opportunity to work on technical challenges that may have an impact across geographies Vast opportunities for self-development: online university, knowledge sharing opportunities globally, learning opportunities through external certifications Opportunity to share your ideas on international platforms Sponsored Tech Talks & Hackathons Unlimited access to LinkedIn learning solutions Possibility to relocate to any EPAM office for short and long-term projects Focused individual development Benefit package: Health benefits Retirement benefits Paid time off Flexible benefits Forums to explore beyond work passion (CSR, photography, 
painting, sports, etc.)

Posted 6 days ago

Apply

3.0 years

0 Lacs

Puducherry, India

On-site

We're Hiring! Backend Engineer (NestJS / Node.js) Location: Pondicherry / Chennai Employment Type: Full-time Experience: 3+ Years Department: Engineering / Backend Team Industry: Food Tech / E-commerce / SaaS
Responsibilities
● Design, build, and maintain scalable backend services using NestJS and Node.js.
● Implement clean, testable REST APIs and microservices to support frontend, mobile, and third-party integrations.
● Integrate with MongoDB and MySQL databases effectively.
● Architect and build event-driven systems using Apache Kafka (or similar message brokers).
● Write reusable, modular, and performant code using Object-Oriented Programming (OOP) principles.
● Implement role-based access control, authentication (JWT, OAuth), and user management.
● Deploy and manage serverless functions using AWS Lambda and other AWS services.
● Collaborate with cross-functional teams: frontend, product, QA, and DevOps to deliver features end-to-end.
● Participate in code reviews, architecture discussions, and continuous improvement processes.
Required Skills
● Strong proficiency in Node.js with the NestJS framework.
● Experience with both MongoDB and MySQL (data modeling, query optimization).
● Solid understanding of microservices architecture and API versioning.
● Practical experience with Kafka or other event-streaming platforms.
● Working knowledge of AWS Lambda, API Gateway, and other AWS services.
● Deep understanding of OOP, design patterns, and MVC/MVVM architectures.
● Experience building and consuming RESTful APIs (GraphQL is a plus).
● Familiarity with CI/CD pipelines and containerization (Docker).
Nice to Have
● Experience with frontend stack: React / Next.js (for better collaboration).
● Familiarity with testing frameworks (Jest, Mocha, Supertest).
● Exposure to DevOps practices, monitoring (Prometheus, Grafana), and infrastructure as code (Terraform, CDK).
Note: Previous experience in food tech, logistics, or e-commerce platforms is a plus. Ready to Join? 
Drop your CV at 📧 abdul.r@redblox.io Let’s build amazing things together! 💡💻 #NestJS #NodeJS #TypeScript #BackendDeveloper #DeveloperJobs #TechJobs #Hiring #NowHiring

Posted 6 days ago

Apply

50.0 years

4 - 6 Lacs

Gurgaon

On-site

About the Opportunity Job Type: Permanent Application Deadline: 18 August 2025 Job Description Title: Platform Engineer Department: Global Platform Solutions Location: Gurgaon, India Reports To: Associate Director Engineering Level: 4
We’re proud to have been helping our clients build better financial futures for over 50 years. How have we achieved this? By working together - and supporting each other - all over the world. So, join our team and feel like you’re part of something bigger.
About your team
The GPS Delivery - Record Keeping team consists of approximately 200 members responsible for developing and maintaining the systems-of-record used to manage the accounts and investments of our more than 1.5 million workplace and retail customers in the UK. In performing these duties, we play a critical role in delivering our core product and value proposition to these clients both currently and in the future.
About your role
As a Cloud Engineer at Fidelity International, you will work with senior business leaders, product owners, and technology teams to develop or enhance the record-keeping platform. Collaborating with the business and technology architects, you will leverage your cloud engineering experience for design, definition, exploration, and delivery of solutions. Key qualifications include:
- Agile environment experience using tools like Jira and Confluence
- Knowledge of cloud architecture, networking, and DevOps toolchains
- Proficiency in Python and Unix scripting
You should be passionate about delivering high-quality, scalable solutions while focusing on customer needs and being open to challenges. You'll influence stakeholders, support team formation, and deliver a greenfield solution, collaborating and sharing knowledge with the global team.
About you
This role requires a proactive engineer with a strong technical background and influence, who can work with development teams on technology architecture, cloud practices, troubleshooting, and implementation.
Responsibilities:
- Provide technical expertise in design and coding
- Collaborate with product owners to identify improvements and customer requirements
- Ensure timely, efficient, and cost-effective delivery
- Manage stakeholders across Technology and Business teams
- Ensure technical solutions meet functional and non-functional requirements and align with Global Technology Strategies
- Serve as a trusted advisor to the business
- Partner with Architecture, business, and central groups within a global team
The ideal candidate will possess over six years of experience as a software engineer, with expertise in the following areas:
- Extensive experience with Kubernetes (K8s) for deploying, managing, and maintaining containerized applications
- In-depth knowledge of AWS services, including EC2, VPC, IAM, serverless offerings, RDS, Route 53, and CloudFront
- Proficiency in leveraging generative AI for daily tasks, utilizing agents and co-pilots to build, test, and manage applications and code
- Comprehensive understanding of agentic, A2A, MCP, RAG, and AI concepts
- Experience with monitoring and logging tools for Kubernetes clusters
- Strong working knowledge of containerization technologies such as Docker
- Proven experience in managing monolithic record-keeping platforms within DevOps and deployment pipelines
- Understanding of UNIX system architecture
- Solid grasp of networking core concepts
- Advanced knowledge and expertise in serverless architecture and related AWS offerings
- Proficiency in Terraform, including core concepts and hands-on implementation
- Hands-on experience with Unix scripting and Python programming
- Practical experience with CI/CD tools such as Jenkins and Ansible
- Familiarity with container technologies will be advantageous
- Working knowledge of APIs, caching mechanisms, and messaging systems
- Mastery of at least one programming language or framework, such as Java, NodeJS, or Python
- Expertise in test-driven development (TDD) and pair programming best practices alongside CI/CD pipelines
- Excellent communication skills and a keen interest in a collaborative pair-programming environment
- A strong passion for professional growth and addressing challenging problems
This position demands candidates who are committed to continuous learning and capable of tackling complex issues with innovative solutions.
Feel rewarded
For starters, we’ll offer you a comprehensive benefits package. We’ll value your wellbeing and support your development. And we’ll be as flexible as we can about where and when you work – finding a balance that works for all of us. It’s all part of our commitment to making you feel motivated by the work you do and happy to be part of our team. For more about our work, our approach to dynamic working and how you could build your future here, visit careers.fidelityinternational.com.

Posted 6 days ago

Apply

1.0 - 2.0 years

3 - 3 Lacs

Panchkula

On-site

Job Summary: As an Associate DevOps Engineer, you will be responsible for setting up and maintaining the infrastructure needed to deploy our projects. This includes managing domains and DNS, deploying MERN stack applications, Python projects, and WordPress sites, and ensuring smooth operation on both Google Cloud and AWS. Key Responsibilities: Configure and manage domains, DNS, and SSL certificates. Set up and deploy MERN stack applications. Deploy and maintain Python-based projects. Manage and deploy WordPress sites. Utilize Google Cloud and AWS for deployment and management of resources. Implement CI/CD pipelines to automate deployments. Monitor and maintain production systems to ensure reliability and performance. Collaborate with development teams to streamline deployment processes. Troubleshoot and resolve infrastructure issues as they arise. Document processes, configurations, and infrastructure setups. Install, configure, and maintain Linux servers and workstations. Manage user accounts, permissions, and access controls. Perform system monitoring, performance tuning, and optimization. Troubleshoot and resolve system and network issues. Apply OS patches, security updates, and system upgrades. Implement and maintain backup and disaster recovery solutions. Qualifications: Bachelor's degree in Computer Science, Information Technology, or a related field with 1-2 years of experience. Basic understanding of networking concepts, including DNS, domains, and SSL. Familiarity with MERN stack (MongoDB, Express.js, React.js, Node.js). Basic knowledge of Python and its deployment practices. Understanding of WordPress setup and deployment. Exposure to cloud platforms, particularly Google Cloud and AWS. Basic knowledge of CI/CD pipelines and tools like Jenkins, GitHub Actions, or similar. Strong problem-solving skills and attention to detail. Good communication skills and ability to work collaboratively in a team environment. 
Preferred Skills: Experience with containerization tools like Docker. Familiarity with Infrastructure as Code (IaC) tools like Terraform or CloudFormation. Understanding of version control systems, especially Git. Basic scripting skills in Shell, Python, or similar languages. Exposure to monitoring tools like Prometheus, Grafana, or similar. Additional Requirements: Proficiency in Docker and containerization. Shell scripting skills. Knowledge of Linux systems and administration. Proficiency in Git and version control. Job Type: Full-time Pay: ₹25,000.00 - ₹30,000.00 per month Benefits: Provident Fund Work Location: In person

Posted 6 days ago

Apply

4.0 years

0 Lacs

Delhi

On-site

About us Bain & Company is a global management consulting firm that helps the world’s most ambitious change makers define the future. Across 65 offices in 40 countries, we work alongside our clients as one team with a shared ambition to achieve extraordinary results, outperform the competition and redefine industries. Since our founding in 1973, we have measured our success by the success of our clients, and we proudly maintain the highest level of client advocacy in the industry. In 2004, the firm established its presence in the Indian market by opening the Bain Capability Center (BCC) in New Delhi. The BCC is now known as BCN (Bain Capability Network), with its nodes across various geographies. BCN is an integral part, and the largest unit, of Expert Client Delivery (ECD). ECD plays a critical role as it adds value to Bain's case teams globally by supporting them with analytics and research solutioning across all industries, specific domains for corporate cases, client development, private equity diligence or Bain intellectual property. The BCN comprises Consulting Services, Knowledge Services and Shared Services. Who you will work with Pyxis leverages a broad portfolio of 50+ alternative datasets to provide real-time market intelligence and customer insights through a unique business model that enables us to provide our clients with competitive intelligence unrivaled in the market today. We provide insights and data via custom one-time projects or ongoing subscriptions to data feeds and visualization tools. We also offer custom data and analytics projects to suit our clients’ needs. Pyxis can help teams answer core questions about market dynamics, products, customer behavior, and ad spending on Amazon with a focus on providing our data and insights to clients in the way that best suits their needs. 
Refer to: www.pyxisbybain.com
What you’ll do
- Set up tools and required infrastructure
- Define and set development, test, release, update, and support processes for DevOps operations
- Review, verify, and validate the software code developed in the project
- Apply troubleshooting techniques and fix code bugs
- Monitor processes during the entire lifecycle for adherence, and update or create new processes for improvement and to minimize waste
- Encourage and build automated processes wherever possible
- Identify and deploy cybersecurity measures by continuously performing vulnerability assessment and risk management
- Incident management and root cause analysis
- Select and deploy appropriate CI/CD tools
- Strive for continuous improvement and build a continuous integration, continuous delivery, and continuous deployment (CI/CD) pipeline
- Mentor and guide team members
- Manage periodic reporting on progress to management
About you
- A Bachelor’s or Master’s degree in Computer Science or a related field
- 4+ years of software development experience, with 3+ years as a DevOps engineer
- High proficiency in cloud management (AWS heavily preferred), including networking, API gateways, infra deployment automation, and cloud ops
- Knowledge of DevOps/code/infra management tools (GitHub, SonarQube, Snyk, AWS X-Ray, Docker, Datadog, and containerization)
- Infra automation using Terraform, environment creation and management, containerization using Docker
- Proficiency with Python
- Disaster recovery, implementation of high-availability apps/infra, business continuity planning
What makes us a great place to work
We are proud to be consistently recognized as one of the world's best places to work, a champion of diversity and a model of social responsibility. 
We are currently ranked the #1 consulting firm on Glassdoor’s Best Places to Work list, and we have maintained a spot in the top four on Glassdoor's list for the last 12 years. We believe that diversity, inclusion and collaboration are key to building extraordinary teams. We hire people with exceptional talents, abilities and potential, then create an environment where you can become the best version of yourself and thrive both professionally and personally. We are publicly recognized by external parties such as Fortune, Vault, Mogul, Working Mother, Glassdoor and the Human Rights Campaign for being a great place to work for diversity and inclusion, women, LGBTQ and parents.

Posted 6 days ago

Apply

1.0 - 2.0 years

1 - 2 Lacs

Mohali

On-site

Job Description: Bringle Tech is seeking a talented and motivated Full Stack Developer with 1–2 years of hands-on experience in modern web technologies. The ideal candidate should be passionate about building scalable web applications, writing clean code, and working collaboratively in a fast-paced environment. Technical Stack: Frontend: JavaScript, TypeScript, React.js, Next.js, Tailwind CSS Backend: Node.js, Express.js Databases: MongoDB, PostgreSQL DevOps & Cloud: AWS (EC2, S3, RDS), Docker, GitHub Actions, Terraform, Jenkins, Nginx, Kubernetes (basic) Others: Git, JWT, GraphQL, Prometheus, GitOps, Linux Server Management Key Responsibilities: Design, develop, test, and maintain full-stack applications Build RESTful and GraphQL APIs Develop responsive front-end interfaces using modern frameworks Implement CI/CD pipelines and deployment workflows Collaborate with DevOps teams for cloud-based infrastructure Maintain and optimize existing applications Write clean, scalable, and well-documented code Must-Have Skills: Full Stack Web Development REST & GraphQL APIs CI/CD Pipelines (GitHub Actions/Jenkins) Containerization with Docker Infrastructure as Code (Terraform) AWS Cloud Services DevOps Automation & GitOps Monitoring (Prometheus/Grafana) Git version control Job Type: Full-time Pay: ₹10,000.00 - ₹20,000.00 per month Work Location: In person

Posted 6 days ago

Apply

2.0 years

2 - 8 Lacs

Mohali

On-site

We are seeking a DevOps Engineer with strong experience in CI/CD pipelines, cloud infrastructure, automation, and networking. The ideal candidate will ensure seamless deployment, high system reliability, and secure networking practices. Key Responsibilities: Design, build, and maintain CI/CD pipelines (e.g., Jenkins, GitLab CI) Automate infrastructure provisioning using tools like Terraform, Ansible, etc. Manage and optimize cloud infrastructure (AWS, Azure, GCP) Implement and manage containerized applications using Docker and Kubernetes Monitor system performance, availability, and security Configure and manage internal networks, VPNs, firewalls, and load balancers Troubleshoot networking issues and ensure minimal downtime Maintain network documentation and ensure adherence to security standards Collaborate with developers and QA to support smooth deployments and scalability Implement system monitoring, alerting, and logging (e.g., Prometheus, Grafana, ELK stack) Required Skills and Qualifications: 2–5 years of experience as a DevOps Engineer or similar role Hands-on experience with cloud platforms and infrastructure-as-code tools Strong scripting skills (Bash, Shell, Python, etc.) Solid understanding of computer networking (TCP/IP, DNS, VPN, firewalls) Experience with containerization and orchestration (Docker, Kubernetes) Familiarity with Linux/Unix-based systems Good understanding of network protocols and troubleshooting tools Preferred Qualifications: Bachelor’s degree in Computer Science, Information Technology, or related field Certifications in AWS/Azure/GCP or networking (CCNA, etc.) are a plus Job Type: Full-time Pay: ₹17,776.87 - ₹69,135.46 per month Work Location: In person Speak with the employer +91 9872235857

Posted 6 days ago

Apply

7.0 years

3 - 6 Lacs

Bhopal

On-site

We are seeking a highly skilled and experienced Senior DevOps Engineer with a minimum of 7 years of professional experience, including at least 5 years in designing, implementing, and managing large-scale IT infrastructures on AWS and/or Azure. The ideal candidate must have strong hands-on expertise in Docker and Kubernetes, along with cloud-native architectures, automation, CI/CD, monitoring, and DevSecOps practices. Key Responsibilities: Design, implement, and manage scalable and secure cloud infrastructure using AWS and/or Azure services. Build and manage CI/CD pipelines using tools such as Jenkins, GitLab CI, GitHub Actions, or Azure DevOps. Create and maintain Infrastructure as Code using tools like Terraform, CloudFormation, or ARM templates. Lead cloud architecture planning, capacity management, and disaster recovery implementations. Build, deploy, and orchestrate containers using Docker and Kubernetes (EKS/AKS). Implement and manage observability stacks using Prometheus, Grafana, ELK Stack, CloudWatch, Azure Monitor, etc. Ensure cloud security and governance policies are implemented and followed. Optimize infrastructure performance and cost on cloud environments. Mentor team members and promote DevOps best practices across the organization. Troubleshoot infrastructure, application, and network issues in production and development environments. Mandatory Skills & Qualifications: B.E./B.Tech/MCA degree from a recognized institution. Minimum 7 years of overall experience in DevOps, Infrastructure, or Cloud Engineering. At least 5 years of hands-on experience with AWS and/or Azure. Strong proficiency in Docker for containerization. In-depth experience with Kubernetes for container orchestration (AKS/EKS preferred). Expertise in Infrastructure as Code (Terraform, CloudFormation, ARM). Hands-on experience with CI/CD pipelines and tools like Jenkins, GitLab CI/CD, Azure DevOps. Proficient in scripting languages such as Bash, Python, or PowerShell. Strong knowledge of Linux systems administration and networking fundamentals. Solid understanding of Git and source control workflows. Familiarity with security standards and cloud compliance frameworks is a plus. Excellent analytical and troubleshooting skills. Preferred Certifications (optional): AWS Certified DevOps Engineer / Solutions Architect Microsoft Certified: Azure DevOps Engineer / Solutions Architect Certified Kubernetes Administrator (CKA) Contact: 7418252567, 8778852267, 7904349866 Job Type: Full-time Pay: ₹25,000.00 - ₹50,000.00 per month Work Location: In person Speak with the employer +91 7845416995
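CI/CD pipelines like the ones described above commonly gate a deployment on a health check with exponential backoff before promoting a release. A minimal sketch (the helper and its parameters are illustrative, not from the posting; the sleep function is injectable so the logic is testable):

```python
import time

def wait_until_healthy(check, attempts=5, base_delay=1.0, sleep=time.sleep):
    """Poll check() until it returns True, backing off exponentially.

    check: zero-argument callable returning a boolean health status.
    sleep: injectable for tests; delays are base, 2*base, 4*base, ...
    Returns True as soon as a check passes, False after all attempts.
    """
    for attempt in range(attempts):
        if check():
            return True
        if attempt < attempts - 1:  # no sleep after the final attempt
            sleep(base_delay * (2 ** attempt))
    return False
```

In a Jenkins or GitLab CI job, a gate like this would wrap a call to the service's health endpoint, failing the pipeline stage if the new version never becomes healthy.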

Posted 6 days ago

Apply

0 years

0 Lacs

India

Remote

Role: NiFi Developer
Notice Period: Candidates serving notice or immediate joiners preferred
Client: Marriott
Payroll: Dminds
Work Mode: Remote
Interview Mode: Virtual

We're looking for someone who has built, deployed, and maintained NiFi clusters.

Roles & Responsibilities:
· Implement solutions using advanced AWS components (EMR, EC2, etc.) integrated with Big Data/Hadoop distribution frameworks: ZooKeeper, YARN, Spark, Scala, NiFi, etc.
· Design and implement Spark jobs to be deployed and run on existing active clusters.
· Configure Postgres databases on EC2 instances, keep the applications up and running, and troubleshoot issues to reach the desired application state.
· Create and configure secure VPCs, subnets, and security groups across private and public networks.
· Create alarms, alerts, and notifications for Spark jobs that report job status to email and Slack group messages, with logs in CloudWatch.
· Build NiFi data pipelines to process large datasets, configuring lookups for data validation and integrity.
· Generate large, integrity-checked test datasets in Java for use during the development and QA phases.
· Improve and optimize the performance of existing Spark/Scala applications running on EMR clusters.
· Build Spark jobs that convert CSV data to custom HL7/FHIR objects using FHIR APIs.
· Deploy SNS, SQS, Lambda functions, IAM roles, custom policies, and EMR with Spark/Hadoop setup (including bootstrap scripts for additional required software) in QA and production environments using Terraform scripts.
· Build Spark jobs that perform Change Data Capture (CDC) on Postgres tables and update target tables via JDBC.
· Integrate a Kafka publisher into Spark jobs to capture application errors and push them into a Postgres table.
· Work extensively on NiFi data pipelines in Docker container environments during the development phase.
· Work with the DevOps team to clusterize the NiFi pipeline on EC2 nodes, integrated with Spark, Kafka, and Postgres running on other instances over SSL handshakes in QA and production environments.
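The CDC responsibility above boils down to merging changed source rows into a target table keyed on a primary key. As a minimal, framework-free sketch (plain Python dicts standing in for the Spark-over-JDBC pipeline the posting describes; the record shapes and the `_deleted` tombstone flag are hypothetical), the merge logic looks like:

```python
# CDC-style upsert sketch: apply inserts, updates, and deletes from a
# change feed to a target keyed by primary key. In the job above this
# would be a Spark job writing to Postgres via JDBC; dicts stand in here.

def cdc_merge(target, changes, key="id"):
    """Apply a batch of change rows to `target` (mutated in place).

    target:  dict mapping primary-key value -> row dict
    changes: iterable of row dicts; a row with "_deleted": True is a delete
    """
    for row in changes:
        pk = row[key]
        if row.get("_deleted"):
            target.pop(pk, None)  # tombstone: drop the row if present
        else:
            # insert or overwrite, stripping the bookkeeping flag
            target[pk] = {k: v for k, v in row.items() if k != "_deleted"}
    return target
```

In a real pipeline the change feed would come from a CDC query or log, and the write-back would be a JDBC upsert rather than a dict assignment; the control flow is the same.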

Posted 6 days ago

Apply

3.0 years

0 Lacs

India

Remote

Thinkgrid Labs is at the forefront of innovation in custom software development. Our expert team of software engineers, architects, and UI/UX designers specialises in crafting bespoke web, mobile, and cloud applications, along with AI solutions and intelligent bots. Serving a diverse range of industries, we have a global client base across five continents. Our commitment to quality and passion for technological advancement drive us to push boundaries and set new standards. We're expanding our team with smart and creative individuals who are passionate about building high-performance, user-friendly, flexible, and maintainable software. We are hiring a Health Information Exchange (HIE) Software Engineer to work on projects for clients outside of India, so excellent oral and written communication skills are a must.

Job Title: Health Information Exchange (HIE) Software Engineer
Location: Remote
Working Hours: 3 PM IST to 12 AM IST
Experience Required: Minimum 3 years
Education: Bachelor's or Master's degree in Computer Science or Health Informatics

Who you are:
· HIE Standards Specialist: Deep, practical knowledge of IHE profiles and ITI transactions (PIX/PDQ, XDS.b, XCA, XCDR/XCT, XCPD, XDW) and familiarity with HL7 v2/v3, CDA, and FHIR.
· Integration Engineer: Proven experience building and securing SOAP and RESTful services, handling message transformation (Mirth Connect, Iguana, Apache Camel, or similar), and integrating with EMR/EHR systems.
· Master Patient Index (MPI) Pro: Hands-on experience implementing or integrating enterprise/clinical MPIs, probabilistic or deterministic matching algorithms, and patient de-duplication strategies.
· Cloud-Native Developer: Proficient in one or more modern stacks (Java/Spring Boot, .NET Core, Node.js/TypeScript, or Python/FastAPI) with microservices architecture, containerisation (Docker, Kubernetes), and deployments on AWS / Azure / GCP.
· Security & Compliance Aficionado: Working knowledge of HIPAA, CMS, ONC Certification criteria, TEFCA, OAuth 2.0/OIDC, and TLS/mTLS for secure data exchange.
· Quality Champion: Comfortable with IHE Gazelle, NIST XDS tools, Touchstone, or similar test harnesses to validate conformance and performance.
· Problem Solver & Team Player: Thrive in an agile, distributed, cross-functional environment; able to communicate complex technical ideas clearly to non-technical stakeholders.
· Passionate & Humble: Enthusiastic about improving healthcare data exchange and willing to learn continuously while empowering teammates.

What you will be doing:
· Design & Architecture: Define HIE solution architectures, data models, and APIs that implement IHE ITI profiles (PIX/PDQ, XDS.b, XCA, XCPD, XCDR, etc.), including security, scalability, and high-availability considerations.
· Development & Integration: Build and maintain services, adapters, and orchestration workflows to ingest, store, query, and retrieve clinical documents and images across disparate systems. Implement enterprise or federated MPI services with robust patient-matching logic and reconciliation workflows.
· Standards Conformance & Validation: Configure and execute automated test suites using Gazelle EVS Client, NIST validators, Inferno, or custom Postman collections to ensure full IHE/HL7 compliance.
· Performance Optimisation & Monitoring: Profile message throughput, tune database indexes (SQL/NoSQL), and fine-tune document repository/registry performance; set up dashboards (Prometheus/Grafana, CloudWatch, or Azure Monitor).
· DevOps & CI/CD: Automate build, test, and deployment pipelines (GitHub Actions, Azure DevOps, Jenkins, or GitLab CI) and manage infrastructure as code (Terraform, CloudFormation).
· Security & Compliance: Enforce role-based access controls, audit logging, encryption in transit and at rest, and risk-mitigation strategies aligned with HIPAA and ISO 27001 standards.
· Documentation & Knowledge Sharing: Produce technical design docs, sequence diagrams, data-flow diagrams, and API specs; guide junior engineers and collaborate closely with QA, analysts, and customer teams.
· Continuous Improvement: Stay current with evolving IHE profiles (e.g., Mobile Health Document Sharing), FHIR-based exchange initiatives, and industry best practices; recommend enhancements to keep our HIE offerings cutting-edge.

Benefits:
· 5-day work week (except for rare emergencies)
· 100% remote setup with a flexible work culture and international exposure
· Opportunity to work on mission-critical healthcare projects impacting providers and patients globally
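The MPI work described above centres on patient matching and de-duplication. A simplified, stdlib-only sketch of the deterministic side of that matching (field names are illustrative, not from any specific MPI product; production systems layer probabilistic scoring such as Fellegi-Sunter on top):

```python
# Deterministic patient-matching sketch: de-duplicate records by a
# normalized (last name, first name, date of birth) key. Field names
# here are hypothetical; real MPIs add probabilistic scoring, phonetic
# encodings, and manual-review queues on top of exact-key grouping.

def match_key(record):
    """Build a normalized deterministic match key for a patient record."""
    def norm(s):
        return " ".join(s.strip().lower().split())  # trim, lowercase, collapse spaces
    return (norm(record["last_name"]), norm(record["first_name"]), record["dob"])

def deduplicate(records):
    """Group records that share a match key; each group is one candidate patient."""
    groups = {}
    for rec in records:
        groups.setdefault(match_key(rec), []).append(rec)
    return groups
```

Records whose keys collide land in the same group and would then feed a reconciliation workflow; records that differ only in casing or whitespace still match because the key is normalized first.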

Posted 6 days ago

Apply