9.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Role Description

Tech Lead – Azure/Snowflake & AWS Migration

Key Responsibilities
- Design and develop scalable data pipelines using Snowflake as the primary data platform, integrating with tools like Azure Data Factory, Synapse Analytics, and AWS services.
- Build robust, efficient SQL- and Python-based data transformations for cleansing, enrichment, and integration of large-scale datasets.
- Lead migration initiatives from AWS-based data platforms to a Snowflake-centered architecture, including:
  - Rebuilding AWS Glue pipelines in Azure Data Factory or using Snowflake-native ELT approaches.
  - Migrating EMR Spark jobs to Snowflake SQL or Python-based pipelines.
  - Migrating Redshift workloads to Snowflake with schema conversion and performance optimization.
  - Transitioning S3-based data lakes (Hudi, Hive) to Snowflake external tables via ADLS Gen2 or Azure Blob Storage.
  - Redirecting Kinesis/MSK streaming data to Azure Event Hubs, followed by ingestion into Snowflake using Streams & Tasks or Snowpipe.
- Support database migrations from AWS RDS (Aurora PostgreSQL, MySQL, Oracle) to Snowflake, focusing on schema translation, compatibility handling, and data movement at scale.
- Design modern Snowflake lakehouse-style architectures that incorporate raw, staging, and curated zones, with support for time travel, cloning, zero-copy restore, and data sharing.
- Integrate Azure Functions or Logic Apps with Snowflake for orchestration and trigger-based automation.
- Implement security best practices, including Azure Key Vault integration and Snowflake role-based access control, data masking, and network policies.
- Optimize Snowflake performance and costs using clustering, multi-cluster warehouses, materialized views, and result caching.
- Support CI/CD processes for Snowflake pipelines using Git, Azure DevOps or GitHub Actions, and SQL code versioning.
- Maintain well-documented data engineering workflows, architecture diagrams, and technical documentation to support collaboration and long-term platform maintainability.

Required Qualifications
- 9+ years of data engineering experience, with 3+ years on the Microsoft Azure stack and hands-on Snowflake expertise.
- Proficiency in:
  - Python for scripting and ETL orchestration
  - SQL for complex data transformation and performance tuning in Snowflake
  - Azure Data Factory and Synapse Analytics (SQL Pools)
- Experience in migrating workloads from AWS to Azure/Snowflake, including services such as Glue, EMR, Redshift, Lambda, Kinesis, S3, and MSK.
- Strong understanding of cloud architecture and hybrid data environments across AWS and Azure.
- Hands-on experience with database migration, schema conversion, and tuning in PostgreSQL, MySQL, and Oracle RDS.
- Familiarity with Azure Event Hubs, Logic Apps, and Key Vault.
- Working knowledge of CI/CD, version control (Git), and DevOps principles applied to data engineering workloads.

Preferred Qualifications
- Extensive experience with Snowflake Streams, Tasks, Snowpipe, external tables, and data sharing.
- Exposure to MSK-to-Event Hubs migration and streaming data integration into Snowflake.
- Familiarity with Terraform or ARM templates for Infrastructure-as-Code (IaC) in Azure environments.
- Certification such as SnowPro Core, Azure Data Engineer Associate, or equivalent.

Senior Data Engineer – Azure/Snowflake Migration

Key Responsibilities
- Design and develop scalable data pipelines using Snowflake as the primary data platform, integrating with tools like Azure Data Factory, Synapse Analytics, and AWS services.
- Build robust, efficient SQL- and Python-based data transformations for cleansing, enrichment, and integration of large-scale datasets.
- Lead migration initiatives from AWS-based data platforms to a Snowflake-centered architecture, including:
  - Rebuilding AWS Glue pipelines in Azure Data Factory or using Snowflake-native ELT approaches.
  - Migrating EMR Spark jobs to Snowflake SQL or Python-based pipelines.
  - Migrating Redshift workloads to Snowflake with schema conversion and performance optimization.
  - Transitioning S3-based data lakes (Hudi, Hive) to Snowflake external tables via ADLS Gen2 or Azure Blob Storage.
  - Redirecting Kinesis/MSK streaming data to Azure Event Hubs, followed by ingestion into Snowflake using Streams & Tasks or Snowpipe.
- Support database migrations from AWS RDS (Aurora PostgreSQL, MySQL, Oracle) to Snowflake, focusing on schema translation, compatibility handling, and data movement at scale.
- Design modern Snowflake lakehouse-style architectures that incorporate raw, staging, and curated zones, with support for time travel, cloning, zero-copy restore, and data sharing.
- Integrate Azure Functions or Logic Apps with Snowflake for orchestration and trigger-based automation.
- Implement security best practices, including Azure Key Vault integration and Snowflake role-based access control, data masking, and network policies.
- Optimize Snowflake performance and costs using clustering, multi-cluster warehouses, materialized views, and result caching.
- Support CI/CD processes for Snowflake pipelines using Git, Azure DevOps or GitHub Actions, and SQL code versioning.
- Maintain well-documented data engineering workflows, architecture diagrams, and technical documentation to support collaboration and long-term platform maintainability.

Required Qualifications
- 7+ years of data engineering experience, with 3+ years on the Microsoft Azure stack and hands-on Snowflake expertise.
- Proficiency in:
  - Python for scripting and ETL orchestration
  - SQL for complex data transformation and performance tuning in Snowflake
  - Azure Data Factory and Synapse Analytics (SQL Pools)
- Experience in migrating workloads from AWS to Azure/Snowflake, including services such as Glue, EMR, Redshift, Lambda, Kinesis, S3, and MSK.
- Strong understanding of cloud architecture and hybrid data environments across AWS and Azure.
- Hands-on experience with database migration, schema conversion, and tuning in PostgreSQL, MySQL, and Oracle RDS.
- Familiarity with Azure Event Hubs, Logic Apps, and Key Vault.
- Working knowledge of CI/CD, version control (Git), and DevOps principles applied to data engineering workloads.

Preferred Qualifications
- Extensive experience with Snowflake Streams, Tasks, Snowpipe, external tables, and data sharing.
- Exposure to MSK-to-Event Hubs migration and streaming data integration into Snowflake.
- Familiarity with Terraform or ARM templates for Infrastructure-as-Code (IaC) in Azure environments.
- Certification such as SnowPro Core, Azure Data Engineer Associate, or equivalent.

Skills: AWS, Azure Data Lake, Python
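The Redshift-to-Snowflake schema conversion called out above is, at its core, a mapping of type names while preserving precision and length. A minimal sketch in Python, assuming an illustrative and deliberately incomplete type map; a real migration would use a full mapping and also handle defaults, constraints, and compression encodings:

```python
# Hypothetical subset of a Redshift -> Snowflake type mapping; names and
# rules here are illustrative assumptions, not an authoritative table.
REDSHIFT_TO_SNOWFLAKE = {
    "SMALLINT": "NUMBER(5,0)",
    "INTEGER": "NUMBER(10,0)",
    "BIGINT": "NUMBER(19,0)",
    "DOUBLE PRECISION": "FLOAT",
    "CHARACTER VARYING": "VARCHAR",
    "TIMESTAMP WITHOUT TIME ZONE": "TIMESTAMP_NTZ",
}

def convert_column(name: str, redshift_type: str) -> str:
    """Return a Snowflake DDL fragment for one Redshift column."""
    base = redshift_type.upper().split("(")[0].strip()
    target = REDSHIFT_TO_SNOWFLAKE.get(base, redshift_type)
    # Preserve any length/precision suffix, e.g. VARCHAR(256)
    if "(" in redshift_type and "(" not in target:
        target += "(" + redshift_type.split("(", 1)[1]
    return f"{name} {target}"
```

Applied over an information-schema dump, a helper like this can emit the bulk of the target DDL, leaving only the incompatible types for manual review.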
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Pune, Maharashtra
On-site
You will be joining Kodo, a company dedicated to simplifying the CFO stack for fast-growing businesses through a single platform that streamlines all purchase operations. Trusted by renowned companies like Cars24, Mensa Brands, and Zetwerk, Kodo empowers teams with flexible corporate processes and real-time insights integrated with ERPs. With $14M raised from investors like Y Combinator and Brex, Kodo is on a mission to provide exceptional products, a nurturing environment for its team, and profitable growth.

As a DevOps Engineer at Kodo, your primary responsibility will be to contribute to building and maintaining a secure and scalable fintech platform. You will collaborate with the engineering team to implement security best practices throughout the software development lifecycle. This role requires hands-on experience with tools and technologies such as Git, Linux, CI/CD tools (Jenkins, GitHub Actions), infrastructure-as-code tools (Terraform, CloudFormation), scripting/programming languages (Bash, Python, Node.js, Golang), Docker, Kubernetes, microservices paradigms, L4 and L7 load balancers, SQL/NoSQL databases, the Azure cloud, and architecting 3-tier applications.

Your key responsibilities will include implementing and enhancing logging, monitoring, and alerting systems; building and maintaining highly available production systems; optimizing applications for speed and scalability; collaborating with team members and stakeholders; and demonstrating a passion for innovation and product excellence. Experience with fintech security, CI/CD pipelines, cloud security tools like CloudTrail and CloudGuard, and security automation tools such as SOAR will be considered a bonus.

To apply for this full-time position, please send your resume and cover letter to jobs@kodo.in.
Alongside a competitive salary and benefits package, we are looking for a proactive, security-conscious, problem-solving DevOps Engineer who can communicate complex technical concepts effectively, work efficiently under pressure, and demonstrate expertise in threat modeling, risk management, and vulnerability assessment and remediation.
Posted 1 week ago
3.0 - 8.0 years
0 Lacs
Hyderabad, Telangana
On-site
You should have a minimum of 6 years of experience and be located in Hyderabad, working the Malaysia shift. As a Cloud Engineer, you will need hands-on experience in Azure or Google Cloud; Kubernetes (K8s) infrastructure and service management; Terraform and Infrastructure as Code (IaC); scripting; application integration; OS, middleware, and database-level knowledge; and troubleshooting skills. You should also have experience with CI/CD tools such as Jenkins and Bitbucket, as well as automation tools like Ansible.

Your responsibilities will include:
- Designing, developing, and maintaining cloud-based systems using Azure or Google Cloud.
- Writing scripts to automate tasks and processes.
- Integrating applications end to end, with a focus on configuration.
- Troubleshooting issues end to end, including at the OS level.
- Working with CI/CD tools such as Jenkins and Bitbucket to deploy code changes.
- Using automation tools like Ansible to streamline processes.
- Applying good knowledge of networking and connectivity setup.

Requirements for this role include:
- 3+ years of hands-on experience in Azure, including knowledge of all PaaS services.
- 3+ years of hands-on experience in Google Cloud.
- 4+ years of hands-on experience in K8s infrastructure and service management.
- 3+ years of expert knowledge in Kubernetes and Ansible.
- Expert scripting knowledge.
- Good knowledge of application integration, including end-to-end configuration.
- Overall experience of 8 years, with strong OS-level knowledge and troubleshooting skills.
- Hands-on experience with CI/CD tools such as Jenkins and Bitbucket.
- Hands-on experience with automation tools like Ansible.
- A self-motivated attitude, ready to support an operations role.
Posted 1 week ago
5.0 - 7.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Hi,

Required skills and experience:
- Terraform; good in AWS and Azure
- Writing ETL pipelines; strong in Python
- Good understanding of infrastructure automation
- Any data services
- Knowledge of DevOps and CI/CD pipelines
- 5 to 7 years' experience overall; 4 to 5 years relevant in Terraform and Python
- UK hours shift timing

Please share your resume at deepika.eaga@quesscorp.com
Posted 1 week ago
14.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Responsibilities

Agile Framework Proficiency:
- Understand and apply Agile principles, such as iterative development, continuous delivery, and self-organizing teams.
- Manage the team's automation work within the Agile development cycle, ensuring it aligns with sprint goals and product backlog items.
- Feed learnings (delays, defects) back into the process through retrospective meetings.

Automation Expertise:
- Take end-to-end accountability, from requirements gathering through development, testing, and deployment of automation projects, and ensure timely delivery.
- Design and implement automation frameworks that are scalable, maintainable, and efficient.
- Lead the development and maintenance of automated test suites and tools, covering various levels of testing (unit, integration, UI).
- Ensure coding standards and best practices are followed, and review code developed by team members.
- Stay up to date with the latest automation tools, technologies, and best practices.

Team Management and Leadership:
- Lead, mentor, and motivate the automation team to achieve their goals and develop their skills.
- Establish clear communication and collaboration channels within the team and with other Agile teams.
- Manage the team's workload, ensuring resources are effectively utilized and deadlines are met.

Collaboration and Communication:
- Collaborate with product owners, scrum masters, and other development team members to ensure automation efforts align with project goals.
- Actively participate in Agile ceremonies (e.g., sprint planning, daily stand-ups, sprint reviews).
- Effectively communicate the status of automation tasks and projects to stakeholders.

Problem-Solving and Decision-Making:
- Identify and resolve technical issues and impediments that may hinder the team's progress.
- Make informed decisions about the automation strategy and tools that best support the Agile development process.
- Promote continuous improvement and learning within the team.

Experience
- Overall 14+ years of experience, with 10+ years of relevant experience.
- Experience leading a team of automation developers.
- Expertise in designing, architecting, and developing automations using tools like Ansible, Terraform, and Python.
- Experience with Linux, Windows, and networking for automation development and troubleshooting.
- Expertise in writing code in a programming/scripting language such as Python or shell to automate tasks.
- Good communication skills, with experience reporting and presenting to leaders.
- Familiarity with CI/CD tools like Jenkins and source control management tools like GitHub and Bitbucket.
Posted 1 week ago
7.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Title: Cloud Architect with DevOps
Location: Noida (Hybrid)
Job Type: Full-Time | Permanent
Experience: 7+ years

Job Summary: We are seeking an experienced Cloud Architect with strong DevOps expertise to lead and support our cloud transformation journey. The ideal candidate will be responsible for designing scalable and secure cloud architectures, driving cloud migration from traditional ETL tools (e.g., IBM DataStage) to modern cloud-native solutions, and enabling DevOps automation and best practices. The candidate must also have strong hands-on experience with Spark and Snowflake, along with a strong background in optimizing cloud performance and addressing cloud security vulnerabilities.

Key Responsibilities:
- Design and implement scalable, secure, and high-performance cloud architectures in AWS, Azure, or GCP.
- Lead the cloud migration of ETL workloads from IBM DataStage to cloud-native or Spark-based pipelines.
- Architect and maintain Snowflake data warehouse solutions, ensuring high performance and cost optimization.
- Implement DevOps best practices, including CI/CD pipelines, infrastructure as code (IaC), monitoring, and logging.
- Drive automation and operational efficiency across build, deployment, and environment provisioning processes.
- Proactively identify and remediate cloud security vulnerabilities, ensuring compliance with industry best practices.
- Collaborate with cross-functional teams, including data engineers, application developers, and cybersecurity teams.
- Provide architectural guidance on Spark-based big data processing pipelines in cloud environments.
- Support troubleshooting, performance tuning, and optimization across platforms and tools.

Required Qualifications:
- 7+ years of experience in cloud architecture, DevOps engineering, and data platform modernization.
- Strong expertise in AWS, Azure, or GCP cloud platforms.
- Proficient in Apache Spark for large-scale data processing.
- Hands-on experience with Snowflake architecture, performance tuning, and data governance.
- Deep knowledge of DevOps tools: Terraform, Jenkins, Git, Docker, Kubernetes, Ansible, etc.
- Experience with cloud migration, especially from legacy ETL tools like IBM DataStage.
- Strong scripting and automation skills in Python, Bash, or PowerShell.
- Good understanding of networking, cloud security, IAM, VPCs, and compliance standards.
- Experience implementing CI/CD pipelines, observability, and incident response in cloud environments.

Preferred Qualifications:
- Certification in one or more cloud platforms (e.g., AWS Solutions Architect, Azure Architect).
- Experience with data lake and lakehouse architectures.
- Familiarity with modern data orchestration tools like Airflow, dbt, or Glue.
- Working knowledge of Agile methodologies and DevOps culture.
- Familiarity with cost management and optimization in cloud deployments.
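As a concrete instance of the CI/CD-plus-IaC practice described above, a pipeline can gate deployments on the machine-readable plan that `terraform show -json` emits, failing the build when a change would destroy resources. A hedged sketch in Python; only a tiny subset of the plan format is assumed here, and the gating policy itself is an illustrative choice:

```python
import json

def destructive_changes(plan_json: str) -> list:
    """Return addresses of resources a Terraform plan would delete.

    Reads the `resource_changes` array of a `terraform show -json` plan;
    only this small subset of the format is assumed in this sketch.
    """
    plan = json.loads(plan_json)
    flagged = []
    for change in plan.get("resource_changes", []):
        if "delete" in change.get("change", {}).get("actions", []):
            flagged.append(change["address"])
    return flagged

# A CI job might fail (or require manual approval) whenever
# destructive_changes(...) returns a non-empty list.
```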
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
As a Site Reliability Engineer at Rapid Circle, you will have the opportunity to make a significant impact and drive positive change through your work. Our Cloud Pioneers are dedicated to assisting clients in their digital transformation journey, and if you are someone who thrives on constant improvement and positive change, then this role is perfect for you.

In this position, you will collaborate with customers on projects across different sectors, such as healthcare, manufacturing, and energy. Your contributions may involve ensuring the secure availability of research data in the healthcare industry or working on challenging projects in the manufacturing and energy markets.

At Rapid Circle, we foster a culture of curiosity and continuous improvement. We are dedicated to enhancing our expertise to assist customers in navigating a rapidly evolving landscape. Through knowledge sharing and exploration of new learning avenues, we aim to stay at the forefront of technological advancements. As our company experiences rapid growth, we are seeking the right individual to join our team.

In this role, you will have the autonomy to pursue personal development opportunities. With a wealth of in-house expertise (MVPs) spanning the Netherlands, Australia, and India, you will have the chance to collaborate closely with international colleagues. This collaborative environment will enable you to challenge yourself, carve out your growth path, and embrace freedom, entrepreneurship, and continuous development, key values at Rapid Circle.

Your responsibilities as a Site Reliability Engineer will include:
- Collaborating with development partners to shape system architecture, design, and implementations for improved reliability, performance, efficiency, and scalability.
- Monitoring and measuring key services, raising alerts as necessary.
- Automating deployment and configuration processes.
- Developing reliability tools and frameworks for use by all engineers.
- Participating in on-call duties for critical systems, leading incident response, and conducting post-mortem analysis and reviews.

Desired Skills:
- Good understanding of IAM (Identity and Access Management) in the cloud.
- Familiarity with DevOps and SAFe/Scrum methodologies.

Certifications:
- Must have: an active Kubernetes certification.
- Good to have: a cloud certification.

Key Skills:
- Expertise or deep working knowledge in Kubernetes and image building for cloud platform (AWS & Azure) security services.
- Proficiency in HashiCorp products such as Vault and Terraform.
- Expertise in coding infrastructure, automation, and orchestration.
- Working knowledge of Kubernetes, Terraform, Prometheus, Elastic, Jenkins, or similar tools.
- Proficiency in multiple cloud platforms, including AWS and Azure.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
Genpact is a global professional services and solutions firm with over 125,000 employees in more than 30 countries. Our team is characterized by curiosity, agility, and a commitment to creating lasting value for our clients. We are driven by the relentless pursuit of a world that works better for people. Our expertise lies in serving and transforming leading enterprises, including Fortune Global 500 companies, through deep business and industry knowledge, digital operations services, and proficiency in data, technology, and AI.

We are currently seeking applications for the position of Senior Principal Consultant – DevSecOps Senior Engineer. As a Senior Principal Consultant, you will be responsible for leading the quality assurance department within any industry. Your role involves ensuring that the team of QA engineers progresses smoothly through the project, resolving conflicts, reviewing schedules and plans, mitigating risks, checking quality at various phases, updating management, and fostering a challenging and motivating environment.

**Responsibilities:**
- Architect and design solutions that align with functional and non-functional requirements.
- Create and review architecture and solution design artifacts.
- Provide technical guidance and lead a group of architects towards a common goal.
- Ensure compliance with architectural standards, product-specific guidelines, and usability design standards.
- Analyze and operate at different levels of abstraction.
- Assess and incorporate suitable sizing of solutions, technology fit, and disaster recovery considerations.

**Qualifications:**

**Minimum Qualifications / Skills:**
- Bachelor's degree in computer science, engineering, or a related field.
- Azure certification, with multiple years of Azure architecture experience.
- Experience in setting up Azure Landing Zone and foundation components in a greenfield environment.
- Multiple years of experience in Azure DevOps, including creating and managing ADO YAML pipelines.
- Proficiency in using ADO YAML pipelines for infrastructure creation with Terraform and for deploying applications across Azure subscriptions.

**Preferred Qualifications / Skills:**
- Ability to work hands-on individually to demonstrate and deliver projects.
- Effective communication skills with customers and stakeholders.

This position is based in Noida, India, and is a full-time role. The ideal candidate is expected to have a bachelor's degree or equivalent qualification. If you are passionate about consulting and have the required skills and experience, we invite you to apply for this exciting opportunity.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
As a Cloud Environment Engineer, your responsibilities will include designing, developing, implementing, operating, improving, and debugging cloud environments in Google Cloud Platform (GCP), a Cloud Management Platform, and orchestration tools. You will be involved in engineering design evaluations for new environment builds, and you will architect, implement, and enhance possible automations for cloud environments. Your role will also entail recommending alterations to development and design to enhance the quality of products and procedures.

It will be your responsibility to implement industry-standard security practices during implementation and maintain them throughout the lifecycle. Additionally, you will focus on process analysis and design to identify technology-driven improvements in core enterprise processes. Keeping abreast of the market for cloud services, enterprise applications, business process management, advanced analytics, and integration software will be crucial. You will play a significant role in estimating work requirements to provide accurate project estimations to Technology Leads and Project Managers. Your contributions will be essential in building efficient programs and systems to assist clients in their digital transformation journey.

In terms of technical requirements, you should have expertise in cloud technologies such as Google Cloud practice, GCP administration, DevOps, Terraform, and Kubernetes. Understanding container technology and container orchestration platforms like Docker and Kubernetes will be an added advantage. Moreover, you are expected to have knowledge of best practices and market trends related to the cloud and the overall industry. Providing thought leadership through seminars and whitepapers, and mentoring the team to enhance competency, will be part of your additional responsibilities.

You will also be required to advise customer executives on their cloud strategy, roadmap improvements, alignment, and further enhancements. If you are passionate about leveraging your skills to help clients navigate their digital transformation journey effectively, this role is the perfect fit for you.
Posted 1 week ago
8.0 years
0 Lacs
India
Remote
Voice AI Scheduling (Scale-Ready, Multi-Tenant) — Remote

Company: Apex Dental Systems
Location: Remote (must overlap 7+ hours with 8am–5pm Pacific / America/Vancouver)
Type: Full-time engineer with ramp-up into CTO
Compensation:
- Engineer: $2,000 USD/month + 1% equity
- Upon promotion to CTO: $4,000 USD/month + 2% equity

About Us
Apex Dental Systems builds voice AI reception for dental/orthodontic clinics. We connect real phone calls (Retell AI + telephony) to booked appointments via NexHealth and, over time, direct PMS connectors. We're moving from pilot to scale across 50–100+ clinics with high reliability and tight cost control.

The Mission
Own the scale-ready backend platform: multi-tenant onboarding automation, secure configuration management, rate limits and retries, SLO-backed reliability, cost observability, and compliance (HIPAA/PIPEDA). Your work allows us to onboard dozens of clinics per week with minutes, not days, of setup.

Outcomes You'll Deliver in the First 4–6 Weeks
- Multi-tenant architecture with tenant isolation, role-based access control (RBAC), and per-clinic secrets (env-less runtime or AWS Secrets Manager).
- Onboarding automation that reduces per-clinic setup to ≤60 minutes: provider/location/appointment-type sync, ID mapping, test calls, and health checks.
- Hardened tool endpoints used by the voice agent (Retell function calling): availability_search, appointment_book, appointment_reschedule, appointment_cancel, patient_find_or_create, note_create, warm_transfer.
- Reliability controls: idempotency keys, timeouts, retries with backoff, circuit breakers; graceful fallbacks + warm transfer.
- Observability & SLOs: structured logs, metrics, tracing; dashboards for p50/p95 latency, error rates, booking success %, transfers, and cost per minute/call; alerts to Slack.
- Security & compliance: PHI minimization, at-rest and in-transit encryption, access logging, a data-retention policy, and BAA-aware configuration.
- Cost guardrails: per-tenant budget meters for voice minutes/LLM/TTS usage and anomaly alerts.

KPIs you'll move:
- Median tool-call latency < 800 ms (p95 < 1500 ms)
- ≥ 80% booking/reschedule success without human handoff (eligible calls)
- 99.9%+ middleware availability
- < 1% tool-level error rate (after retries)
- ≤ 60 min time to onboard a new clinic (target 30 min by week 6)

Responsibilities
- Design, implement, and document multi-tenant REST/JSON services consumed by the voice agent.
- Integrate NexHealth now; design extension points for direct PMS (OpenDental/Dentrix/Eaglesoft/Dolphin) later.
- Build sync jobs to keep providers/locations/appointment types up to date (with caching via Redis, invalidation, and backfills).
- Implement idempotent booking flows with conflict detection and safe retries; log every state transition.
- Stand up observability (metrics/logs/traces) and alerting; define SLOs/SLAs and on-call basics.
- Ship CI/CD with linting, tests (unit, contract, integration), and minimal load tests.
- Enforce secrets management, least-privilege IAM, and a clean audit trail.
- Partner with our conversation designer to refine tool schemas and edge-case flows (insurance screening, multi-location routing).
- Mentor a mid-level engineer and coordinate with ops for smooth rollouts.

Minimum Qualifications
- 5–8+ years building production backend systems (you've owned a system in prod).
- Expert in Node.js (TypeScript) or Python (FastAPI/Nest/Express).
- Deep experience with external API integrations (auth, pagination, rate limits, webhooks).
- Postgres (schema design, migrations) and Redis (caching, locks).
- Production reliability patterns: retries/backoff, timeouts, idempotency, circuit breakers.
- Observability: metrics, tracing, log correlation; incident triage.
- Security/compliance mindset; comfortable handling sensitive data flows.
- Strong written English; crisp architectural docs and PRs.

Nice-to-Have
- Retell AI (or a similar voice/LLM platform with function calling and barge-in), Twilio/SIP.
- NexHealth or other healthcare scheduling APIs; PMS/EHR familiarity.
- HIPAA/PIPEDA exposure, SOC 2-style controls.
- OpenTelemetry, Prometheus/Grafana, Sentry; AWS/GCP; Terraform; Docker/Kubernetes.
- High-volume, low-latency systems experience.

Our Stack (target)
- Runtime: Node.js (TypeScript) or Python (FastAPI)
- Data: Postgres, Redis
- Infra: AWS (ECS/EKS or Fargate), Terraform, GitHub Actions
- Integrations: Retell AI (voice), NexHealth (scheduling), Twilio/SIP (telephony)
- Observability: OpenTelemetry + Prometheus/Grafana, or cloud-provider equivalents

How We Work
- Remote-first; async-friendly; 4+ hours overlap with Pacific time.
- Code in company repos; NDAs/PIAs/BAAs, DCO/CLA, and strict access hygiene.
- We optimize for reliability and patient privacy over quick hacks.

Interview Process (fast, 7–10 days)
1. Intro (20–30 min): your background and past scale/reliability wins.
2. Take-home (90 min, paid for finalists): implement availability_search + appointment_book against a stubbed NexHealth-like API. Include idempotency keys, retries with backoff, timeouts, and basic tests. Provide a short runbook and a dashboard sketch for p95 latency and error-rate alerts.
3. Deep-dive (60 min): review your code; discuss multi-tenant design, secrets, SLOs, and cost control.
4. Final (30–45 min): collaboration and comms.

How to Apply
Email info@apexdentalsystems.com with the subject "Senior Backend — Scale-Ready Voice AI" and include:
- CV + GitHub/portfolio
- 5–10 lines on a system you made multi-tenant (what changed?)
- A time you prevented double bookings or handled idempotency at scale
- Your preferred stack (Node+TS or Python), availability, and comp expectations
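The reliability patterns this posting asks for (idempotency keys, timeouts, retries with backoff) compose naturally: the client generates one idempotency key per logical booking and reuses it on every retry, so the server can deduplicate. A minimal sketch in Python; the booking callable, error type, and backoff constants are illustrative assumptions, not part of the actual stack described above:

```python
import time
import uuid

class TransientError(Exception):
    """Stand-in for a retryable failure (timeout, 429, 5xx)."""

def book_with_retries(book_fn, payload, max_attempts=4, base_delay=0.05):
    """Call book_fn with a stable idempotency key, retrying transient errors."""
    # The SAME key is sent on every retry, so the server can deduplicate
    # and a retried request can never create a double booking.
    idempotency_key = str(uuid.uuid4())
    delay = base_delay
    for attempt in range(1, max_attempts + 1):
        try:
            return book_fn(payload, idempotency_key=idempotency_key)
        except TransientError:
            if attempt == max_attempts:
                raise
            time.sleep(delay)
            delay *= 2  # exponential backoff between attempts
```

In a real service this would sit behind a circuit breaker and use jittered delays; the sketch only shows the key-reuse and backoff mechanics.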
Posted 1 week ago
0 years
0 Lacs
India
On-site
You might be a fit if you have:
● 5+ yrs production ML / data-platform engineering (Python or Go/Kotlin).
● Deployed agentic or multi-agent systems (e.g., micro-policy nets, bandit ensembles) and reinforcement-learning pipelines at scale (ad budget, recommender, or game AI).
● Fluency with BigQuery / Snowflake SQL & ML, plus streaming (Kafka / Pub/Sub).
● Hands-on LLM fine-tuning using LoRA/QLoRA and proven prompt-engineering skills (system/assistant hierarchies, few-shot, prompt compression).
● Comfort running GPU & CPU model serving on GCP (Vertex AI, GKE, or bare-metal K8s).
● Solid causal-inference experience (CUPED, diff-in-diff, synthetic control, uplift).
● CI/CD, IaC (Terraform or Pulumi) & observability chops (Prometheus, Grafana).
● Bias toward shipping working software over polishing research papers.

Bonus points for:
● Postal/geo datasets, ad-tech, or martech domain exposure.
● Packaging RL models as secure microservices.
● VPC-SC, NIST, or SOC 2 controls in a regulated data environment.

Why join:
● Green-field impact – architect the learning stack from scratch.
● Moat-worthy data – a 260M+ US consumer graph tying offline & online behavior.
● Tight feedback loops – your models go live in weeks, optimizing large amounts of marketing spend daily.
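The simplest building block of the bandit ensembles mentioned above is an epsilon-greedy policy over a set of arms (for example, competing ad-budget allocations). A toy sketch; the arm semantics, epsilon value, and reward model are illustrative assumptions rather than this team's actual design:

```python
import random

class EpsilonGreedyBandit:
    """Toy epsilon-greedy multi-armed bandit with incremental value estimates."""

    def __init__(self, n_arms, epsilon=0.1, seed=0):
        self.epsilon = epsilon
        self.counts = [0] * n_arms      # pulls per arm
        self.values = [0.0] * n_arms    # running mean reward per arm
        self.rng = random.Random(seed)

    def select_arm(self):
        if self.rng.random() < self.epsilon:
            return self.rng.randrange(len(self.counts))  # explore
        # exploit: arm with the highest estimated mean reward
        return max(range(len(self.counts)), key=lambda a: self.values[a])

    def update(self, arm, reward):
        self.counts[arm] += 1
        # incremental mean: old + (reward - old) / n
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]
```

Production systems would typically prefer Thompson sampling or contextual bandits with delayed-reward handling; epsilon-greedy just makes the explore/exploit trade-off concrete.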
Posted 1 week ago
15.0 - 19.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
As the Senior Manager of Platform Automation and Site Reliability Engineering (SRE) at MX Technology, Inc., you will play a pivotal role in leading and scaling our platform automation and SRE initiatives. Your responsibilities will involve overseeing the development and management of our platforms and infrastructure to ensure reliability, scalability, and performance while fostering a culture of continuous improvement and operational excellence.

Your leadership and strategic vision will guide a team of platform engineers, SREs, and automation experts, providing mentorship, guidance, and career development. You will collaborate with cross-functional teams to drive the adoption of DevOps best practices and align them with business objectives.

In terms of platform automation, you will design, implement, and manage automated solutions for infrastructure provisioning, configuration management, and monitoring. Your role will also involve leading efforts to automate manual processes, reduce technical debt, and optimize Kubernetes clusters for high availability and resource efficiency.

For Site Reliability Engineering (SRE), you will establish and maintain best practices focusing on the reliability, performance, and scalability of production systems. This includes defining and monitoring SLOs/SLAs, resolving performance bottlenecks, and leading incident response processes. Additionally, you will enhance and maintain CI/CD pipelines for rapid and reliable software delivery, evaluate and implement tools to improve developer workflows, and collaborate with development teams to ensure seamless integration of tooling and automation.

Your qualifications should include a Bachelor's degree in Computer Science, Engineering, or a related field, with 15+ years of experience in platform engineering, DevOps, or SRE roles.
Strong expertise in platform automation, Kubernetes, CI/CD, and cloud-native technologies is essential, along with a proven track record of building and managing high-performing teams in a fast-paced environment. To excel in this role, you should possess a deep understanding of DevOps principles, proficiency in scripting and programming languages, expertise in Kubernetes, Docker, Terraform, Ansible, Jenkins, and strong problem-solving skills. Excellent communication and collaboration skills are crucial, as well as the ability to work effectively across teams and departments. Preferred qualifications include certifications in Kubernetes, AWS, or other relevant technologies, experience with observability tools, knowledge of security best practices in DevOps, and familiarity with cloud-native architectures and microservices. At MX, we value candidates who drive results and achieve successful outcomes. Our hybrid work arrangement allows for a mix of local and remote work, with remote team members traveling to the office four times a year at MX's expense. Our office space offers various amenities, including onsite perks, company-paid meals, massage therapists, sports simulator, gym, mother's lounge, and meditation room, for both local and remote employees.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Bhubaneswar
On-site
You will be responsible for deploying new Linux (RHEL/CentOS) systems, providing overall support and automation of the Linux server platform, and related Linux-based services. This includes troubleshooting issues, defining disaster recovery plans, and establishing procedures and documentation. Additionally, you will build and maintain Red Hat infrastructure, as well as automation around Linux host configuration and software package creation and deployment for various applications. As part of your role, you will develop automation scripts for systems administration, deployment, and configuration of Linux servers and developer desktops, and automate operational tasks. You will act as a senior resource for the configuration and lifecycle of the Linux workstation and server OS environments, including automation and customization. Furthermore, you will be a key contributor in a DevOps-oriented team to facilitate the provisioning of custom application servers and the continuous lifecycle of Linux server-based applications running in a hybrid cloud infrastructure. You will also be responsible for designing, implementing, and automating build, release, deploy, monitoring, and configuration processes. Experience in AWS implementation is required for this role. Please note that candidates from Odisha are preferred for this position.
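The configuration-automation work described here often reduces to comparing a desired state against what is actually on a host and reporting the difference. A hypothetical stdlib-Python sketch; real tooling would parse sshd_config or query configuration-management facts, and the keys below are illustrative only:

```python
def detect_drift(desired, actual):
    """Return settings that differ from the desired state (configuration drift)."""
    drift = {}
    for key, want in desired.items():
        have = actual.get(key)
        if have != want:
            drift[key] = {"want": want, "have": have}
    return drift

# Hypothetical sshd settings: a desired baseline vs. values read from a host.
desired = {"PermitRootLogin": "no", "PasswordAuthentication": "no", "MaxAuthTries": "3"}
actual  = {"PermitRootLogin": "yes", "PasswordAuthentication": "no"}
print(detect_drift(desired, actual))
```

In practice this check runs on a schedule, and remediation either reapplies the baseline automatically or opens a ticket for review.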
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
Haryana
On-site
As a Full Stack Software Engineer at AgriChain, you will play a vital role in revolutionizing the agri-tech industry by developing fast and scalable software solutions using cutting-edge technologies. Your passion for crafting clean and reusable code will drive you to create impactful solutions for a global user base. Your responsibilities will include building high-performance software using Django and Python for backend operations, designing visually appealing user interfaces with React.js and material design, and establishing robust database models with Postgres and ORM. By automating tests and working in Agile sprints, you will ensure the smooth functioning of the software and contribute to a collaborative and energetic team environment. To excel in this role, you should possess a problem-solving mindset, have at least 2 years of relevant experience, and demonstrate expertise in Python, Django, Postgres, React.js, HTML, and CSS. Your ability to design for scalability, prioritize security and accessibility, and conduct unit testing will be key in delivering excellence. Proficiency in Git, AWS, Linux Servers, Docker, and Terraform will be advantageous, along with strong communication skills to articulate technical concepts effectively. Join us at AgriChain and be part of a dynamic team dedicated to innovation and excellence in software development.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Hyderabad, Telangana
On-site
As a Mid-Level Data Engineer at our organization, you will be responsible for designing, developing, and maintaining scalable data pipelines using technologies such as Python and SQL. Your role will involve collaborating with data scientists, analysts, and business stakeholders to ensure that our data infrastructure meets the evolving needs of the organization. Your key responsibilities will include developing and implementing data quality checks, automating data ingestion processes from various sources to Snowflake, and utilizing cloud platforms like AWS, Azure, and GCP to manage data infrastructure. You will also work with Infrastructure as Code (IaC) tools such as Terraform for provisioning cloud resources. In this role, you will collaborate closely with data scientists and analysts to understand data requirements and translate them into technical solutions. Additionally, you will be expected to version control code using Git, document data pipelines and processes for future reference, and engage in knowledge sharing within the team. To qualify for this position, you should have a minimum of 3 years of experience in data engineering or a related field. You must have proven expertise in designing, developing, and deploying data pipelines, along with hands-on experience in Snowflake or Databricks, Python or PySpark, and SQL. Experience with data warehousing, data modeling concepts, and cloud platforms is highly desirable. If you possess working knowledge of Terraform, Infrastructure as Code (IaC) principles, and Git for version control, along with excellent communication, collaboration, problem-solving, and analytical skills, you are an ideal candidate for this role. Experience with data orchestration tools such as Airflow or Luigi and data visualization tools like Tableau or Power BI is a bonus. Join us in building the infrastructure that drives data-driven decision making and contribute to our team's success!
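The data-quality checks mentioned above can be as simple as row-level validations run before loading into the warehouse: required fields present, numeric values in range. A hedged stdlib-Python sketch with invented records and column names:

```python
def run_quality_checks(rows, required, ranges):
    """Validate rows before loading: required fields present, numerics in range.
    Returns a list of (row_index, column, failure_kind) tuples."""
    failures = []
    for i, row in enumerate(rows):
        for col in required:
            if row.get(col) in (None, ""):
                failures.append((i, col, "missing"))
        for col, (lo, hi) in ranges.items():
            val = row.get(col)
            if val is not None and not (lo <= val <= hi):
                failures.append((i, col, "out_of_range"))
    return failures

# Hypothetical order records checked before ingestion.
rows = [
    {"order_id": "A1", "amount": 120.0},
    {"order_id": "", "amount": -5.0},
]
print(run_quality_checks(rows, required=["order_id"], ranges={"amount": (0, 10_000)}))
# [(1, 'order_id', 'missing'), (1, 'amount', 'out_of_range')]
```

A pipeline would typically quarantine failing rows rather than abort the whole load, and log the failure tuples for triage.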
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Pune, Maharashtra
On-site
As a .NET Developer Lead with over 8 years of experience, you will be responsible for leading a team of developers in Pune. Your expertise should include 7+ years of experience as a .NET developer and at least 3 years of full stack .NET programming. You should have in-depth knowledge of technologies such as .NET Core, Angular 12+, microservices, and Terraform. Your role will involve developing REST APIs and microservices while following Agile and Scrum methodologies and the test-driven development process. You must be well-versed in working with both relational databases like SQL Server and non-relational databases like CosmosDB. Excellent communication skills in English are essential for this role, as you will be collaborating with various stakeholders. A passion for learning, continuous improvement, and working in a team environment is highly valued in our organization. If you are interested in this opportunity, please visit our career page for more details: https://careers.nitorinfotech.com/
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
This is a mission-critical role within Setu, a small, diverse, agile, and high-performance engineering team. As a Backend Engineer at Setu, you will play a pivotal role in various backend systems, from quick prototyping of proof-of-concept features to implementation of stable production code. You will be responsible for structuring and writing code that is easy to read and adheres to common principles and patterns, enhancing the overall efficiency of the team. The ideal candidate will have exceptional coding skills and a track record of producing clean, efficient, and maintainable code. Your role will involve orchestration and integration with the larger engineering team to seamlessly integrate your work into the ecosystem, contribute to CI/CD, testing, QA, and automation pipelines. Collaborating with DevOps, you will optimize cloud infrastructure and resource management to ensure smooth operations. Ownership is a key aspect of this role, where you will be responsible for end-to-end services and specialized components needed for various projects, ensuring their successful transition from the prototypal stage to production-ready. This may involve tasks ranging from document parsing to implementing solutions using deep learning techniques. In addition to technical responsibilities, you will also be tasked with creating well-crafted technical content such as articles, samples, whitepapers, and training materials. Your empathy towards building products and services for developers and customers, along with a solid understanding of infrastructure, will be instrumental in your success at Setu. You should be open to learning and adapting to new tools and technologies as the role evolves. The tech stack at Setu includes Backend technologies such as Python3, GoLang, Flask/FastAPI, and OpenAPI, while the Infrastructure is supported by AWS and Terraform. 
Working at Setu offers a unique opportunity to engage closely with the founding team that has built and scaled public infrastructure like UPI, GST, Aadhaar, and more. In addition to impactful work, Setu prioritizes employee growth by providing a fully stocked library, unlimited book budget, conference tickets, industry events, learning sessions with internal and external experts, and a learning and development allowance for subscriptions, courses, certifications, and more. Comprehensive health insurance, personal accident, and term life insurance, access to mental health counsellors, along with other benefits, ensure a supportive and inclusive work environment. Setu's culture code, "How We Move," emphasizes key behaviors including deciding fast and delivering right, mastering tasks with pride, leading with integrity, taking ownership, empowering others with trust and respect, innovating for the customer, and beyond. Join Setu if you aspire to be part of a team dedicated to building infrastructure that directly impacts financial inclusion and improves lives through an audacious mission and a commitment to craftsmanship in code.
Posted 1 week ago
0.0 - 7.0 years
0 Lacs
Bengaluru, Karnataka
On-site
GE Healthcare Healthcare Patient Care Solutions Category Digital Technology / IT Mid-Career Job Id R4027485 Relocation Assistance Yes Location Bengaluru, Karnataka, India, 560066 Job Description Summary The Senior Site Reliability Engineer will be responsible for performance and availability of Compute and Network infrastructure consumed by all business segments. The Site Reliability teams are composed of highly talented individuals obsessively focused on availability through operational excellence. The ideal individual is relentlessly technical, passionate about automating everything, and totally committed to delivering amazing customer experiences. GE HealthCare is a leading global medical technology and digital solutions innovator. Our purpose is to create a world where healthcare has no limits. Unlock your ambition, turn ideas into world-changing realities, and join an organization where every voice makes a difference, and every difference builds a healthier world. Job Description Roles and Responsibilities In this role, you will:
Establish performance baselines and capacity thresholds, correlate events, and define monitoring/alerting criteria
Develop automated solutions to address potential problems before they result in a service interruption
Provide impact assessment and mitigation plans for changes going into the production environment
Investigate root causes of severe and systemic outages, identify corrective actions, and apply them across the enterprise
Develop availability measures that align with consumer experience to accurately assess the usability of crucial services
Build capacity models to baseline transactional load against resource performance, and leverage data to predict overall system capacity while automating load placement to avoid outages
Identify thresholds for all critical links in the data path to quickly isolate where imbalances may result in potential outages
Analyze failure points in services to model risk level and resolution steps if failure occurs; assist in driving architecture enhancements into systems to mitigate potential failure points
Programmatically monitor for and remediate configuration drift of critical devices
Develop response plans for potential failure points and evaluate their effectiveness during planned tests
Perform comprehensive operational health checks of entire services to identify areas of concern and track activities to drive improvements at all levels of the architecture
Deliver well-written Infrastructure as Code (Terraform or CloudFormation), review peers' code for consistency, write test cases for the code, and review automation to enhance capabilities while thinking holistically about testing solutions; experience working with ALM tools would be a plus
Provide technical coaching and direction to more junior teammates
Required Qualifications: Bachelor's Degree in Computer Science or "STEM" Majors (Science, Technology, Engineering and Math) with at least 5-7 years of experience
Desired Qualifications:
Excellent knowledge of common operating systems (Unix/Linux, Windows)
Excellent knowledge of TCP/IP networking and inter-networking technologies (routing/switching, proxy, firewall, load balancing, etc.)
Demonstrated experience scripting or developing software and services for the cloud (Ruby, Python, Go, Java, Node.js, .NET, etc.)
Extensive experience with infrastructure automation
Experience using an automated configuration management system (Terraform, Chef, Puppet, Ansible, Salt, etc.)
Experience deploying and managing infrastructure on public clouds such as AWS or Azure
Experience with configuring, customizing, and extending monitoring tools (Datadog, Sensu, Grafana, Splunk, etc.)
We expect all employees to live and breathe our behaviours: to act with humility and build trust; lead with transparency; deliver with focus, and drive ownership – always with unyielding integrity.
Our total rewards are designed to unlock your ambition by giving you the boost and flexibility you need to turn your ideas into world-changing realities. Our salary and benefits are everything you’d expect from an organization with global strength and scale, and you’ll be surrounded by career opportunities in a culture that fosters care, collaboration and support. Inclusion and Diversity GE Healthcare is an Equal Opportunity Employer where inclusion matters. Employment decisions are made without regard to race, color, religion, national or ethnic origin, sex, sexual orientation, gender identity or expression, age, disability, protected veteran status or other characteristics protected by law. #LI-Hybrid #LI-MP2 Additional Information Relocation Assistance Provided: Yes
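The baseline-and-threshold work in this role (establish a performance baseline, then define monitoring/alerting criteria) is often sketched as a mean-plus-k-sigma rule over historical samples. A minimal stdlib-Python illustration with invented latency numbers; production systems would use rolling windows and percentiles rather than this static toy:

```python
from statistics import mean, stdev

def alert_threshold(samples, k=3.0):
    """Baseline mean plus k standard deviations, a common static alert criterion."""
    return mean(samples) + k * stdev(samples)

def breaches(samples, threshold):
    """Samples that exceed the alerting threshold."""
    return [s for s in samples if s > threshold]

# Hypothetical latency baseline (ms) from a quiet week, then a noisier day.
baseline = [102, 98, 101, 99, 100, 103, 97]
threshold = alert_threshold(baseline)
today = [101, 99, 140, 100, 152]
print(round(threshold, 1), breaches(today, threshold))  # 106.5 [140, 152]
```

The value of k trades false alarms against missed incidents; tightening it is exactly the kind of tuning this posting calls "defining monitoring/alerting criteria".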
Posted 1 week ago
0.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Job Information Date Opened 07/30/2025 Job Type Full time Industry IT Services City Bangalore North State/Province Karnataka Country India Zip/Postal Code 560001 Job Description / Requirements:
Proficiency in Java or Python (TypeScript is a plus).
Hands-on experience with Kubernetes and Docker.
Experience with cloud platforms (AWS, GCP, or Azure).
Familiarity with CI/CD pipelines (e.g., GitHub Actions, Jenkins, ArgoCD).
Experience with monitoring and alerting tools (e.g., Prometheus, Grafana, ELK).
Working knowledge of Linux systems and networking.
Experience with infrastructure-as-code tools (e.g., Terraform, Helm).
Posted 1 week ago
0.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Job Information Date Opened 07/30/2025 Job Type Full time Industry IT Services City Bangalore North State/Province Karnataka Country India Zip/Postal Code 560001 Job Description Sr. AWS DevOps Engineer
Cloud: AWS (expert) + Azure (familiar)
o AWS expertise, hands-on (VPC, EKS, storage, networking, RDS, Redis, ELK, ACM, Vault, KMS, RabbitMQ …)
o Azure familiarity (AKS, storage, Cosmos, Postgres, Redis, Key Vault), RabbitMQ, KeyCloak…
OS: Windows, Linux
Programming: PowerShell, Python, Shell
Understanding of C#, JavaScript – languages, build and deployment process
SCM and Build Orchestrator: GitLab CI/CD, GitHub Actions
Artifact Management: Artifactory
Quality: SonarQube, MegaLinter, MSTest
Automation Tools: Terraform, Ansible, Chef, Vault
Containers/Virtualization: Docker/Docker Compose, K8s
Process: GitOps, GitFlow, Branching, Versioning, Tagging, Release
Posted 1 week ago
0.0 - 12.0 years
0 Lacs
Delhi, Delhi
On-site
About us Bain & Company is a global management consulting firm that helps the world’s most ambitious change makers define the future. Across 65 offices in 40 countries, we work alongside our clients as one team with a shared ambition to achieve extraordinary results, outperform the competition and redefine industries. Since our founding in 1973, we have measured our success by the success of our clients, and we proudly maintain the highest level of client advocacy in the industry. In 2004, the firm established its presence in the Indian market by opening the Bain Capability Center (BCC) in New Delhi. The BCC is now known as BCN (Bain Capability Network) with its nodes across various geographies. The BCN is an integral part and the largest unit of Expert Client Delivery (ECD). ECD plays a critical role as it adds value to Bain's case teams globally by supporting them with analytics and research solutioning across all industries, specific domains for corporate cases, client development, private equity diligence or Bain intellectual property. The BCN comprises Consulting Services, Knowledge Services and Shared Services. Who you will work with Pyxis leverages a broad portfolio of 50+ alternative datasets to provide real-time market intelligence and customer insights through a unique business model that enables us to provide our clients with competitive intelligence unrivaled in the market today. We provide insights and data via custom one-time projects or ongoing subscriptions to data feeds and visualization tools. We also offer custom data and analytics projects to suit our clients’ needs. Pyxis can help teams answer core questions about market dynamics, products, customer behavior, and ad spending on Amazon with a focus on providing our data and insights to clients in the way that best suits their needs.
Refer to: www.pyxisbybain.com What you’ll do:
Setting up tools and required infrastructure
Defining and setting development, test, release, update, and support processes for DevOps operation
Reviewing, verifying, and validating the software code developed in the project
Troubleshooting and fixing code bugs
Monitoring the processes during the entire lifecycle for adherence, and updating or creating new processes for improvement and minimizing waste
Encouraging and building automated processes wherever possible
Identifying and deploying cybersecurity measures by continuously performing vulnerability assessment and risk management
Incident management and root cause analysis
Selecting and deploying appropriate CI/CD tools
Striving for continuous improvement and building continuous integration, continuous delivery, and continuous deployment pipelines (CI/CD)
Mentoring and guiding the team members
Managing periodic reporting on progress to the management
About you:
A Bachelor’s or Master’s degree in Computer Science or related field
4+ years of software development experience with 3+ years as a DevOps engineer
High proficiency in cloud management (AWS heavily preferred) including networking, API gateways, infra deployment automation, and cloud ops
Knowledge of DevOps/code/infra management tools (GitHub, SonarQube, Snyk, AWS X-Ray, Docker, Datadog, and containerization)
Infra automation using Terraform, environment creation and management, containerization using Docker
Proficiency with Python
Disaster recovery, implementation of high-availability apps/infra, business continuity planning
What makes us a great place to work We are proud to be consistently recognized as one of the world's best places to work, a champion of diversity and a model of social responsibility.
We are currently ranked the #1 consulting firm on Glassdoor’s Best Places to Work list, and we have maintained a spot in the top four on Glassdoor's list for the last 12 years. We believe that diversity, inclusion and collaboration are key to building extraordinary teams. We hire people with exceptional talents, abilities and potential, then create an environment where you can become the best version of yourself and thrive both professionally and personally. We are publicly recognized by external parties such as Fortune, Vault, Mogul, Working Mother, Glassdoor and the Human Rights Campaign for being a great place to work for diversity and inclusion, women, LGBTQ and parents.
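The CI/CD responsibilities in this posting boil down to running ordered, fail-fast stages (lint, test, deploy) and stopping at the first failure. A toy stdlib-Python sketch; the stage names and lambdas are stand-ins, not a real pipeline definition:

```python
def run_pipeline(stages):
    """Run named stages in order; stop at the first failure (fail-fast CI)."""
    results = []
    for name, step in stages:
        ok = step()
        results.append((name, ok))
        if not ok:
            break
    return results

# Hypothetical stages standing in for lint, test, and deploy jobs.
stages = [
    ("lint", lambda: True),
    ("test", lambda: False),   # a failing test run halts the pipeline
    ("deploy", lambda: True),  # never reached
]
print(run_pipeline(stages))  # [('lint', True), ('test', False)]
```

Real orchestrators (Jenkins, GitHub Actions) add parallelism, retries, and artifacts, but the fail-fast ordering is the same contract.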
Posted 1 week ago
0.0 - 10.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
You deserve to do what you love, and love what you do – a career that works as hard for you as you do. At Fiserv, we are more than 40,000 #FiservProud innovators delivering superior value for our clients through leading technology, targeted innovation and excellence in everything we do. You have choices – if you strive to be a part of a team driven to create with purpose, now is your chance to Find your Forward with Fiserv. Responsibilities Requisition ID R-10367193 Date posted 07/30/2025 End Date 08/11/2025 City Chennai State/Region Tamil Nadu Country India Additional Locations Bengaluru, Karnataka Location Type Onsite Calling all innovators – find your future at Fiserv. We’re Fiserv, a global leader in Fintech and payments, and we move money and information in a way that moves the world. We connect financial institutions, corporations, merchants, and consumers to one another millions of times a day – quickly, reliably, and securely. Any time you swipe your credit card, pay through a mobile app, or withdraw money from the bank, we’re involved. If you want to make an impact on a global scale, come make a difference at Fiserv. Job Title Tech Lead, Solutions Architecture What does a successful DevOps Engineer do? A successful DevOps Engineer in our BFSI (Banking, Financial Services, and Insurance) Fintech IT organization is responsible for overseeing multiple technical projects and ensuring the delivery of high-quality, scalable, and secure solutions. They play a critical role in shaping the technical strategy, fostering a collaborative team environment, and driving innovation. Key responsibilities include: Infrastructure as Code (IaC): Use AWS CloudFormation or Terraform to provision and manage infrastructure consistently. CI/CD Pipelines: Build automated pipelines using AWS CodePipeline, CodeBuild, and CodeDeploy for seamless integration and delivery.
Monitoring & Logging: Implement observability with AWS CloudWatch, X-Ray, and CloudTrail to track performance and security events. Security & Compliance: Manage IAM roles, enforce encryption, and integrate AWS Config and Security Hub for governance. Containerization: Deploy and manage containers using Amazon ECS, EKS, or Fargate. Serverless Architecture: Leverage AWS Lambda, API Gateway, and DynamoDB for lightweight, scalable solutions. Cost Optimization: Use AWS Trusted Advisor and Cost Explorer to monitor usage and reduce unnecessary spend. What you will need to have: Education: Bachelor's degree in a related field. Mandatory AWS Solution Architect Professional Certification Experience: 8 to 10 years of experience in a DevOps Engineer role, with significant experience in the BFSI or fintech sector and a proven track record in managing teams and leading technical projects. Proficiency in scripting languages (Python, Bash) Experience with containerization (Docker, Kubernetes) Familiarity with Infrastructure as Code (IaC) Strong understanding of AWS services and architecture Knowledge of DevOps tools (Git, Jenkins, CodeDeploy) What would be great to have (tools & services by category):
Automation: CloudFormation, Terraform, Ansible
CI/CD: CodePipeline, Jenkins, GitHub Actions
Monitoring: CloudWatch, X-Ray, ELK Stack, Prometheus
Security: IAM, KMS, AWS WAF, GuardDuty, Security Hub
Containers: Docker, ECS, EKS, Fargate
Serverless: Lambda, API Gateway, Step Functions
Storage & Compute: EC2, S3, RDS, Auto Scaling
Thank you for considering employment with Fiserv. Please: Apply using your legal name. Complete the step-by-step profile and attach your resume (either is acceptable, both are preferable). Our commitment to Diversity and Inclusion: Fiserv is proud to be an Equal Opportunity Employer.
All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, gender, gender identity, sexual orientation, age, disability, protected veteran status, or any other category protected by law. Note to agencies: Fiserv does not accept resume submissions from agencies outside of existing agreements. Please do not send resumes to Fiserv associates. Fiserv is not responsible for any fees associated with unsolicited resume submissions. Warning about fake job posts: Please be aware of fraudulent job postings that are not affiliated with Fiserv. Fraudulent job postings may be used by cyber criminals to target your personally identifiable information and/or to steal money or financial information. Any communications from a Fiserv representative will come from a legitimate Fiserv email address.
Posted 1 week ago
3.0 - 1.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Transforming the Future of Enterprise Planning At o9, our mission is to be the Most Value-Creating Platform for enterprises by transforming decision-making through our AI-first approach. By integrating siloed planning capabilities and capturing millions—even billions—in value leakage, we help businesses plan smarter and faster. This not only enhances operational efficiency but also reduces waste, leading to better outcomes for both businesses and the planet. Global leaders like Google, PepsiCo, Walmart, T-Mobile, AB InBev, and Starbucks trust o9 to optimize their supply chains. Job Title: DevOps Engineer - AI, R&D Location: Bengaluru, Karnataka, India (hybrid) About o9 Solutions: o9 Solutions is a leading enterprise AI software platform provider for transforming planning and decision-making capabilities. We are building the next generation of AI-powered solutions to help businesses optimize their operations and drive innovation. Our Next-Gen AI R&D team is at the forefront of this mission, pushing the boundaries of what's possible with artificial intelligence. About the Role... We are looking for a highly skilled and motivated DevOps Engineer to join our Next-Gen AI R&D team. In this role, you will be instrumental in developing and implementing MLOps strategies for Generative AI models, designing and managing CI/CD pipelines for ML workflows, and ensuring the robustness, scalability, and reliability of our AI solutions in production environments. What you will do in this role: Develop and implement MLOps strategies tailored for Generative AI models to ensure robustness, scalability, and reliability. Design and manage CI/CD pipelines specialized for ML workflows, including the deployment of generative models such as GANs, VAEs, and Transformers. Monitor and optimize the performance of AI models in production, employing tools and techniques for continuous validation, retraining, and A/B testing. 
Collaborate with data scientists and ML researchers to understand model requirements and translate them into scalable operational frameworks. Implement best practices for version control, containerization, infrastructure automation, and orchestration using industry-standard tools (e.g., Docker, Kubernetes). Ensure compliance with data privacy regulations and company policies during model deployment and operation. Troubleshoot and resolve issues related to ML model serving, data anomalies, and infrastructure performance. Stay up-to-date with the latest developments in MLOps and Generative AI, bringing innovative solutions to enhance our AI capabilities. What you'll have: Must Have: Minimum 3 years of hands-on experience developing and deploying AI models in production environments with 1 year of experience in developing proofs of concept and prototypes. Strong background in software development, with experience in building and maintaining scalable, distributed systems. Strong programming skills in languages like Python and familiarity with ML frameworks and libraries (e.g., TensorFlow, PyTorch). Knowledge of containerization and orchestration tools like Docker and Kubernetes. Proficiency with MLOps tools such as MLflow, Kubeflow, Airflow, or similar for managing machine learning workflows and lifecycle. Practical understanding of generative AI frameworks (e.g., HuggingFace Transformers, OpenAI GPT, DALL-E). Expertise in containerization technologies like Docker and orchestration tools such as Kubernetes for scalable model deployment. Expertise in MLOps and LLMOps practices, including CI/CD for ML models. Nice to Have: Familiarity with cloud platforms (AWS, GCP, Azure) and their ML/AI service offerings. Experience with continuous integration and delivery tools such as Jenkins, GitLab CI/CD, or CircleCI. Experience with infrastructure as code tools like Terraform or CloudFormation.
Experience with advanced GenAI applications such as natural language generation, image synthesis, and creative AI. Familiarity with experiment tracking and model registry tools. Knowledge of high-performance computing and parallel processing techniques. Contributions to open-source MLOps or GenAI projects.

What we'll do for you:
- Flat organization: a very strong entrepreneurial culture (and no corporate politics).
- Great people and unlimited fun at work.
- The chance to really make a difference in a scale-up environment.
- Support network: work with a team you can learn from every day.
- Diversity: we pride ourselves on our international working environment.
- AI is firmly on every CEO's agenda; o9 @ Davos & Reflections: https://o9solutions.com/articles/why-ai-is-topping-the-ceo-agenda/
- Work-Life Balance: https://youtu.be/IHSZeUPATBA?feature=shared
- Feel part of a team: https://youtu.be/QbjtgaCyhes?feature=shared

How the process works:
- Respond with your interest to us.
- We'll contact you via video call or phone call, whichever you prefer, with the further schedule.
- During the interview phase, you will meet with the technical panel for 60 minutes.
- We will contact you after the interview to let you know if we'd like to progress your application.
- There will be 2-3 rounds of technical discussion followed by a managerial round.
- We will let you know if you're the successful candidate. Good luck!

Why Join o9 Solutions?
At o9, you'll be at the forefront of AI innovation, working with a dynamic team that's shaping the future of enterprise solutions. We offer a stimulating and rewarding environment where your contributions directly impact cutting-edge projects. You'll gain invaluable experience with the latest AI technologies and significantly grow your skills. Join us and be a key player in building the next generation of intelligent solutions that truly transform businesses!

More about us…
At o9, transparency and open communication are at the core of our culture. Collaboration thrives across all levels; hierarchy, distance, and function never limit innovation or teamwork. Beyond work, we encourage volunteering, social impact initiatives, and diverse cultural celebrations. With a $3.7 billion valuation and a global presence across Dallas, Amsterdam, Barcelona, Madrid, London, Paris, Tokyo, Seoul, and Munich, o9 is among the fastest-growing technology companies in the world. Through our aim10x vision, we are committed to AI-powered management, driving 10x improvements in enterprise decision-making. Our Enterprise Knowledge Graph enables businesses to anticipate risks, adapt to market shifts, and gain real-time visibility. By automating millions of decisions and reducing manual interventions by up to 90%, we empower enterprises to drive profitable growth, reduce inefficiencies, and create lasting value. o9 is an equal-opportunity employer that values diversity and inclusion. We welcome applicants from all backgrounds and ensure a fair and unbiased hiring process. Join us as we continue our growth journey!
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
Pune, Maharashtra
On-site
As a System Engineer, you will be responsible for designing and configuring Linux distributions and Windows servers, with a minimum of 2 years of experience in the field. Your role will involve system customization and development, as well as setting up cloud platforms in Azure. You should be able to communicate the value and progress of projects to stakeholders effectively, and be proficient in server-side scripting languages like Python. Experience in DevOps and CI/CD techniques, including Git, Jenkins, Bamboo, and SonarQube, is required for this role. Familiarity with infrastructure automation tools such as Ansible and Terraform will be advantageous, and experience with containerization tools like Docker and Kubernetes orchestration is considered a plus. Candidates with hands-on experience in the Azure cloud platform and proficiency in Python will be given preference for this position.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
Genpact is a global professional services and solutions firm that aims to deliver outcomes shaping the future. With over 125,000 employees in more than 30 countries, we are driven by curiosity, entrepreneurial agility, and the desire to create lasting value for our clients. Our purpose, the relentless pursuit of a world that works better for people, drives us to serve and transform leading enterprises, including the Fortune Global 500, leveraging our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI. We are currently seeking applications for the position of Senior Principal Consultant - DevSecOps Senior Engineer. As a Senior Principal Consultant, your primary responsibility will be to lead the quality assurance department. You will ensure that the team of QA engineers stays on track throughout the project, resolve conflicts, review schedules and plans, mitigate risks, check quality in phases, update management, and foster a challenging and motivating environment.

Your responsibilities will include:
- Architecting and designing solutions that meet both functional and non-functional requirements.
- Creating and reviewing architecture and solution design artifacts.
- Providing technical direction and leading a group of one or more architects towards a common goal.
- Enforcing adherence to architectural standards, globally followed product-specific guidelines, usability design standards, etc.
- Analyzing and operating at various levels of abstraction.
- Ensuring suitable sizing of solutions, technology fit, and disaster recovery are assessed and accounted for.

Qualifications we seek in you!
Minimum Qualifications / Skills:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Azure certification with multiple years of Azure architecture experience.
- Experience in setting up an Azure landing zone and foundation components in a greenfield environment from scratch.
- Multiple years of Azure DevOps experience creating and managing ADO YAML pipelines.
- Experience using ADO YAML pipelines to create infrastructure with Terraform and deploy applications into multiple Azure subscriptions.

Preferred Qualifications / Skills:
- Ability to work hands-on individually to demonstrate and deliver projects.
- Effective communication with customers and managing stakeholder expectations.

This position is based in Noida, India, and is a full-time role. The education level required is a Bachelor's degree or equivalent. The job posting date is Oct 4, 2024, and the unposting date is ongoing. The primary job category is Full Time Consulting.
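The ADO YAML skill described above (provisioning infrastructure with Terraform, then deploying into an Azure subscription) can be sketched roughly as follows. This is a minimal, hypothetical example, not part of the role description: the service connection name, resource group, app name, and directory layout are placeholders, and the deploy step assumes the standard `AzureCLI@2` built-in task.

```yaml
# Hypothetical two-stage Azure DevOps pipeline:
# stage 1 provisions infrastructure with Terraform,
# stage 2 deploys an app via a subscription-scoped service connection.
trigger:
  branches:
    include: [main]

pool:
  vmImage: ubuntu-latest

stages:
  - stage: Provision
    jobs:
      - job: Terraform
        steps:
          - script: |
              terraform init -backend-config="key=app.tfstate"
              terraform plan -out=tfplan
              terraform apply -auto-approve tfplan
            displayName: Terraform init, plan, apply
            workingDirectory: infra/        # placeholder path to .tf files

  - stage: Deploy
    dependsOn: Provision
    jobs:
      - job: DeployApp
        steps:
          - task: AzureCLI@2
            inputs:
              azureSubscription: my-service-connection   # placeholder connection name
              scriptType: bash
              scriptLocation: inlineScript
              inlineScript: >
                az webapp deploy --name myapp
                --resource-group myrg --src-path app.zip
```

Targeting multiple subscriptions, as the posting mentions, would typically mean parameterizing the service connection (one per subscription) and repeating the deploy stage per environment.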
Posted 1 week ago