6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About us: Where elite tech talent meets world-class opportunities! At Xenon7, we work with leading enterprises and innovative startups on exciting, cutting-edge projects that leverage the latest technologies across various domains of IT, including Data, Web, Infrastructure, AI, and many others. Our expertise in IT solutions development and on-demand resources allows us to partner with clients on transformative initiatives, driving innovation and business growth. Whether it's empowering global organizations or collaborating with trailblazing startups, we are committed to delivering advanced, impactful solutions that meet today's most complex challenges. We are building a community of top-tier experts, and we're opening the doors to an exclusive group of exceptional AI & ML professionals ready to solve real-world problems and shape the future of intelligent systems.

Structured Onboarding Process
We ensure every member is aligned and empowered:
- Screening: We review your application and experience in Data & AI, ML engineering, and solution delivery.
- Technical Assessment: A two-step process that includes an interactive problem-solving test and a verbal interview about your skills and experience.
- Matching You to Opportunity: We explore how your skills align with ongoing projects and innovation tracks.

Who We're Looking For
We're looking for a Senior MLOps Engineer with deep expertise in the Databricks ecosystem to help us build and scale reliable, secure, and automated ML platforms across enterprise environments. You'll work closely with data scientists, ML engineers, DevOps teams, and cloud architects to implement and maintain production-grade machine learning infrastructure using best practices in MLOps, CI/CD, and cloud-native services. This is a hands-on technical leadership role, ideal for engineers who can work across the entire ML lifecycle, from experiment tracking to scalable deployment, while championing automation, governance, and performance. If you're driven by curiosity and eager to influence how AI shapes the future, this is your platform.

Requirements
- 6+ years of professional experience in DevOps, DataOps, or MLOps roles
- 3+ years hands-on with Databricks, including Delta Lake, MLflow, and cluster/workflow administration
- Strong experience in CI/CD, infrastructure as code (Terraform, GitOps), and Python-based automation
- Solid understanding of ML lifecycle management, experiment tracking, model registries, and automated deployment pipelines (see the sketch after this posting)
- Deep knowledge of AWS (EKS, IAM, Lambda, CloudFormation or Terraform) and/or Azure (ADLS, Azure DevOps, ACR)
- Experience working with containerized environments, including Kubernetes and Helm
- Familiarity with data governance and access control frameworks such as Unity Catalog
- Strong scripting and programming skills in Python, Shell, and YAML/JSON

Benefits
At Xenon7, we're not just building AI systems; we're building a community of talent with the mindset to lead, collaborate, and innovate together.
- Ecosystem of Opportunity: You'll be part of a growing network where client engagements, thought leadership, research collaborations, and mentorship paths are interconnected. Whether you're building solutions or nurturing the next generation of talent, this is a place to scale your influence.
- Collaborative Environment: Our culture thrives on openness, continuous learning, and engineering excellence. You'll work alongside seasoned practitioners who value smart execution and shared growth.
- Flexible & Impact-Driven Work: Whether you're contributing from a client project, innovation sprint, or open-source initiative, we focus on outcomes, not hours. Autonomy, ownership, and curiosity are encouraged here.
- Talent-Led Innovation: We believe communities are strongest when built around real practitioners. Our Innovation Community isn't just a knowledge-sharing forum; it's a launchpad for members to lead new projects, co-develop tools, and shape the direction of AI itself.
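As an illustration of the MLflow-centred lifecycle this role involves (experiment tracking plus a model registry), here is a minimal sketch of logging a run and registering the resulting model. The experiment and model names are hypothetical, and it assumes an MLflow tracking server and registry are already configured, for example the ones built into a Databricks workspace.

```python
# Minimal MLflow sketch: track an experiment run and register the model.
# Assumes MLflow is installed and a tracking server/registry is configured
# (e.g., via MLFLOW_TRACKING_URI or a Databricks workspace). Names are illustrative.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("churn-baseline")  # hypothetical experiment name
with mlflow.start_run():
    model = LogisticRegression(max_iter=200).fit(X_train, y_train)
    mlflow.log_param("max_iter", 200)
    mlflow.log_metric("test_accuracy", model.score(X_test, y_test))
    # Log the model artifact and register it in the model registry in one step.
    mlflow.sklearn.log_model(model, "model", registered_model_name="churn-classifier")
```

A registered model version like this is what an automated deployment pipeline would then promote through stages and serve.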
Posted 3 weeks ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Experience: 10+ years in software development, 5+ years in architectural roles
📍 Location: Hyderabad / Work from Office
💰 Compensation: 30 LPA fixed
📝 Job Type: Full-Time
Note: Only candidates who can join immediately or are serving their notice period should apply for this position.

Job Description:
We are looking for a highly skilled Application Architect to design and oversee the implementation of scalable, high-performance applications. The ideal candidate will have strong expertise in enterprise architecture, microservices, event-driven design, and full-stack development. This role requires both technical expertise and leadership to align technology with business objectives and drive digital transformation.

Key Responsibilities:
- Architect and design scalable, resilient, and secure enterprise applications.
- Define end-to-end architecture across backend, frontend, database, and cloud infrastructure.
- Collaborate with business leaders, product teams, and developers to align technical solutions with business goals.
- Establish and enforce best practices for microservices, API-first design, and event-driven architectures.
- Evaluate, recommend, and integrate new technologies, platforms, and tools based on business needs.
- Lead software design reviews, technical discussions, and system optimizations.
- Guide teams on agile best practices, DevOps, and CI/CD implementation.
- Drive cloud adoption strategies across AWS, Azure, or GCP.
- Ensure high availability, fault tolerance, and scalability of applications.
- Provide technical mentorship to engineers and conduct architectural reviews.
- Optimize performance, security, and cost-efficiency of enterprise applications.
- Troubleshoot complex system-wide issues and implement effective solutions.

Required Skills:
- Strong software architecture and design principles (irrespective of language).
- Hands-on experience with Python or Java for application development.
- Expertise in microservices, event-driven architectures, and API design (illustrated in the sketch after this posting).
- Strong enterprise architecture knowledge and implementation experience.
- Deep understanding of backend and frontend development (React, Angular, Vue, Spring Boot, Django, FastAPI).
- Experience in cloud-native development with AWS, Azure, or GCP.
- Hands-on experience with containerization (Docker, Kubernetes) and orchestration.
- Strong understanding of database technologies (SQL, NoSQL, distributed databases).
- Proficiency in DevOps, CI/CD pipelines, GitOps, Terraform, and Infrastructure as Code (IaC).
- Strong troubleshooting, problem-solving, and performance-optimization skills.
- Ability to clearly present technical concepts to both technical and non-technical audiences.
- Agile development and project management experience.

Preferred Skills:
- Experience with GraphQL, WebSockets, and async processing.
- Knowledge of message brokers (Kafka, RabbitMQ, SQS, Pub/Sub).
- Experience with AI/ML, big data architectures, and streaming data processing.
- Exposure to enterprise security best practices.
- Understanding of serverless architectures (AWS Lambda, Azure Functions, GCP Cloud Functions).

Drop your updated resume to Dhaarani@reveilletechnologies.com
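As a small illustration of the API-first, event-driven style this posting emphasizes, here is a minimal FastAPI sketch: an endpoint accepts an order and hands an "order created" event to an in-process queue for asynchronous handling. The service name, event shape, and endpoint are hypothetical; in a real system the queue would be a broker such as Kafka or RabbitMQ.

```python
# Minimal API-first sketch with FastAPI: a POST endpoint that accepts an order
# and emits an "order.created" event for asynchronous processing.
# Endpoint, model fields, and handler are illustrative, not a prescribed design.
import asyncio
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="orders-service")        # hypothetical service name
event_queue: asyncio.Queue = asyncio.Queue()

class Order(BaseModel):
    order_id: str
    amount: float

@app.on_event("startup")
async def start_consumer() -> None:
    # Spawn a long-lived consumer that reacts to events taken off the queue.
    asyncio.create_task(consume_events())

async def consume_events() -> None:
    while True:
        event = await event_queue.get()
        # A real deployment would publish to Kafka/RabbitMQ here instead.
        print(f"handling event: {event}")

@app.post("/orders")
async def create_order(order: Order) -> dict:
    await event_queue.put(
        {"type": "order.created",
         "payload": {"order_id": order.order_id, "amount": order.amount}}
    )
    return {"status": "accepted", "order_id": order.order_id}
```

The design point is the decoupling: the API returns immediately, and downstream consumers react to the event independently.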
Posted 3 weeks ago
5.0 years
0 Lacs
Delhi
On-site
Customer Solutions | New Delhi, India | R03312

Description
We're not just building better tech. We're rewriting how data moves and what the world can do with it. With Confluent, data doesn't sit still. Our platform puts information in motion, streaming in near real-time so companies can react faster, build smarter, and deliver experiences as dynamic as the world around them. It takes a certain kind of person to join this team. Those who ask hard questions, give honest feedback, and show up for each other. No egos, no solo acts. Just smart, curious humans pushing toward something bigger, together. One Confluent. One Team. One Data Streaming Platform.

About the Role:
Solutions Engineers at Confluent drive not only the early-stage evaluation within the sales process, but also play a crucial role in enabling ongoing value realization for customers, all while helping them move up the adoption maturity curve. In this role you'll partner with Account Executives to be the key technical advisor in service of the customer. You'll be instrumental in surfacing the customers' stated or implicit business needs and coming up with technical designs to best meet those needs. You may find yourself at times facilitating art-of-the-possible discussions and storytelling to inspire customers to adopt new patterns with confidence, and at other times driving creative solutioning to help get past difficult technical roadblocks. Overall, we look to Solutions Engineers to be a key cog within the Customer Success Team, helping foster an environment of sustained success for the customer and incremental adoption of Confluent's technology.

What You Will Do:
- Help advance new and innovative data streaming use cases from conception to go-live
- Execute on and lead technical proofs of concept
- Conduct discovery and whiteboard sessions to develop new use cases
- Provide thought leadership by delivering technical talks and workshops
- Guide customers with hands-on help and best practices to drive operational maturity of their Confluent deployment
- Analyze customer consumption trends and identify optimization opportunities
- Work closely with product and engineering teams, and serve as a key product advocate across the customer, partner, and industry ecosystem
- Forge strong relationships with key customer stakeholders and serve as a dependable partner for them

What You Will Bring:
- 5+ years of Sales/Pre-Sales/Solutions Engineering or similar customer-facing experience in the software sales or implementation space
- Experience with event-driven architecture, data integration and processing techniques, database and data warehouse technologies, or related fields
- First-hand exposure to cloud architecture, migrations, deployment, and application development
- Experience with DevOps/automation, GitOps, or Kubernetes
- Ability to read and write Java, Python, or SQL
- Clear, consistent demonstration of self-starter behavior, a desire to learn new things, and a drive to tackle hard technical problems
- Exceptional presentation and communication capabilities, with confidence presenting to a highly skilled and experienced audience ranging from developers to enterprise architects and up to C-level executives

What Gives You an Edge:
- Technical certifications: cloud developer/architect, data engineering and integration
- Familiarity with solution or value selling
- A challenger mindset and an ability to positively influence people's opinions

Ready to build what's next? Let's get in motion.

Come As You Are
Belonging isn't a perk here. It's the baseline.
We work across time zones and backgrounds, knowing the best ideas come from different perspectives. And we make space for everyone to lead, grow, and challenge what’s possible. We’re proud to be an equal opportunity workplace. Employment decisions are based on job-related criteria, without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, veteran status, or any other classification protected by law.
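For context on the hands-on streaming work described above, here is a minimal sketch that produces events to a Kafka topic with the confluent-kafka Python client. The broker address, topic, and payload are placeholders, and it assumes a reachable Kafka cluster and `pip install confluent-kafka`.

```python
# Minimal Kafka producer sketch using the confluent-kafka Python client.
# Broker address, topic, and payload are placeholders; assumes a reachable cluster.
import json
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})

def on_delivery(err, msg):
    # Called once per message when the broker acknowledges (or rejects) it.
    if err is not None:
        print(f"delivery failed: {err}")
    else:
        print(f"delivered to {msg.topic()} [partition {msg.partition()}]")

for i in range(5):
    event = {"event_id": i, "type": "page_view"}  # illustrative payload
    producer.produce("events", value=json.dumps(event).encode("utf-8"),
                     callback=on_delivery)

producer.flush()  # Block until all queued messages are delivered.
```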
Posted 3 weeks ago
5.0 years
0 Lacs
Mumbai
On-site
Customer Solutions | Mumbai, India | R03313

Description
We're not just building better tech. We're rewriting how data moves and what the world can do with it. With Confluent, data doesn't sit still. Our platform puts information in motion, streaming in near real-time so companies can react faster, build smarter, and deliver experiences as dynamic as the world around them. It takes a certain kind of person to join this team. Those who ask hard questions, give honest feedback, and show up for each other. No egos, no solo acts. Just smart, curious humans pushing toward something bigger, together. One Confluent. One Team. One Data Streaming Platform.

About the Role:
Solutions Engineers at Confluent drive not only the early-stage evaluation within the sales process, but also play a crucial role in enabling ongoing value realization for customers, all while helping them move up the adoption maturity curve. In this role you'll partner with Account Executives to be the key technical advisor in service of the customer. You'll be instrumental in surfacing the customers' stated or implicit business needs and coming up with technical designs to best meet those needs. You may find yourself at times facilitating art-of-the-possible discussions and storytelling to inspire customers to adopt new patterns with confidence, and at other times driving creative solutioning to help get past difficult technical roadblocks. Overall, we look to Solutions Engineers to be a key cog within the Customer Success Team, helping foster an environment of sustained success for the customer and incremental adoption of Confluent's technology.

What You Will Do:
- Help advance new and innovative data streaming use cases from conception to go-live
- Execute on and lead technical proofs of concept
- Conduct discovery and whiteboard sessions to develop new use cases
- Provide thought leadership by delivering technical talks and workshops
- Guide customers with hands-on help and best practices to drive operational maturity of their Confluent deployment
- Analyze customer consumption trends and identify optimization opportunities
- Work closely with product and engineering teams, and serve as a key product advocate across the customer, partner, and industry ecosystem
- Forge strong relationships with key customer stakeholders and serve as a dependable partner for them

What You Will Bring:
- 5+ years of Sales/Pre-Sales/Solutions Engineering or similar customer-facing experience in the software sales or implementation space
- Experience with event-driven architecture, data integration and processing techniques, database and data warehouse technologies, or related fields
- First-hand exposure to cloud architecture, migrations, deployment, and application development
- Experience with DevOps/automation, GitOps, or Kubernetes
- Ability to read and write Java, Python, or SQL
- Clear, consistent demonstration of self-starter behavior, a desire to learn new things, and a drive to tackle hard technical problems
- Exceptional presentation and communication capabilities, with confidence presenting to a highly skilled and experienced audience ranging from developers to enterprise architects and up to C-level executives

What Gives You an Edge:
- Technical certifications: cloud developer/architect, data engineering and integration
- Familiarity with solution or value selling
- A challenger mindset and an ability to positively influence people's opinions

Ready to build what's next? Let's get in motion.

Come As You Are
Belonging isn't a perk here. It's the baseline.
We work across time zones and backgrounds, knowing the best ideas come from different perspectives. And we make space for everyone to lead, grow, and challenge what’s possible. We’re proud to be an equal opportunity workplace. Employment decisions are based on job-related criteria, without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, veteran status, or any other classification protected by law.
Posted 3 weeks ago
5.0 years
5 - 9 Lacs
Bengaluru
On-site
Rystad Energy is a leading global independent research and energy intelligence company dedicated to helping clients navigate the future of energy. By providing high-quality data and thought leadership, our international team empowers businesses, governments, and organizations to make well-informed decisions. Our extensive portfolio of products and solutions covers all aspects of global energy fundamentals, spanning every corner of the oil and gas industry, renewables, clean technologies, supply chain, and power markets. Headquartered in Oslo, Norway, with an expansive global network, our data, analysis, advisory, and education services give clients a competitive edge in the market. For more information, visit www.rystadenergy.com.

Role
Our DevOps Engineers play a key role in designing and implementing robust CI/CD pipelines, creating and optimizing our infrastructure, and developing the applications and services that support and enhance our platform. They also improve our Internal Developer Platform (IDP), ensuring secure, scalable, and seamless deployments to meet the demands of our global operations. If you're passionate about driving innovation in the cloud, building resilient infrastructure, and developing powerful supporting applications, we'd love to have you on board.

Requirements

Required Skills
- Proficient in English, both spoken and written
- Experience using Infrastructure as Code (IaC)
- Skilled in writing and operating applications and services
- Excellent communication and collaboration skills
- Familiar with GitOps practices and tools, such as ArgoCD or Flux
- Experience in hybrid infrastructure setups (on-prem + cloud)
- Able to troubleshoot and resolve issues related to infrastructure, networking, deployments, applications, and performance
- Capable of working independently and taking ownership of assigned projects and tasks
- A proactive, independent mindset with a passion for learning and collaboration
- Knowledgeable in security best practices and able to implement them in a DevOps context

Preferred Skills
- 5-8 years of experience in software development, with at least 3-5 years in a DevOps role
- Azure
- Kubernetes
- Terraform
- Ansible
- Helm
- ArgoCD
- Good scripting skills (Bash, PowerShell, or Python is a plus)
- Linux
- Docker
- Databases (MSSQL, Redis, RabbitMQ, MongoDB, PostgreSQL, etc.)
- Observability stack (Grafana, Prometheus, Loki, OpenTelemetry, etc.; see the sketch after this posting)
- Azure DevOps, Bitbucket
- Nginx

Responsibilities
- Automate deployments and build robust CI/CD pipelines to support global workloads
- Improve and maintain our Internal Developer Platform (IDP) to ensure security, efficiency, and scalability
- Design, build, and operate applications and services that support infrastructure and cloud environments
- Troubleshoot and resolve issues related to infrastructure, networking, deployments, applications, and performance
- Stay up to date with the latest trends and develop expertise in emerging cloud technologies
- Set up and optimize monitoring, alerting, and incident response processes
- Proactively identify and resolve performance, reliability, and security issues
- Collaborate with development teams to integrate SRE best practices into their workflows
- Conduct post-mortems and root cause analyses on incidents

Qualifications
- Education: Bachelor's degree in Computer Science or a related field (a plus)
- Certified Kubernetes Administrator (CKA) or CKAD certification (a plus); Azure Solutions Architect (a plus)

Benefits
- Lean, flat, non-hierarchical structure that will enable you to impact products and workflows
- A diverse, inclusive, meritocratic culture
- Community driven, with the desire to create and share to have an impact globally
- Keen to challenge your skills and deepen existing expertise while learning new ones
- A global and quickly expanding international business culture with more than 80 nationalities
- Inclusive and supportive working environment with a focus on a culture of sharing
- Opportunity to join a globally leading energy knowledge house
- Flexible work environment
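To give a flavour of the Grafana/Prometheus observability work this role touches, here is a minimal sketch that exposes custom application metrics for Prometheus to scrape, using the prometheus_client library. The metric names, endpoint label, and port are illustrative, and it assumes `pip install prometheus-client`.

```python
# Minimal observability sketch: expose app metrics for Prometheus to scrape.
# Metric names and port are illustrative; assumes `pip install prometheus-client`.
import random
import time
from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled", ["endpoint"])
LATENCY = Histogram("app_request_latency_seconds", "Request latency")

@LATENCY.time()
def handle_request() -> None:
    # Stand-in for real work; the Histogram records how long it took.
    time.sleep(random.uniform(0.01, 0.1))
    REQUESTS.labels(endpoint="/api/data").inc()

if __name__ == "__main__":
    start_http_server(8000)  # Metrics served at http://localhost:8000/metrics
    while True:
        handle_request()
```

A Prometheus scrape job pointed at port 8000 would then collect these series, and Grafana dashboards and alert rules would be built on top of them.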
Posted 3 weeks ago
3.0 - 5.0 years
5 - 9 Lacs
Bengaluru
On-site
Rystad Energy is a leading global independent research and energy intelligence company dedicated to helping clients navigate the future of energy. By providing high-quality data and thought leadership, our international team empowers businesses, governments, and organizations to make well-informed decisions. Our extensive portfolio of products and solutions covers all aspects of global energy fundamentals, spanning every corner of the oil and gas industry, renewables, clean technologies, supply chain, and power markets. Headquartered in Oslo, Norway, with an expansive global network, our data, analysis, advisory, and education services give clients a competitive edge in the market. For more information, visit www.rystadenergy.com.

Role
We are looking for a DevOps Engineer to join our growing platform team in Bangalore. In this role, you'll support the development and operation of CI/CD pipelines, help maintain and improve our cloud infrastructure, and work on tools that enhance our development and deployment processes. If you're eager to grow your expertise in cloud technologies, automation, and infrastructure management while working in a collaborative global environment, we'd be excited to hear from you.

Requirements

Required Skills
- Strong written and spoken English
- Familiarity with Infrastructure as Code (e.g., Terraform, Ansible)
- Hands-on experience with at least one major cloud platform (Azure preferred)
- Understanding of containerization (Docker) and orchestration (Kubernetes basics); see the sketch after this posting
- Basic scripting abilities (Bash, PowerShell, or Python)
- Exposure to CI/CD tools and version control systems (e.g., Azure DevOps, Bitbucket)
- Enthusiasm for automation, troubleshooting, and process improvement
- Ability to work both independently and as part of a collaborative team

Preferred Skills
- Hands-on experience with Azure or other cloud environments
- Exposure to Kubernetes, Helm, and GitOps tools like ArgoCD or Flux
- Experience with monitoring tools (Grafana, Prometheus)
- Understanding of networking, system performance, and security best practices
- Basic knowledge of databases (MSSQL, PostgreSQL, or similar)
- Willingness to learn and take ownership of tasks in a dynamic environment

Responsibilities
- Assist in building and maintaining CI/CD pipelines for smooth and automated deployments
- Contribute to the development and enhancement of our Internal Developer Platform (IDP)
- Support and manage cloud-based and on-premise infrastructure components
- Troubleshoot issues related to infrastructure, deployments, and application performance
- Participate in monitoring, alerting, and incident response processes
- Collaborate with senior engineers and development teams to adopt DevOps and SRE best practices
- Document infrastructure processes, playbooks, and configuration changes
- Stay current with new tools and technologies in the DevOps ecosystem

Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related technical field (preferred but not mandatory)
- 3-5 years of total professional experience, with 2-3 years in a DevOps or related infrastructure-focused role

Benefits
- Lean, flat, non-hierarchical structure that will enable you to impact products and workflows
- A diverse, inclusive, meritocratic culture
- Community driven, with the desire to create and share to have an impact globally
- Keen to challenge your skills and deepen existing expertise while learning new ones
- A global and quickly expanding international business culture with more than 80 nationalities
- Inclusive and supportive working environment with a focus on a culture of sharing
- Opportunity to join a globally leading energy knowledge house
- Flexible work environment
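As a small taste of the container tooling listed above, here is a sketch that runs a container and reads its logs with the Docker SDK for Python. The image and command are placeholders, and it assumes a local Docker daemon plus `pip install docker`.

```python
# Minimal Docker SDK sketch: run a container, wait for it, read its logs.
# Assumes a local Docker daemon and `pip install docker`; image is a placeholder.
import docker

client = docker.from_env()

# Run a short-lived container in the background.
container = client.containers.run(
    "alpine:3.19",
    command=["echo", "hello from the platform team"],
    detach=True,
)

container.wait()                   # Block until the container exits.
print(container.logs().decode())   # -> "hello from the platform team"
container.remove()                 # Clean up the stopped container.
```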
Posted 3 weeks ago
12.0 years
0 Lacs
India
On-site
Job Title: Staff Engineer (12+ Years Experience)

About the Role
We are seeking a seasoned Platform Engineer with 12+ years of experience to join our platform engineering team. This role will play a critical part in designing, building, and maintaining the internal platforms and tools that enable software development teams to work efficiently and effectively. As a Staff Engineer, you will play a pivotal role in designing and implementing golden paths to streamline development processes across the organization. Your expertise will be crucial in enhancing application observability and ensuring robust monitoring and diagnostics capabilities. You will collaborate closely with cross-functional teams to create scalable, resilient, and efficient platforms that support the organization's strategic goals and drive innovation.

Key Responsibilities
- Design, develop, and maintain robust cloud platforms (e.g., AWS, Azure, Google Cloud).
- Enhance monitoring and diagnostics capabilities to ensure high availability and performance.
- Assist in building the technical roadmap and advancing the technical capabilities of the platform engineering team.
- Work closely with cross-functional teams to align platform capabilities with organizational goals.
- Identify and resolve complex technical issues, ensuring the stability and scalability of the platforms.
- Lead and mentor a team of developers and engineers, fostering a collaborative and innovative environment.

Key Requirements
- 12+ years of hands-on experience in platform engineering, DevOps, or related fields.
- Minimum 5+ years of experience in application development using the .NET and Java tech stacks.
- A technical lead who has consistently demonstrated the capability to develop high-level technical designs.
- Helps build the product technical roadmap and advance technical capabilities.
- Advanced knowledge of AWS and Azure infrastructure.
- Advanced knowledge of scripting languages such as Python.
- Advanced knowledge of IaC tools such as Terraform or CloudFormation (see the sketch after this posting).
- Advanced knowledge of designing and implementing CI/CD pipelines with tools like Jenkins, CodePipeline, or similar.
- Good knowledge of version control tools like Git, Bitbucket, or Team Foundation Server.
- Good knowledge of working with containers using Docker and Kubernetes.
- Good knowledge of configuration management tools like Ansible, Chef, or Puppet.
- Hands-on experience with code reviews and design reviews.

Nice to Have
- AWS certifications or similar.
- Serverless automation.
- Experience with GitOps workflows and progressive delivery strategies.
- Experience with system and IT operations: Windows and Linux OS administration.
- Understanding of networking principles and technologies (DNS, load balancers, reverse proxies), Microsoft Active Directory, and Active Directory Federation Services (ADFS).
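As an illustration of the IaC tooling referenced above, here is a minimal sketch that creates a CloudFormation stack from Python with boto3 and waits for it to complete. The stack name, template, region, and bucket name are hypothetical (S3 bucket names must be globally unique), and it assumes AWS credentials are configured in the environment.

```python
# Minimal IaC sketch: create a CloudFormation stack via boto3 and wait for it.
# Assumes configured AWS credentials; stack/template/bucket names are hypothetical.
import json
import boto3

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "ArtifactBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": "example-platform-artifacts"},
        }
    },
}

cfn = boto3.client("cloudformation", region_name="us-east-1")
cfn.create_stack(
    StackName="platform-artifacts",
    TemplateBody=json.dumps(template),
)

# Block until the stack reaches CREATE_COMPLETE (raises on failure).
waiter = cfn.get_waiter("stack_create_complete")
waiter.wait(StackName="platform-artifacts")
print("stack created")
```

In practice the same template would live in version control and be applied by a CI/CD pipeline rather than an ad hoc script; the sketch just shows the mechanics.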
Posted 3 weeks ago
4.0 years
0 Lacs
Ahmedabad
On-site
Job Title: Certified DevOps Engineer – AWS (Urgent) / Microsoft / Oracle / Adobe / Cisco
Experience: 4 to 7 Years
Location: Ahmedabad
Company: GMIndia Pvt. Ltd.

About GMIndia Pvt. Ltd.:
GMIndia Pvt. Ltd. is an innovation-driven IT solutions company delivering future-ready technology and automation services across industries. As we grow our DevOps capabilities, we are actively hiring certified DevOps Engineers to lead cloud and infrastructure transformation. Immediate priority is given to AWS-certified professionals.

Position Overview:
We are seeking a results-oriented Certified DevOps Engineer with 4-7 years of experience and valid certifications in AWS, Microsoft Azure, Oracle Cloud, Adobe Experience Cloud, or Cisco DevNet. The role demands strong experience in cloud platforms, infrastructure automation, CI/CD pipelines, and containerized deployments.

Key Responsibilities:
- Design and implement CI/CD pipelines to accelerate development and delivery cycles
- Automate infrastructure using tools like Terraform, Ansible, or CloudFormation
- Deploy, manage, and monitor systems across AWS, Azure, Oracle, Cisco, or Adobe Cloud environments
- Collaborate with cross-functional teams for seamless application delivery and integration
- Manage containerized deployments using Docker and Kubernetes
- Implement effective system monitoring, logging, backup, and disaster recovery strategies (see the monitoring sketch after this posting)
- Ensure infrastructure security and compliance with cloud best practices

Required Skills & Certifications:
- 4 to 7 years of hands-on DevOps/cloud experience
- Mandatory: a valid certification in one or more of the following (only certified candidates will be considered):
  - AWS Certified DevOps Engineer / Solutions Architect (high priority)
  - Microsoft Certified: Azure DevOps Engineer Expert
  - Oracle Cloud Infrastructure (OCI) Certified
  - Adobe Certified Expert (Experience Cloud / AEM)
  - Cisco Certified DevNet Professional or Specialist
- Proficient in scripting (Python, Bash, PowerShell)
- Strong knowledge of containerization (Docker) and orchestration (Kubernetes)
- Experience with monitoring tools like the ELK Stack, Grafana, Prometheus, CloudWatch, etc.
- Familiar with DevSecOps practices and secure deployment standards

Preferred Qualifications:
- Experience with hybrid or multi-cloud deployments
- Familiarity with GitOps, serverless architecture, or edge computing
- Exposure to agile development practices and tools (JIRA, Confluence)

Why Join GMIndia Pvt. Ltd.?
- Urgent opportunity for AWS-certified engineers: immediate onboarding
- Work on challenging, cloud-native projects with cutting-edge technologies
- Supportive and collaborative work culture
- Competitive salary and long-term career growth opportunities

Apply now if you are a certified DevOps expert ready to take on cloud innovation challenges with a growing tech leader in Ahmedabad.

Job Type: Full-time
Benefits: Provident Fund
Schedule: Day shift, Monday to Friday
Work Location: In person
Speak with the employer: +91 8925954884
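As a concrete example of the CloudWatch-based monitoring mentioned above, here is a minimal boto3 sketch that creates a CPU-utilization alarm on an EC2 instance. The alarm name, instance ID, region, and threshold are illustrative, and it assumes configured AWS credentials.

```python
# Minimal monitoring sketch: a CloudWatch alarm on EC2 CPU utilization.
# Assumes configured AWS credentials; names, IDs, and threshold are illustrative.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

cloudwatch.put_metric_alarm(
    AlarmName="web-server-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,              # Evaluate the metric over 5-minute windows...
    EvaluationPeriods=2,     # ...and require 2 consecutive breaches
    Threshold=80.0,          # Alarm when average CPU exceeds 80%
    ComparisonOperator="GreaterThanThreshold",
    AlarmDescription="High CPU on the web server (illustrative alarm)",
)
print("alarm created")
```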
Posted 3 weeks ago
5.0 years
3 - 5 Lacs
Noida
On-site
Position type: Full Time
Work Location: Bangalore/Gurugram/Noida
Working style: Hybrid
People Manager role: No
Required education and certifications critical for the role: Bachelor's degree in Computer Science, Information Security, or a related field
Required years of experience: Minimum 5+ years of relevant experience

AON IS IN THE BUSINESS OF BETTER DECISIONS
At Aon, we shape decisions for the better to protect and enrich the lives of people around the world. As an organization, we are committed as one firm to our purpose, united through trust as one inclusive, diverse team, and we are passionate about helping our colleagues and clients succeed.

GENERAL DESCRIPTION OF ROLE:
The Platform Security Engineering team under the Counter Threat Engineering organization is seeking a highly skilled and motivated Security Engineer. This team leverages DevOps and GitOps practices to provide engineering support for security platforms such as email, endpoint, cloud, security monitoring, and security telemetry that are critical to the defense of Aon.

Key Responsibilities
- Design, build, maintain, and support security platforms within the scope of Security Platform Engineering.
- Develop and maintain automation scripts and tools to streamline security operations using DevOps and GitOps practices (see the sketch after this posting).
- Work with cross-functional teams to integrate security solutions with other security and IT systems.
- Stay up to date with the latest security threats, trends, and technologies to ensure Aon's defenses remain robust.
- Lead and coordinate tasks with other teams related to the department's initiatives and projects.
- Engage and work with vendors.
- Research and evaluate new capabilities in the Security Platform Engineering space.
- Assist in the enhancement of security platforms, ensuring integration and interoperability among email, endpoint, cloud, and telemetry systems.
- Provide technical guidance and support to other team members and stakeholders regarding security best practices.
- Collaborate with the security operations team and IT partners to troubleshoot and analyze security and production incidents, implementing preventive measures across multiple platforms.
- Work as part of a global team, including on-call participation.

Skills and Qualifications
- Bachelor's degree in Computer Science, Information Security, or a related field.
- 5+ years of experience in cybersecurity or a related field.
- Strong understanding of security protocols and technologies across email, endpoint, cloud, and telemetry platforms.
- Experience with security tools and platforms such as EDR, CSPM, SIEM, inbound/outbound email security controls, endpoint security, etc.
- Proficiency in scripting languages such as Python, PowerShell, or Bash.
- Familiarity with DevOps and GitOps practices, including CI/CD pipelines and version control systems like Git.
- Excellent problem-solving skills and the ability to work under pressure.
- Strong communication and collaboration skills.

How we support our colleagues
In addition to our comprehensive benefits package, we encourage an inclusive workforce. Plus, our agile environment allows you to manage your wellbeing and work/life balance, ensuring you can be your best self at Aon. Furthermore, all colleagues enjoy two "Global Wellbeing Days" each year, encouraging you to take time to focus on yourself. We offer a variety of working style solutions for our colleagues as well. Our continuous learning culture inspires and equips you to learn, share, and grow, helping you achieve your fullest potential.
As a result, at Aon, you are more connected, more relevant, and more valued. Aon values an innovative and inclusive workplace where all colleagues feel empowered to be their authentic selves. Aon is proud to be an equal opportunity workplace. Aon provides equal employment opportunities to all employees and applicants for employment without regard to race, color, religion, creed, sex, sexual orientation, gender identity, national origin, age, disability, veteran, marital, domestic partner status, or other legally protected status. We are committed to providing equal employment opportunities and fostering an inclusive workplace. If you require accommodations during the application or interview process, please let us know. You can request accommodations by emailing us at ReasonableAccommodations@Aon.com or your recruiter. We will work with you to meet your needs and ensure a fair and equitable experience.
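To illustrate the kind of security-automation scripting this role calls for, here is a minimal Python sketch that scans an auth log for failed SSH logins and flags source IPs exceeding a threshold. The log path, regex, and threshold are illustrative; real security platforms (SIEM, EDR) would expose this data through their own APIs instead.

```python
# Minimal security-automation sketch: flag IPs with repeated failed SSH logins.
# Log path, regex, and threshold are illustrative placeholders.
import re
from collections import Counter

LOG_PATH = "/var/log/auth.log"  # hypothetical log location
FAILED = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")
THRESHOLD = 5

hits: Counter = Counter()
with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        match = FAILED.search(line)
        if match:
            hits[match.group(1)] += 1

for ip, count in hits.most_common():
    if count >= THRESHOLD:
        print(f"ALERT: {ip} had {count} failed logins")  # feed into alerting
```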
Posted 3 weeks ago
7.0 years
40 Lacs
Faridabad, Haryana, India
Remote
Experience: 7+ years
Salary: INR 4,000,000 per year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by MatchMove)
(Note: This is a requirement for one of Uplers' clients - MatchMove)

What do you need for this opportunity?
Must-have skills: GenAI, AWS data stack, Kinesis, open table formats, PySpark, stream processing, Kafka, MySQL, Python

MatchMove is looking for: Technical Lead - Data Platform
As Technical Lead - Data Platform, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.

You will contribute to:
- Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services.
- Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark.
- Structuring and evolving data into open table formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services.
- Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases.
- Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment.
- Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM).
- Using generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights.
- Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.

Responsibilities:
- Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR.
- Manage ingestion from transactional sources using AWS DMS, with a focus on schema-drift handling and low-latency replication.
- Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation.
- Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g., fraud engines) and human-readable interfaces (e.g., dashboards).
- Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations.
- Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
- Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.

Requirements
- At least 7 years of experience in data engineering.
- Deep hands-on experience with the AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
- Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
- Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation (see the PySpark sketch after this posting).
- Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
- Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
- Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
- Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene.
- Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.

Brownie points:
- Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements.
- Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection.
- Familiarity with data contracts, data mesh patterns, and data-as-a-product principles.
- Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases.
- Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3.
- Experience building data platforms for ML/AI teams or integrating with model feature stores.

Engagement Model: Direct placement with client
This is a remote role.
Shift timings: 10 AM to 7 PM

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal. Depending on the assessments you clear, you can apply for those as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
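For a flavour of the PySpark and open-table-format work described above, here is a minimal sketch that writes a DataFrame to an Apache Iceberg table with Spark's DataFrameWriterV2 API. The catalog, database, table, and columns are hypothetical, and it assumes a Spark session already configured with an Iceberg catalog (for example, the iceberg-spark-runtime package on the classpath plus Glue catalog settings).

```python
# Minimal PySpark + Iceberg sketch: create a partitioned Iceberg table.
# Assumes a Spark session configured with an Iceberg catalog named "glue";
# catalog, database, table, and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("iceberg-demo").getOrCreate()

events = spark.createDataFrame(
    [("t-001", "payment", 120.50), ("t-002", "refund", -40.00)],
    ["txn_id", "txn_type", "amount"],
).withColumn("event_date", F.current_date())

# DataFrameWriterV2: partition by date and create (or replace) the table.
(events.writeTo("glue.payments.transactions")
       .partitionedBy(F.col("event_date"))
       .using("iceberg")
       .createOrReplace())

# Read back via the catalog table name; Iceberg also supports snapshot
# (time-travel) reads against the same table.
spark.table("glue.payments.transactions").show()
```

Date-based identity partitioning like this is a common starting point; production tables would also tune file sizes, compression, and snapshot expiry.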
Posted 3 weeks ago
7.0 years
40 Lacs
Greater Hyderabad Area
Remote
Experience: 7+ years
Salary: INR 4,000,000 per year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by MatchMove)
(Note: This is a requirement for one of Uplers' clients - MatchMove)

What do you need for this opportunity?
Must-have skills: GenAI, AWS data stack, Kinesis, open table formats, PySpark, stream processing, Kafka, MySQL, Python

MatchMove is looking for: Technical Lead - Data Platform
As Technical Lead - Data Platform, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.

You will contribute to:
- Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services.
- Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark.
- Structuring and evolving data into open table formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services.
- Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases.
- Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment.
- Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM).
- Using generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights.
- Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.

Responsibilities:
- Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR.
- Manage ingestion from transactional sources using AWS DMS, with a focus on schema-drift handling and low-latency replication.
- Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation.
- Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g., fraud engines) and human-readable interfaces (e.g., dashboards).
- Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations.
- Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
- Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.

Requirements
- At least 7 years of experience in data engineering.
- Deep hands-on experience with the AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
- Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
- Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
- Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
- Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
- Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
- Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene.
- Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.

Brownie points:
- Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements.
- Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection.
- Familiarity with data contracts, data mesh patterns, and data-as-a-product principles.
- Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases.
- Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3.
- Experience building data platforms for ML/AI teams or integrating with model feature stores.

Engagement Model: Direct placement with client
This is a remote role.
Shift timings: 10 AM to 7 PM

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal. Depending on the assessments you clear, you can apply for those as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 3 weeks ago
0 years
0 Lacs
Delhi, India
On-site
What is Hunch?
Hunch is a dating app that helps you land a date without swiping like a junkie. Designed for people tired of mindless swiping and commodified matchmaking, Hunch leverages a powerful AI engine to help users find meaningful connections by focusing on personality over just looks. With 2M+ downloads and a 4.4-star rating, Hunch is going viral in the US by challenging the swipe-left/right norm of traditional apps. Hunch is a Series A funded ($23 million) startup building the future of social discovery in a post-AI world. Link to our fundraising announcement.

Key offerings of Hunch:
- Swipe Less, Vibe More: Curated profiles, cutting the clutter of endless swiping.
- Personality Matters: Opinion-based, belief-based, and thought-based compatibility rather than just focusing on looks.
- Every Match, Verified: No bots, no catfishing, just real, trustworthy connections.
- Match Scores: Our AI shows compatibility percentages, helping users identify their "100% vibe match."

We're looking for a highly motivated and skilled DevOps Engineer.

Expected Key Deliverables:
- Infrastructure Management: Design, implement, and manage cloud-based infrastructure using platforms like AWS and Google Cloud to support our applications and services. Leverage IaC tools such as Terraform to automate provisioning, ensure consistency, and enable scalable infrastructure management.
- Continuous Integration and Deployment (CI/CD): Design, develop, and maintain automated CI/CD pipelines using GitHub Actions/Jenkins for continuous integration and ArgoCD for declarative, GitOps-based continuous deployment, enabling rapid, reliable, and auditable software releases across environments.
- Configuration Management: Implement and manage configuration management tools to ensure consistency across environments.
- Monitoring and Alerting: Set up and maintain robust monitoring tools like Prometheus for metrics-based monitoring, the ELK Stack for log aggregation and analysis, and New Relic for real-time application performance monitoring. Implement alerting systems to ensure timely detection and resolution of system and application issues.
- Containerization and Orchestration: Utilize containerization platforms (e.g., Docker) and orchestration tools (e.g., Kubernetes) to optimize deployment and scalability, leveraging Karpenter or Cluster Autoscaler for dynamic, cost-efficient node provisioning and workload autoscaling.
- Security and Compliance: Implement security best practices and ensure compliance with industry standards and regulations. Conduct regular security assessments.
- Collaboration and Communication: Work closely with development, QA, and operations teams to ensure smooth integration and deployment processes.
- Automation and Scripting: Develop and maintain scripts for automated provisioning, deployment, and system maintenance tasks (see the cluster-health sketch after this posting).
- Backup and Disaster Recovery: Establish and maintain backup and disaster recovery processes to ensure data integrity and availability.
- Performance & Cost Optimization: Identify and address performance and cost bottlenecks in the infrastructure and application stack.
- Documentation and Knowledge Sharing: Create and maintain comprehensive documentation for system configurations, processes, and procedures. Share knowledge with the team.
- Incident Response and Troubleshooting: Participate in incident response activities and perform root cause analysis for system outages or performance issues.

What we have to offer
- Competitive financial rewards + annual PLI (Performance Linked Incentives).
- Meritocracy-driven, candid, and diverse culture.
- Employee benefits like medical insurance.
- One annual all-expenses-paid company trip for all employees to bond.
- Although we work from our office in New Delhi, we are flexible in our style and approach.

Life @Hunch
- Work Culture: At Hunch we take our work seriously but don't take ourselves too seriously. Everyone is encouraged to think as owners and not renters, and we prefer to let builders build, empowering people to pursue independent ideas.
- Impact: Your work will shape the future of social engagement and connect people around the world.
- Collaboration: Join a diverse team of creative minds and be part of a supportive community.
- Growth: We invest in your development and provide opportunities for continuous learning.
- Backed by Global Investors: Hunch is a Series A funded startup, backed by Hashed, AlphaWave, Brevan Howard, and Polygon Studios.
- Experienced Leadership: Hunch is founded by a trio of industry veterans - Ish Goel (CEO), Nitika Goel (CTO), and Kartic Rakhra (CMO) - serial entrepreneurs whose last exit was Nexus Mutual, a web3 consumer-tech startup.
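As an example of the automation scripting mentioned in the deliverables above, here is a minimal sketch using the official Kubernetes Python client to list pods that are not in a Running or Succeeded state. It assumes a local kubeconfig and `pip install kubernetes`; it is a sketch for illustration, not production tooling.

```python
# Minimal cluster-health sketch: list pods that are not Running/Succeeded.
# Assumes a local kubeconfig and `pip install kubernetes`; illustrative only.
from kubernetes import client, config

config.load_kube_config()  # in-cluster code would use load_incluster_config()
v1 = client.CoreV1Api()

unhealthy = []
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    if pod.status.phase not in ("Running", "Succeeded"):
        unhealthy.append((pod.metadata.namespace, pod.metadata.name,
                          pod.status.phase))

if unhealthy:
    for ns, name, phase in unhealthy:
        print(f"{ns}/{name}: {phase}")  # feed into alerting/chat-ops, etc.
else:
    print("all pods healthy")
```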
Posted 3 weeks ago
3.0 - 6.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Title: DevOps Engineer (Kubernetes and Azure)
Location: Noida (Onsite)
Job Type: Full-time
Experience: 3-6 years of experience in the related field
Salary: Up to 15 LPA (based on your experience)

About Us:
DeepSpatial is a technology company that specializes in leveraging artificial intelligence and machine learning to analyze geospatial data. By integrating advanced analytics with location intelligence, DeepSpatial helps businesses optimize their operations, enhance decision-making, and drive strategic growth. Its solutions cater to various industries, including education, retail, and environmental management, enabling organizations to gain insights from complex datasets and visualize trends effectively. With a focus on innovation, DeepSpatial aims to transform how organizations utilize geographic data for impactful outcomes.

Key Responsibilities:

Kubernetes Administration:
- Manage and optimize Kubernetes clusters on cloud platforms (Azure preferred).
- Implement and monitor Kubernetes resources such as Pods, Deployments, Services, StatefulSets, ConfigMaps, and Secrets.
- Troubleshoot and resolve Kubernetes-related issues, including networking, storage, and cluster performance problems.
- Set up and manage Kubernetes monitoring and alerting using tools like Prometheus, Grafana, and the ELK stack.
- Ensure high availability, security, and scalability of the Kubernetes clusters.
- Perform regular backups, disaster recovery, and patching of Kubernetes environments.
- Manage Helm charts for package management and deployments.

Azure DevOps Engineering:
- Design, implement, and optimize Continuous Integration and Continuous Deployment (CI/CD) pipelines using Azure DevOps.
- Work with development teams to automate infrastructure provisioning and application deployment in cloud and hybrid environments.
- Integrate and maintain automated testing, security scanning, and code quality checks using SonarQube in the CI/CD pipeline.
- Manage and optimize cloud infrastructure in Azure, ensuring cost efficiency, high availability, and security.
- Use ARM templates or similar Infrastructure-as-Code (IaC) tools to automate infrastructure provisioning and management.
- Implement monitoring and alerting solutions to ensure the health of DevOps pipelines and services.
- Work with version control systems such as Git, and manage repositories and branches effectively in Azure Repos.
- Collaborate with security teams to ensure the compliance of DevOps processes and environments with industry standards and best practices.

Collaboration and Support:
- Work closely with cross-functional teams (development, operations, QA) to provide solutions for automation, configuration management, and deployment workflows.
- Provide guidance and best practices for Kubernetes usage, Azure DevOps pipelines, and cloud infrastructure management.
- Actively participate in the improvement of DevOps and cloud infrastructure processes, advocating for automation and streamlined workflows.

Other Duties:
- Stay up to date with the latest advancements in Kubernetes, Azure, and DevOps tools and technologies.
- Document all operational procedures, best practices, and troubleshooting steps for team reference.
- Support and manage Kubernetes-based applications across multiple environments (Dev, QA, Prod).

Qualifications:

Required Skills:
- Experience with Kubernetes, including cluster administration, deployments, and monitoring.
- Proficiency with Azure cloud services, including Azure Kubernetes Service (AKS), Azure DevOps, and related tools (see the AKS sketch after this posting).
- Hands-on experience with Azure DevOps pipelines, version control, release management, and automation.
- Familiarity with Infrastructure-as-Code (IaC) tools like Azure ARM templates or Bicep.
- Experience with containerization technologies, primarily Docker and containerd.
- Proficiency with Linux/Unix systems and shell scripting (Bash, Python, etc.).
- Knowledge of CI/CD concepts and best practices.
- Experience with container orchestration, particularly Helm for Kubernetes deployments.
- Familiarity with monitoring tools like Prometheus, Grafana, or the ELK stack.
- Experience with shell and PowerShell scripting to automate tasks, manage resources, and enhance workflows.
- Understanding of cloud networking, storage, and security best practices, particularly within Azure environments.

Preferred Skills:
- Azure certifications such as Microsoft Certified: Azure Administrator Associate or Azure DevOps Engineer Expert.
- Experience with serverless architectures and managed Kubernetes services (e.g., AKS).
- Familiarity with GitOps principles and tools (e.g., ArgoCD, Flux).
- Knowledge of security best practices in cloud-native applications, including RBAC, Network Policies, and Pod Security Policies.

Education and Experience:
- Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience).

Soft Skills:
- Excellent problem-solving and troubleshooting skills.
- Strong communication skills, both written and verbal, with the ability to communicate complex technical concepts to non-technical stakeholders.
- Ability to work independently as well as part of a team.
- Strong collaboration skills, with a focus on supporting cross-functional teams and delivering customer-driven solutions.
- Proactive, self-motivated, and eager to learn and improve.

Ready to grow with us? Send your resume to Charu.Gautam@deepspatial.ai
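As a small illustration of working with AKS programmatically, here is a sketch that lists the AKS clusters in a subscription using the Azure SDK for Python. The subscription ID is a placeholder, and it assumes `pip install azure-identity azure-mgmt-containerservice` plus credentials available to DefaultAzureCredential (for example, after `az login`).

```python
# Minimal AKS sketch: list managed clusters in a subscription via the Azure SDK.
# Subscription ID is a placeholder; assumes azure-identity and
# azure-mgmt-containerservice are installed and credentials are available
# to DefaultAzureCredential (e.g., after `az login`).
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerservice import ContainerServiceClient

subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder
credential = DefaultAzureCredential()
aks = ContainerServiceClient(credential, subscription_id)

for cluster in aks.managed_clusters.list():
    print(f"{cluster.name}: k8s {cluster.kubernetes_version}, "
          f"state={cluster.provisioning_state}")
```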
Posted 3 weeks ago
4.0 - 7.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Job Title: Certified DevOps Engineer – AWS (Urgent) / Microsoft / Oracle / Adobe / Cisco
Experience: 4 to 7 Years
Location: Ahmedabad
Company: GMIndia Pvt. Ltd.

About GMIndia Pvt. Ltd.:
GMIndia Pvt. Ltd. is an innovation-driven IT solutions company delivering future-ready technology and automation services across industries. As we grow our DevOps capabilities, we are actively hiring certified DevOps Engineers to lead cloud and infrastructure transformation. Immediate priority is given to AWS-certified professionals.

Position Overview:
We are seeking a results-oriented Certified DevOps Engineer with 4-7 years of experience and valid certifications in AWS, Microsoft Azure, Oracle Cloud, Adobe Experience Cloud, or Cisco DevNet. The role demands strong experience in cloud platforms, infrastructure automation, CI/CD pipelines, and containerized deployments.

Key Responsibilities:
- Design and implement CI/CD pipelines to accelerate development and delivery cycles
- Automate infrastructure using tools like Terraform, Ansible, or CloudFormation
- Deploy, manage, and monitor systems across AWS, Azure, Oracle, Cisco, or Adobe Cloud environments
- Collaborate with cross-functional teams for seamless application delivery and integration
- Manage containerized deployments using Docker and Kubernetes
- Implement effective system monitoring, logging, backup, and disaster recovery strategies
- Ensure infrastructure security and compliance with cloud best practices

Required Skills & Certifications:
- 4 to 7 years of hands-on DevOps/cloud experience
- Mandatory: a valid certification in one or more of the following (only certified candidates will be considered):
  - AWS Certified DevOps Engineer / Solutions Architect (high priority)
  - Microsoft Certified: Azure DevOps Engineer Expert
  - Oracle Cloud Infrastructure (OCI) Certified
  - Adobe Certified Expert (Experience Cloud / AEM)
  - Cisco Certified DevNet Professional or Specialist
- Proficient in scripting (Python, Bash, PowerShell)
- Strong knowledge of containerization (Docker) and orchestration (Kubernetes)
- Experience with monitoring tools like the ELK Stack, Grafana, Prometheus, CloudWatch, etc.
- Familiar with DevSecOps practices and secure deployment standards

Preferred Qualifications:
- Experience with hybrid or multi-cloud deployments
- Familiarity with GitOps, serverless architecture, or edge computing
- Exposure to agile development practices and tools (JIRA, Confluence)

Why Join GMIndia Pvt. Ltd.?
- Urgent opportunity for AWS-certified engineers: immediate onboarding
- Work on challenging, cloud-native projects with cutting-edge technologies
- Supportive and collaborative work culture
- Competitive salary and long-term career growth opportunities
Posted 3 weeks ago
0 years
0 Lacs
India
On-site
"Cipher" - Derived from the word Sifr meaning 0 or nothing. "Ops" - Operations At CipherOps, we are building an agentic-first, cursor-style AI platform that streamlines your entire DevOps lifecycle. From provisioning infrastructure as code and managing GitOps deployments, running observability queries, and enforcing security and compliance checks to Incident Management, CoOps unifies your cloud, CI/CD, monitoring, and SRE into a single, intuitive platform with agent chat interface. Mission Empower DevOps teams to accelerate delivery and reduce risk by automating repetitive tasks, surfacing real-time insights, and preserving institutional knowledge - all through a conversational interface that grows with your organization. Vision Become the indispensable AI DevOps assistant for every engineering team - eliminating manual toil, breaking down silos between DevOps and SRE, and providing a unified "single pane of glass" for infrastructure, deployments, incidents, and security so businesses can innovate without limits. Qualifications Front-End Development and Responsive Web Design skills Internship Experience in web development & React Understanding of Software Development principles Strong problem-solving and analytical skills Ability to work well in a team environment Bachelor's degree in Computer Science or relevant field Brownie Points Exposure to Typescript Knowledge of Redux/Redux Toolkit
Posted 3 weeks ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Velotio Technologies is a product engineering company working with innovative startups and enterprises. We are a certified Great Place to Work® and recognized as one of the best companies to work for in India. We have provided full-stack product development for 110+ startups across the globe building products in the cloud-native, data engineering, B2B SaaS, IoT & Machine Learning space. Our team of 400+ elite software engineers solves hard technical problems while transforming customer ideas into successful products. We are seeking a skilled DevOps Engineer - OpenShift & Infrastructure to join our growing team. The ideal candidate will have strong experience in managing Red Hat OpenShift and Kubernetes-based environments, along with expertise in infrastructure automation and Linux system administration. You will play a critical role in ensuring the stability, scalability, and security of our containerized platform and underlying infrastructure. Responsibilities Manage and administer Red Hat OpenShift 4.x and Kubernetes clusters in production environments. Automate infrastructure provisioning using Terraform, Ansible, and Helm charts. Implement and manage GitOps practices using tools like ArgoCD, FluxCD, or Tekton. Deploy and manage containerized applications using Helm, Operators, and Custom Resource Definitions (CRDs). Monitor and troubleshoot system performance using tools like Prometheus, Grafana, and OpenShift Logging. Configure and manage CI/CD pipelines for seamless deployments. Ensure compliance with security best practices (RBAC, SCCs, PSPs). Collaborate with development and operations teams to implement scalable, secure, and reliable infrastructure solutions. Perform system upgrades, patching, and routine maintenance across platforms. Provide technical support and resolve issues related to OpenShift and infrastructure services. Desired Skills & Experience: 4+ years of experience managing OpenShift and Kubernetes in production. Strong background in Linux system administration (RHEL, CentOS, Ubuntu). Experience with container runtimes: Docker, Podman, CRI-O. Hands-on experience with Infrastructure as Code (IaC) tools: Terraform, Ansible, Helm. Proficiency in scripting with Bash, Python, or PowerShell. Familiarity with Kubernetes networking concepts: Ingress, CNI plugins, Service Mesh (Istio/Linkerd). Experience with CI/CD tools like Jenkins, GitLab CI, Tekton, etc. Strong understanding of RBAC, Security Context Constraints (SCCs), and container security principles. Bonus points if you have: Experience with cloud environments (AWS, Azure, or GCP). Familiarity with OpenShift Virtualization (CNV) or Advanced Cluster Management (ACM). Exposure to storage integration in container platforms (Ceph, GlusterFS, EBS). Knowledge of Service Mesh architectures. Benefits We have an autonomous and empowered work culture encouraging individuals to take ownership and grow quickly Flat hierarchy with fast decision making and a startup-oriented "get things done" culture A strong, fun & positive environment with regular celebrations of our success. We pride ourselves in creating an inclusive, diverse & authentic environment We want to hire smart, curious, and ambitious folks, so please reach out even if you do not have all of the requisite experience. We are looking for engineers with the potential to grow! At Velotio, we embrace diversity. Inclusion is a priority for us, and we are eager to foster an environment where everyone feels valued.
We welcome applications regardless of ethnicity or cultural background, age, gender, nationality, religion, disability or sexual orientation.
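For a sense of the cluster-health work described above, here is a small, illustrative sketch using the official Kubernetes Python client (it also works against OpenShift, which is Kubernetes-based). A local kubeconfig is assumed; nothing here is specific to any one environment.

    from kubernetes import client, config

    # Assumes a local kubeconfig with access to the cluster.
    config.load_kube_config()
    v1 = client.CoreV1Api()

    # Report pods that are not Running or Succeeded -- a starting point
    # for the monitoring and troubleshooting duties listed above.
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        phase = pod.status.phase
        if phase not in ("Running", "Succeeded"):
            print(f"{pod.metadata.namespace}/{pod.metadata.name}: {phase}")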
Posted 3 weeks ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role: OpenShift MSA Required Technical Skill Set: OpenShift MSA Desired Experience Range: 5 - 7 yrs Notice Period: Immediate to 90 days only Location of Requirement: Chennai/Bangalore/Hyderabad/Bhubaneshwar/Kolkata We are currently planning to do a virtual interview. Job Description: Must-Have: We are looking for an experienced OpenShift Developer with a strong background in Microservices Architecture (MSA) to design, develop, deploy, and manage containerized applications. The ideal candidate will have in-depth knowledge of Kubernetes/OpenShift platforms and hands-on experience building and deploying microservices in enterprise environments. Hands-on experience with Red Hat OpenShift (v4.x preferred) and Kubernetes. Strong knowledge of microservices design patterns, RESTful APIs, and container orchestration. Proficient in at least one backend language (Java, Go, Node.js, or Python). Experience with Docker, Helm charts, and Kubernetes Operators. Familiarity with GitOps, CI/CD pipelines (Jenkins, Tekton), and container registries. Good-to-Have: 5–8 years of experience in IT Good aptitude, logical reasoning, and analytical thinking skills. Good written and verbal communication skills.
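As a rough illustration of the kind of containerized microservice this role builds, here is a minimal Python (FastAPI) service with a health endpoint suitable for OpenShift liveness/readiness probes. The routes and payloads are hypothetical, and Python is just one of the backend languages the posting accepts.

    from fastapi import FastAPI

    app = FastAPI()

    @app.get("/healthz")
    def healthz():
        # Probe endpoint for OpenShift liveness/readiness checks.
        return {"status": "ok"}

    @app.get("/orders/{order_id}")
    def get_order(order_id: int):
        # Placeholder lookup; a real service would query a datastore.
        return {"order_id": order_id, "status": "created"}

Packaged into a container image and run via uvicorn, such a service would typically be deployed through a Helm chart or Operator as described above.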
Posted 3 weeks ago
5.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Experience: 5+ Years Work Mode: Work from office only Job Description: 1. AWS Cloud Infrastructure Design, deploy, and manage scalable, secure, and highly available systems on AWS. Optimize cloud costs, enforce tagging, and implement security best practices (IAM, VPC, GuardDuty, etc.). Automate infrastructure provisioning using Terraform or AWS CDK. Ensure backup, disaster recovery, and high availability (HA) strategies are in place. 2. Kubernetes (EKS preferred) Manage and scale Kubernetes clusters (preferably Amazon EKS). Implement CI/CD pipelines with GitOps (e.g., ArgoCD or Flux) or traditional tools (e.g., Jenkins, GitLab). Enforce RBAC policies, namespace isolation, and pod security policies. Monitor cluster health, optimize pod scheduling, autoscaling, and resource limits/requests. 3. Monitoring and Observability (Datadog) Build and maintain Datadog dashboards for real-time visibility across systems and services. Set up alerting policies, SLOs, SLIs, and incident response workflows. Integrate Datadog with AWS, Kubernetes, and applications for full-stack observability. Conduct post-incident reviews using Datadog analytics to reduce MTTR. 4. Automation and DevOps Automate manual processes (e.g., server setup, patching, scaling) using Python, Bash, or Ansible. Maintain and improve CI/CD pipelines (Jenkins) for faster and more reliable deployments. Drive Infrastructure-as-Code (IaC) practices using Terraform to manage cloud resources. Promote GitOps and version-controlled deployments. 5. Linux Systems Administration Administer Linux servers (Ubuntu, RHEL, Amazon Linux) for stability and performance. Harden OS security, configure SELinux, firewalls, and ensure timely patching. Troubleshoot system-level issues: disk, memory, network, and processes. Optimize system performance using tools like top, htop, iotop, netstat, etc.
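To ground the tagging-enforcement duty under AWS Cloud Infrastructure, here is an illustrative Python (boto3) sketch that reports EC2 instances missing mandatory cost-allocation tags. The required tag set is a hypothetical policy; AWS credentials and region are assumed to be configured.

    import boto3

    REQUIRED_TAGS = {"owner", "cost-center"}  # hypothetical tagging policy

    ec2 = boto3.client("ec2")

    # Walk all instances and report any that violate the tag policy --
    # a building block for the cost and tagging enforcement described above.
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                missing = REQUIRED_TAGS - tags
                if missing:
                    print(instance["InstanceId"], "missing tags:", sorted(missing))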
Posted 3 weeks ago
3.0 - 5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Rystad Energy is a leading global independent research and energy intelligence company dedicated to helping clients navigate the future of energy. By providing high-quality data and thought leadership, our international team empowers businesses, governments and organizations to make well-informed decisions. Our extensive portfolio of products and solutions covers all aspects of global energy fundamentals, spanning every corner of the oil and gas industry, renewables, clean technologies, supply chain and power markets. Headquartered in Oslo, Norway, with an expansive global network, our data, analysis, advisory and education services provide clients a competitive edge in the market. For more information, visit www.rystadenergy.com. Role We are looking for a DevOps Engineer to join our growing platform team in Bangalore. In this role, you'll support the development and operation of CI/CD pipelines, help maintain and improve our cloud infrastructure, and work on tools that enhance our development and deployment processes. If you're eager to grow your expertise in cloud technologies, automation, and infrastructure management while working in a collaborative global environment, we'd be excited to hear from you. Requirements Required Skills Strong written and spoken English Familiarity with Infrastructure as Code (e.g., Terraform, Ansible) Hands-on experience with at least one major cloud platform (Azure preferred) Understanding of containerization (Docker) and orchestration (Kubernetes basics) Basic scripting abilities (Bash, PowerShell, or Python) Exposure to CI/CD tools and version control systems (e.g., Azure DevOps, Bitbucket) Enthusiasm for automation, troubleshooting, and process improvement Ability to work both independently and as part of a collaborative team Preferred Skills Hands-on experience with Azure or other cloud environments Exposure to Kubernetes, Helm, and GitOps tools like ArgoCD or Flux Experience with monitoring tools (Grafana, Prometheus) Understanding of networking, system performance, and security best practices Basic knowledge of databases (MSSQL, PostgreSQL, or similar) Willingness to learn and take ownership of tasks in a dynamic environment Responsibilities Assist in building and maintaining CI/CD pipelines for smooth and automated deployments Contribute to the development and enhancement of our Internal Developer Platform (IDP) Support and manage cloud-based and on-premise infrastructure components Troubleshoot issues related to infrastructure, deployments, and application performance Participate in monitoring, alerting, and incident response processes Collaborate with senior engineers and development teams to adopt DevOps and SRE best practices Document infrastructure processes, playbooks, and configuration changes Stay current with new tools and technologies in the DevOps ecosystem Qualifications Bachelor's degree in Computer Science, Engineering, or a related technical field (preferred but not mandatory) 3-5 years of total professional experience, with 2-3 years in a DevOps or related infrastructure-focused role Benefits Lean, flat, non-hierarchical structure that will enable you to impact products and workflows A diverse, inclusive, meritocratic culture Community driven with the desire to create and share to have an impact globally Keen to challenge your skills and deepen existing expertise while learning new ones A global and quickly expanding international business culture with more than 80 nationalities Inclusive and supportive working environment with a focus on a culture of sharing Opportunity to join a globally leading energy knowledge house Flexible work environment
Posted 3 weeks ago
5.0 years
0 Lacs
Delhi, Delhi
On-site
Customer Solutions New Delhi, India R03312 Description We’re not just building better tech. We’re rewriting how data moves and what the world can do with it. With Confluent, data doesn’t sit still. Our platform puts information in motion, streaming in near real-time so companies can react faster, build smarter, and deliver experiences as dynamic as the world around them. It takes a certain kind of person to join this team. Those who ask hard questions, give honest feedback, and show up for each other. No egos, no solo acts. Just smart, curious humans pushing toward something bigger, together. One Confluent. One Team. One Data Streaming Platform. About the Role: Solutions Engineers at Confluent drive not only the early-stage evaluation within the sales process, but also play a crucial role in enabling ongoing value realization for customers, all while helping them move up the adoption maturity curve. In this role you’ll partner with Account Executives to be the key technical advisor in service of the customer. You’ll be instrumental in surfacing the customers’ stated or implicit business needs, and coming up with technical designs to best meet these needs. You may find yourself at times facilitating art-of-the-possible discussions and storytelling to inspire customers in adopting new patterns with confidence, and at other times driving creative solutioning to help get past difficult technical roadblocks. Overall, we look upon Solutions Engineers to be a key cog within the Customer Success Team that helps foster an environment of sustained success for the customer and incremental adoption of Confluent’s technology. What You Will Do: Help advance new & innovative data streaming use-cases from conception to go-live Execute on and lead technical proof of concepts Conduct discovery & whiteboard sessions to develop new use-cases Provide thought leadership by delivering technical talks and workshops Guide customers with hands-on help and best practice to drive operational maturity of their Confluent deployment Analyze customer consumption trends and identify optimization opportunities Work closely with product and engineering teams, and serve as a key product advocate across the customer, partner and industry ecosystem Forge strong relationships with key customer stakeholders and serve as a dependable partner for them What You Will Bring: 5+ years of Sales/Pre-Sales/Solutions Engineering or similar customer-facing experience in the software sales or implementation space Experience with event-driven architecture, data integration & processing techniques, database & data warehouse technologies, or related fields First-hand exposure to cloud architecture, migrations, deployment & application development Experience with DevOps/Automation, GitOps or Kubernetes Ability to read & write Java, Python or SQL Clear, consistent demonstration of self-starter behavior, a desire to learn new things and tackle hard technical problems Exceptional presentation and communications capabilities. Confidence presenting to a highly skilled and experienced audience, ranging from developers to enterprise architects and up to C-level executives What Gives You an Edge: Technical certifications - cloud developer/architect, data engineering & integration Familiarity with solution or value selling A challenger mindset and an ability to positively influence peoples’ opinions Ready to build what's next? Let’s get in motion. Come As You Are Belonging isn’t a perk here. It’s the baseline.
We work across time zones and backgrounds, knowing the best ideas come from different perspectives. And we make space for everyone to lead, grow, and challenge what’s possible. We’re proud to be an equal opportunity workplace. Employment decisions are based on job-related criteria, without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, veteran status, or any other classification protected by law.
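For candidates gauging the "read & write Java, Python or SQL" bar, below is a minimal, purely illustrative Python sketch of producing to a Kafka topic with the confluent-kafka client. The broker address, topic, and payload are placeholders.

    from confluent_kafka import Producer

    # Assumes a reachable broker; host and topic are placeholders.
    producer = Producer({"bootstrap.servers": "localhost:9092"})

    def delivery_report(err, msg):
        # Called once per message to confirm delivery or surface errors.
        if err is not None:
            print(f"Delivery failed: {err}")
        else:
            print(f"Delivered to {msg.topic()} [partition {msg.partition()}]")

    producer.produce("orders", key="order-42", value=b'{"status": "created"}',
                     callback=delivery_report)
    producer.flush()  # block until outstanding messages are delivered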
Posted 3 weeks ago
10.0 years
0 Lacs
Delhi, India
Remote
We are seeking a highly experienced and customer-focused Presales Architect to join our Solution Engineering team. The ideal candidate will have a strong background in AWS IaaS, PaaS, and SaaS services, deep expertise in cloud architecture, and solid exposure to data platforms, including Amazon Redshift, AI/ML workloads, and modern data architectures. Familiarity with Azure and Google Cloud Platform (GCP) is a strong advantage. This role is a strategic blend of technical solutioning, customer engagement, and sales support, playing a critical role in the pre-sales cycle by understanding customer requirements, designing innovative solutions, and aligning them with the company’s service offerings. Key Responsibilities: Pre-Sales and Solutioning: Engage with enterprise customers to understand their technical requirements and business objectives. Architect end-to-end cloud solutions on AWS, covering compute, storage, networking, DevOps, and security. Develop compelling solution proposals, high-level designs, and reference architectures that address customer needs. Support RFI/RFP responses, create technical documentation, and deliver presentations and demos to technical and non-technical audiences. Collaborate with Sales, Delivery, and Product teams to ensure alignment of proposed solutions with client expectations. Conduct technical workshops, proof of concepts (PoCs), and technical validations. Technical Expertise: Deep hands-on knowledge and architecture experience with AWS services: IaaS: EC2, VPC, S3, EBS, ELB, Auto Scaling, etc. PaaS: RDS, Lambda, API Gateway, Fargate, DynamoDB, Aurora, Step Functions. SaaS & Security: AWS Organizations, IAM, AWS WAF, CloudTrail, GuardDuty. Understanding of multi-cloud strategies; exposure to Azure and GCP cloud services including hybrid architectures is a plus. Strong knowledge of DevOps practices and tools like Terraform, CloudFormation, Jenkins, GitOps, etc. Proficiency in architecting solutions that meet scalability, availability, and security requirements. Data Platform & AI/ML: Experience in designing data lakes, data pipelines, and analytics platforms on AWS. Hands-on expertise in Amazon Redshift, Athena, Glue, EMR, Kinesis, and S3-based architectures. Familiarity with AI/ML solutions using SageMaker, Amazon Comprehend, or other ML frameworks. Understanding of data governance, data cataloging, and security best practices for analytics workloads. Required Skills & Qualifications: Bachelor’s or Master’s degree in Computer Science, Information Technology, or related field. 10+ years of experience in IT, with 5+ years in cloud architecture and pre-sales roles. AWS Certified Solutions Architect – Professional (or equivalent certification) is preferred. Strong presentation skills and experience interacting with CXOs, Architects, and DevOps teams. Ability to translate technical concepts into business value propositions. Excellent communication, proposal writing, and stakeholder management skills. Nice to Have: Experience with Azure (e.g., Synapse, AKS, Azure ML) or GCP (e.g., BigQuery, Vertex AI). Familiarity with industry-specific solutions (e.g., fintech, healthcare, retail cloud transformations). Exposure to AI/ML MLOps pipelines and orchestration tools like Kubeflow, MLflow, or Airflow.
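As an illustrative companion to the data-platform expertise above, this Python (boto3) sketch starts an Athena query against an S3-backed table and polls for completion. The database, table, and output bucket are hypothetical, and production code would add backoff, timeouts, and error handling.

    import time

    import boto3

    athena = boto3.client("athena")

    # Kick off a query against a placeholder data lake table.
    response = athena.start_query_execution(
        QueryString="SELECT region, COUNT(*) FROM sales GROUP BY region",
        QueryExecutionContext={"Database": "analytics"},
        ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
    )
    execution_id = response["QueryExecutionId"]

    # Poll until the query reaches a terminal state.
    while True:
        status = athena.get_query_execution(QueryExecutionId=execution_id)
        state = status["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            print("Query finished with state:", state)
            break
        time.sleep(2)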
Posted 3 weeks ago
5.0 years
10 - 30 Lacs
Chennai, Tamil Nadu, India
On-site
About The Opportunity A fast-growing player in the Enterprise Cloud & Data Engineering sector, we modernise legacy platforms and build Azure-native analytics solutions for retail, BFSI, and manufacturing clients worldwide. Leveraging advanced DevSecOps, Data Lake, and Spark frameworks, our teams enable clients to unlock real-time insights, reduce TCO, and accelerate digital transformation. Role & Responsibilities Lead end-to-end architecture of Azure Data Lake and Delta pipelines, unifying ingestion, processing, and serving layers. Design and implement high-performance PySpark workloads on Azure Databricks, optimising cost and cluster efficiency. Build .NET Core microservices and APIs that orchestrate data movement, governance, and metadata management. Define best-practice blueprints covering CI/CD, GitOps, monitoring, and security for production data platforms. Collaborate with product owners, data scientists, and infra teams to translate business needs into scalable data solutions. Mentor engineers, conduct design reviews, and ensure adherence to cloud architecture standards. Skills & Qualifications Must-Have 5+ years data engineering with at least 3 years architecting Azure analytics platforms. Expertise in Azure Databricks, Data Lake Storage Gen2, Delta Lake, and Synapse. Hands-on PySpark coding, optimisation, and job orchestration. Proficient in .NET Core for back-end services and RESTful API design. Strong grasp of CI/CD, IaC with Terraform or Bicep, and security best practices. Preferred Experience with Azure Purview, Data Factory Mapping Data Flows, and event-driven architectures. Exposure to real-time analytics using Kafka or Event Hubs and Structured Streaming. Certification: Microsoft Azure Data Engineer or Solutions Architect. Benefits & Culture Highlights Greenfield projects on latest Azure tech, rapid career growth. On-site innovation labs, hackathons, and cross-functional guilds. Competitive salary, wellness cover, and learning sponsorships. Work Location On-site across major delivery centres in India. Skills: structured streaming, gitops, sql, ci/cd, event hubs, terraform, kafka, delta pipelines, azure databricks, data factory mapping data flows, .net core, security best practices, devops, restful api design, pyspark, azure data lake, monitoring, bicep, azure, azure purview, api design
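To illustrate the PySpark-on-Databricks work this role centres on, here is a minimal bronze-to-silver Delta pipeline sketch. The storage account, container paths, and column names are hypothetical, and it assumes a Delta-enabled Spark runtime such as Azure Databricks.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("ingest-to-delta").getOrCreate()

    # Bronze: raw JSON landing in ADLS Gen2 (placeholder path).
    raw = spark.read.json("abfss://landing@myaccount.dfs.core.windows.net/events/")

    # Silver: deduplicate, type the timestamp, and derive a partition column.
    clean = (
        raw.dropDuplicates(["event_id"])
           .withColumn("event_ts", F.to_timestamp("event_ts"))
           .filter(F.col("event_ts").isNotNull())
           .withColumn("event_date", F.to_date("event_ts"))
    )

    # Serve: write a partitioned Delta table for downstream consumers.
    (clean.write.format("delta")
          .mode("append")
          .partitionBy("event_date")
          .save("abfss://lake@myaccount.dfs.core.windows.net/silver/events/"))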
Posted 4 weeks ago
5.0 years
10 - 30 Lacs
Greater Kolkata Area
On-site
About The Opportunity A fast-growing player in the Enterprise Cloud & Data Engineering sector, we modernise legacy platforms and build Azure-native analytics solutions for retail, BFSI, and manufacturing clients worldwide. Leveraging advanced DevSecOps, Data Lake, and Spark frameworks, our teams enable clients to unlock real-time insights, reduce TCO, and accelerate digital transformation. Role & Responsibilities Lead end-to-end architecture of Azure Data Lake and Delta pipelines, unifying ingestion, processing, and serving layers. Design and implement high-performance PySpark workloads on Azure Databricks, optimising cost and cluster efficiency. Build .NET Core microservices and APIs that orchestrate data movement, governance, and metadata management. Define best-practice blueprints covering CI/CD, GitOps, monitoring, and security for production data platforms. Collaborate with product owners, data scientists, and infra teams to translate business needs into scalable data solutions. Mentor engineers, conduct design reviews, and ensure adherence to cloud architecture standards. Skills & Qualifications Must-Have 5+ years data engineering with at least 3 years architecting Azure analytics platforms. Expertise in Azure Databricks, Data Lake Storage Gen2, Delta Lake, and Synapse. Hands-on PySpark coding, optimisation, and job orchestration. Proficient in .NET Core for back-end services and RESTful API design. Strong grasp of CI/CD, IaC with Terraform or Bicep, and security best practices. Preferred Experience with Azure Purview, Data Factory Mapping Data Flows, and event-driven architectures. Exposure to real-time analytics using Kafka or Event Hubs and Structured Streaming. Certification: Microsoft Azure Data Engineer or Solutions Architect. Benefits & Culture Highlights Greenfield projects on latest Azure tech, rapid career growth. On-site innovation labs, hackathons, and cross-functional guilds. Competitive salary, wellness cover, and learning sponsorships. Work Location On-site across major delivery centres in India. Skills: structured streaming, gitops, sql, ci/cd, event hubs, terraform, kafka, delta pipelines, azure databricks, data factory mapping data flows, .net core, security best practices, devops, restful api design, pyspark, azure data lake, monitoring, bicep, azure, azure purview, api design
Posted 4 weeks ago
5.0 years
10 - 30 Lacs
Pune, Maharashtra, India
On-site
About The Opportunity A fast-growing player in the Enterprise Cloud & Data Engineering sector, we modernise legacy platforms and build Azure-native analytics solutions for retail, BFSI, and manufacturing clients worldwide. Leveraging advanced DevSecOps, Data Lake, and Spark frameworks, our teams enable clients to unlock real-time insights, reduce TCO, and accelerate digital transformation. Role & Responsibilities Lead end-to-end architecture of Azure Data Lake and Delta pipelines, unifying ingestion, processing, and serving layers. Design and implement high-performance PySpark workloads on Azure Databricks, optimising cost and cluster efficiency. Build .NET Core microservices and APIs that orchestrate data movement, governance, and metadata management. Define best-practice blueprints covering CI/CD, GitOps, monitoring, and security for production data platforms. Collaborate with product owners, data scientists, and infra teams to translate business needs into scalable data solutions. Mentor engineers, conduct design reviews, and ensure adherence to cloud architecture standards. Skills & Qualifications Must-Have 5+ years data engineering with at least 3 years architecting Azure analytics platforms. Expertise in Azure Databricks, Data Lake Storage Gen2, Delta Lake, and Synapse. Hands-on PySpark coding, optimisation, and job orchestration. Proficient in .NET Core for back-end services and RESTful API design. Strong grasp of CI/CD, IaC with Terraform or Bicep, and security best practices. Preferred Experience with Azure Purview, Data Factory Mapping Data Flows, and event-driven architectures. Exposure to real-time analytics using Kafka or Event Hubs and Structured Streaming. Certification: Microsoft Azure Data Engineer or Solutions Architect. Benefits & Culture Highlights Greenfield projects on latest Azure tech, rapid career growth. On-site innovation labs, hackathons, and cross-functional guilds. Competitive salary, wellness cover, and learning sponsorships. Work Location On-site across major delivery centres in India. Skills: structured streaming, gitops, sql, ci/cd, event hubs, terraform, kafka, delta pipelines, azure databricks, data factory mapping data flows, .net core, security best practices, devops, restful api design, pyspark, azure data lake, monitoring, bicep, azure, azure purview, api design
Posted 4 weeks ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Overview Job Purpose At ICE, we never rest. We are on a mission as a team. We are problem solvers and partners, always starting with our customers to solve their challenges and create opportunities. Our start-up roots keep us nimble, flexible, and moving fast. We take ownership and make decisions. We all work for one company and work together to drive growth across the business. We engage in robust debates to find the best path, and then we move forward as one team. We take pride in what we do, acting with integrity and passion, so that our customers can perform better. We are experts and enthusiasts - combining ever-expanding knowledge with leading technology to consistently deliver results, solutions and opportunities for our customers and stakeholders. Every day we work toward transforming global markets. The Manager, Systems Engineering is responsible for guiding and supporting a team of engineers in creating and maintaining the core automation services within ICE’s infrastructure. This role involves setting clear objectives, providing resources, and ensuring that the team adheres to high standards for automated tooling. A successful manager will oversee the software development lifecycle, establish best practices, and foster a collaborative environment where engineers can excel. Key responsibilities include mentoring team members, providing constructive feedback, and facilitating professional development. Additionally, the manager will drive root cause analysis discussions, recommend and approve automation tools, and actively participate in strategic technical decisions to align team efforts with business goals. Responsibilities Assist in the design, planning and implementation of solutions using the Python programming language Provide education and mentorship to team members, operations staff, and other departments on best practices and automation methodologies Delegate, assign, and ensure work is completed by subordinate staff Assist in the design, planning and implementation of server and automation solutions Tune and design systems infrastructure for maximum available performance Automate manual tasks through scripting Oversee the development and maintenance of automation scripts in Ansible and Python, ensuring the delivery of reusable, testable, and efficient code. Collaborate with cross-functional teams to facilitate the development and upkeep of RESTful APIs while ensuring adherence to best practices. Mentor and coach junior developers to enhance their skills and knowledge. Facilitate productive code reviews, design reviews, and architecture discussions. Oversee the analysis, programming, and modification of software enhancement requests. Identify and resolve complex software development challenges promptly, providing technical guidance to the team. Collaborate with internal teams to understand business and functional requirements to ensure automation processes meet organizational needs. Guide the team in using various architectures, tools, and frameworks to automate internal processes. Actively participate in technical analysis, problem resolution, and proposing solutions. Coordinate efforts across developers, operations staff, and release engineers, fostering a service-oriented team environment. Manage on-call rotations to ensure efficient after-hours support. Lead the team to ensure robust support for production operations in a 24/7 environment, driving technical excellence to meet organizational goals.
Knowledge And Experience 5+ years of experience with engineering Operating Systems as well as Software Development Engineering, Tools Automation, or a similar role in platform delivery Experience as a people manager or in a team lead role with delegation duties Degree in an engineering discipline or equivalent experience in Systems Engineering / Development Solid experience coding with any one or a combination of PowerShell, Python, Ruby, etc. Fundamental understanding of SDLC processes and tools (Git, Puppet, etc.) Experience with automation/configuration management using Puppet, Chef, Ansible, or equivalent Working knowledge of multi-tiered, highly available, and resilient application design Working knowledge of horizontal and vertical scaling for performance and high availability Top-tier critical thinking, analytical, and problem-solving skills Ability to work in a service-oriented team environment Strong understanding of project management, organization, and time management Customer focus and dedication to the best possible user experience Ability to communicate effectively with technical or business resources Understanding of Continuous Integration and Delivery concepts Fluent speaking, reading, and writing in English Desired Knowledge And Experience Experience working in a GitOps organization with a drive to automate everything Working knowledge of the creation, support and deployment of Docker Containers Working knowledge of the setup and configuration of Kubernetes 2+ years of experience in Ansible code development
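Since the role leans on Ansible and Python together, here is an illustrative wrapper that drives ansible-playbook from a Python automation script. The playbook, inventory, and variables are hypothetical placeholders; Ansible must be installed on the host.

    import subprocess

    def run_playbook(playbook, inventory, extra_vars=None):
        # Build the ansible-playbook command, passing extra vars with -e.
        cmd = ["ansible-playbook", "-i", inventory, playbook]
        if extra_vars:
            for key, value in extra_vars.items():
                cmd += ["-e", f"{key}={value}"]
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            print(result.stderr)  # surface failures to the caller or logs
        return result.returncode

    if __name__ == "__main__":
        run_playbook("patch_servers.yml", "inventory/prod.ini",
                     {"reboot_allowed": "false"})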
Posted 4 weeks ago