4.0 - 7.0 years
10 - 15 Lacs
Bengaluru
Work from Office
Join our Team

About this opportunity:
We are seeking a highly motivated and detail-oriented experienced Cloud Engineer to join our dynamic software DevOps team. You should be a curious professional, eager to grow, and an excellent team player! As a Cloud Engineer, you will work closely with our r-Apps DevOps team to gain exposure to cloud-native infrastructure, automation, and optimization tasks. You will support the implementation and maintenance of CI/CD, deployments, Helm, and security aspects of cloud-native applications/environments, assist with troubleshooting, and contribute to the SaaS/AaaS-based microservice solutions development team.

What you will do:
- AWS Cloud: Work with AWS Cloud pipelines and AWS CloudFormation (IaC).
- Kubernetes & Helm: Administer Kubernetes and package/manage cloud-native applications using Helm charts.
- CI/CD: Design and implement CI/CD using Jenkins and Spinnaker.
- Automation & Scripting: Develop and maintain scripts to automate routine tasks using technologies such as Ansible, Python, and shell scripting.
- Monitoring & Optimization: Monitor microservice resources for performance and availability; assist in optimizing environments to enhance performance.
- Troubleshooting: Troubleshoot and resolve issues within AaaS applications, focusing on resource failures, performance degradation, and connectivity disruptions.
- Documentation: Assist in documenting DevOps infrastructure setups, processes, and workflows, and help maintain knowledge base articles.
- Learning & Development: Continuously expand your knowledge of cloud technologies and cloud architecture; stay updated on the latest trends in cloud computing.

You will bring:
- Bachelor's/Master's degree in Computer Science, Software Engineering, or a related field.
- Experience with cloud platforms such as AWS.
- Proficiency in containerization and orchestration using Docker and Kubernetes.
- Proficiency in using Helm for managing Kubernetes applications, including creating and deploying Helm charts.
- Experience with CI/CD tools such as Jenkins, Spinnaker, and GitLab.
- Experience with monitoring tools such as Prometheus and Grafana.
- Experience implementing and managing security tools for CI/CD pipelines, cloud environments, and containerized applications.
- Experience with scripting and automation (e.g., Python, Bash, Ansible).
- Strong problem-solving skills and the ability to troubleshoot cloud-native infrastructure.
- Good communication skills and the ability to work effectively in a team environment.
- Eagerness to learn new technologies and contribute to cloud-native applications.
- Understanding of the software development lifecycle (SDLC) and agile methodologies.

Preferred qualifications:
- Certifications / hands-on experience with AWS.
- Exposure to AI services for DevOps; predictive analysis on monitoring of AaaS applications.
- Experience designing and enforcing security best practices across the entire DevOps lifecycle.
- Familiarity with industry security standards and frameworks (e.g., CIS, NIST, OWASP).

Why join Ericsson?
At Ericsson, you'll have an outstanding opportunity: the chance to use your skills and imagination to push the boundaries of what's possible, and to build never-before-seen solutions to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply?
Click Here to find all you need to know about what our typical hiring process looks like.

Encouraging a diverse and inclusive organization is core to our values at Ericsson; that's why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer.

Primary country and city: India (IN) || Bangalore
Req ID: 767284
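The Helm-based packaging and CI/CD duties in this posting typically reduce to an idempotent `helm upgrade --install` step in a pipeline stage. Below is a minimal sketch, in Python, of how such a command might be assembled; the release name, chart path, namespace, and values are hypothetical examples, not Ericsson's actual setup.

```python
# Sketch: assembling a Helm deploy command for a CI/CD stage.
# Release, chart, and values below are hypothetical.

def helm_upgrade_cmd(release: str, chart: str, namespace: str,
                     values: dict) -> list:
    """Build an idempotent `helm upgrade --install` invocation."""
    cmd = ["helm", "upgrade", "--install", release, chart,
           "--namespace", namespace, "--create-namespace", "--atomic"]
    # --set overrides are emitted in sorted order so the command is stable
    for key, val in sorted(values.items()):
        cmd += ["--set", f"{key}={val}"]
    return cmd

cmd = helm_upgrade_cmd("r-apps", "./charts/r-apps", "prod",
                       {"image.tag": "1.4.2", "replicaCount": "3"})
print(" ".join(cmd))
```

In a real Jenkins or Spinnaker stage this list would be passed to a process runner; `--atomic` makes Helm roll back automatically if the release fails, which is what makes the step safe to retry.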
Posted 1 week ago
3.0 - 5.0 years
0 - 0 Lacs
Pune
Work from Office
We are looking for a Senior Linux Support Specialist to take full ownership of hybrid infrastructure environments hosted across AWS, Azure, and on-premises setups. The ideal candidate will play a critical role in ensuring system stability, security, and performance while driving automation and standardization across hundreds of Linux servers. This is a hands-on technical role requiring deep expertise in Linux, security hardening (CIS benchmarks), vulnerability remediation, and automation of infrastructure tasks.

Key Responsibilities:

Linux Server Management & Operations
- Manage, monitor, and support large-scale Linux environments (RHEL, CentOS, Ubuntu, etc.)
- Perform OS upgrades, patching, and package management across hundreds of servers
- Troubleshoot and resolve advanced Linux system issues (performance, kernel, services, etc.)

Security Hardening & Compliance
- Implement and maintain CIS hardening standards across all Linux servers
- Remediate VAPT (Vulnerability Assessment and Penetration Testing) and CIS benchmark findings
- Develop automation scripts/tools to roll out security configurations across the fleet
- Work closely with the security team to ensure system compliance with industry best practices

Automation & Configuration Management
- Automate OS hardening, patch management, and system provisioning using tools like Ansible, Bash, Python, or Terraform
- Create and maintain playbooks and scripts for repeatable tasks
- Streamline deployments and remediate configuration drift across cloud and on-prem environments

Cloud & On-Premise Support
- Support hybrid environments on AWS, Azure, and on-prem
- Assist in provisioning, scaling, and securing cloud-based Linux workloads
- Monitor platform uptime, availability, and performance metrics

Cost & Resource Optimization
- Collaborate with DevOps/cloud teams to optimize cloud usage and reduce infrastructure costs
- Implement monitoring and alerting to proactively identify performance or cost anomalies

Skills & Qualifications:

Must-Have Skills:
- 3+ years of hands-on experience with Linux system administration
- Deep understanding of CIS benchmarks and security hardening techniques
- Strong scripting skills (Bash, Python, etc.)
- Proven experience with Ansible or similar configuration management tools
- Solid knowledge of AWS and Azure Linux instances and best practices
- Experience in managing vulnerability remediation and patch management
- Familiarity with VAPT assessments, security tools, and remediation workflows

Good to Have:
- Experience with container technologies (Docker, Kubernetes)
- Infrastructure as Code (Terraform, CloudFormation)
- Monitoring tools (Prometheus, Nagios, CloudWatch, etc.)
- Certification in RHCE, AWS SysOps, Azure Administrator, or related areas
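The CIS-hardening and remediation work described above begins with auditing configuration files against expected values, fleet-wide. A minimal sketch, assuming an sshd_config-style file and an illustrative (not exhaustive) subset of expected directives:

```python
# Sketch: a minimal CIS-style audit of sshd_config directives.
# The expected values below are illustrative, not the full benchmark.

EXPECTED = {"permitrootlogin": "no", "passwordauthentication": "no",
            "x11forwarding": "no"}

def audit_sshd(config_text: str) -> list:
    """Return directives that deviate from (or omit) the expected values."""
    actual = {}
    for line in config_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blanks
        if not line:
            continue
        key, _, value = line.partition(" ")
        actual[key.lower()] = value.strip().lower()
    return [k for k, want in EXPECTED.items() if actual.get(k) != want]

sample = "PermitRootLogin no\nPasswordAuthentication yes\n# X11Forwarding no\n"
print(audit_sshd(sample))  # ['passwordauthentication', 'x11forwarding']
```

In practice this kind of check would run under Ansible or a compliance scanner across the fleet, with findings feeding the remediation backlog.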
Posted 1 week ago
3.0 - 6.0 years
7 - 11 Lacs
Thiruvananthapuram
Work from Office
The ideal candidate should be a highly skilled Production Support Engineer with at least 3 years of relevant experience and a strong focus on ETL and data warehousing. The candidate should have a good understanding of DevOps practices and ITIL concepts.

You will:
- Monitor the daily data pipeline runs and ensure timely data loads by proactively identifying and troubleshooting issues.
- Perform root cause analysis (RCA) to identify the underlying causes of issues in the data warehouse and ETL pipelines; document findings and implement corrective actions to prevent recurrence.
- Collaborate with various teams, including data engineers, DevOps engineers, architects, and business analysts, to resolve issues and implement improvements.
- Communicate effectively with stakeholders to provide updates on issue resolution and system performance.
- Maintain detailed documentation of data warehouse configurations, ETL processes, operational procedures, and issue resolutions.
- Participate in an on-call rotation and operate effectively in a global 24/7 environment.
- Ensure data integrity and accuracy and take action to resolve data discrepancies.
- Generate regular reports on system performance, issues, and resolutions.

Your Skills:
- Strong experience with Oracle databases and AWS cloud services.
- Proficiency in SQL and PL/SQL.
- Familiarity with monitoring tools such as Dynatrace and CloudWatch.
- Familiarity with other AWS services such as account creation, VPC, CloudFront, IAM, ALB, EC2, RDS, Route 53, Auto Scaling, Lambda, etc.
- Experience with ETL tools and processes (e.g., Informatica, Talend, AWS Glue).
- Familiarity with scripting languages (e.g., Python, shell scripting).
- Familiarity with DevOps tools and practices (e.g., GitHub, Jenkins, Docker, Kubernetes).
- Strong analytical and problem-solving abilities.
- Experience in performing root cause analysis and implementing corrective actions.
- Ability to work independently as well as in a collaborative team environment.
- Excellent written and verbal communication skills.
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Minimum of 3 years of experience in a support engineer role, preferably in data warehousing and ETL environments.
- Certification in AWS, Oracle, or relevant DevOps tools is a plus.

Your benefits:
We offer a hybrid work model which recognizes the value of striking a balance between in-person collaboration and remote working, including up to 25 days per year working from abroad. We believe in rewarding performance, and our compensation and benefits package includes a company bonus scheme, pension, an employee shares program, and multiple employee discounts (details vary by location). From career development and digital learning programs to international career mobility, we offer lifelong learning for our employees worldwide and an environment where innovation, delivery, and empowerment are fostered. Flexible working and health and wellbeing offers (including healthcare and parental leave benefits) support our people in balancing family and career and help them return from career breaks with experience that nothing else can teach.

About Allianz Technology
Allianz Technology is the global IT service provider for Allianz and delivers IT solutions that drive the digitalization of the Group. With more than 13,000 employees located in 22 countries around the globe, Allianz Technology works together with other Allianz entities in pioneering the digitalization of the financial services industry. We oversee the full digitalization spectrum, from one of the industry's largest IT infrastructure projects (including data centers, networking, and security) to application platforms that span from workplace services to digital interaction. In short, we deliver full-scale, end-to-end IT solutions for Allianz in the digital age.

D&I statement
Allianz Technology is proud to be an equal opportunity employer encouraging diversity in the working environment. We are interested in your strengths and experience. We welcome all applications from all people regardless of gender identity and/or expression, sexual orientation, race or ethnicity, age, or nationality.

Allianz Group is one of the most trusted insurance and asset management companies in the world. Caring for our employees, their ambitions, dreams, and challenges is what makes us a unique employer. Together we can build an environment where everyone feels empowered and has the confidence to explore, to grow, and to shape a better future for our customers and the world around us. We at Allianz believe in a diverse and inclusive workforce and are proud to be an equal opportunity employer. We encourage you to bring your whole self to work, no matter where you are from, what you look like, who you love, or what you believe in. We therefore welcome applications regardless of ethnicity or cultural background, age, gender, nationality, religion, disability, or sexual orientation. Join us. Let's care for tomorrow.
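The first responsibility in this posting, monitoring daily pipeline runs for timely data loads, can be sketched as a small SLA check over load-completion timestamps; the table names and the 07:00 cutoff below are hypothetical, not from the posting.

```python
# Sketch: flagging data loads that missed their SLA, as a daily
# monitoring job might. Table names and the 07:00 cutoff are hypothetical.
from datetime import datetime, time
from typing import Optional

SLA_CUTOFF = time(7, 0)  # loads must finish by 07:00

def late_loads(completions: dict) -> list:
    """Return tables whose load is missing or finished after the cutoff.

    `completions` maps table name -> completion datetime, or None if
    the load never finished.
    """
    breaches = []
    for table, finished in completions.items():
        if finished is None or finished.time() > SLA_CUTOFF:
            breaches.append(table)
    return sorted(breaches)

runs = {"dw.sales_fact": datetime(2024, 5, 1, 6, 40),
        "dw.policy_dim": datetime(2024, 5, 1, 7, 25),
        "dw.claims_fact": None}
print(late_loads(runs))  # ['dw.claims_fact', 'dw.policy_dim']
```

A production version would pull these timestamps from the scheduler or a control table and raise an alert (e.g., via CloudWatch) instead of printing.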
Posted 1 week ago
5.0 - 10.0 years
6 - 10 Lacs
Bengaluru
Work from Office
As a Senior DevOps Engineer with a strong background in Azure, you will join the Data & AI Solutions - Engineering team in our Healthcare R&D business. Your expertise will enhance cloud-based platforms in our D&A landscape using AWS cloud services and Azure AD, supporting our R&D efforts in drug discovery and development. You will bridge software development, quality assurance, and IT operations, ensuring our platform is reliable, scalable, and automated. Your expertise will help accelerate deployment cycles and minimize downtime. Key responsibilities include deploying new releases, maintaining Azure AD for identity management, and collaborating with cross-functional teams to advocate for DevOps best practices and guide architectural decisions. Join a multicultural team working in agile methodologies with high autonomy. The role requires office presence at our Bangalore location.

Who You Are:
- University degree in Computer Science, Engineering, or a related field
- 5+ years of experience applying DevOps in solution development and delivery
- Proficiency in Azure DevOps, including project configurations, repositories, pipelines, and environments
- Proficiency in Azure AD, including app registrations and authentication flows (OBO, client credentials)
- Good understanding of AWS services and cloud system design
- Strong experience in observability practices, including logging, monitoring, and tracing using tools like Prometheus, Grafana, the ELK Stack, or AWS-native solutions
- Proficiency in Infrastructure as Code (IaC) using Terraform and AWS CloudFormation for automated and repeatable cloud infrastructure deployments
- Experience with configuration management tools such as Ansible for automating service provisioning, configuration, and management
- Knowledge of security best practices in DevOps, including secrets management, role-based access control (RBAC), and compliance frameworks
- Strong scripting and automation skills using Python for developing automations and integration workflows
- Willingness to work in a multinational environment and cross-functional teams distributed across the US, Europe (mostly Germany), and India
- Sense of accountability and ownership; fast learner
- Fluency in English and excellent communication skills for technical and non-technical stakeholders
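The observability requirement in this posting (logging that aggregators such as the ELK Stack can index) usually starts with structured log output. A minimal sketch emitting JSON log records with Python's standard `logging` module; the logger name and the chosen fields are illustrative.

```python
# Sketch: structured (JSON) logging so that log aggregators such as
# the ELK stack can index fields instead of parsing free text.
# Field names and logger name are illustrative.
import json
import logging

class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        # Emit a flat JSON object per record; real setups usually add
        # a timestamp and correlation IDs as well.
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log = logging.getLogger("deploy")
log.addHandler(handler)
log.setLevel(logging.INFO)
log.info("release %s promoted to %s", "1.4.2", "prod")
```

Each record then arrives at the aggregator as one parseable JSON line, so dashboards can filter on `level` or `logger` directly.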
Posted 1 week ago
3.0 - 8.0 years
3 - 8 Lacs
Tirupati
Work from Office
Job Summary:
We are seeking a proactive and technically capable Level 1 Support Engineer to join our team. You will be the first line of defence in ensuring smooth application operations, resolving technical issues, and coordinating effectively with internal teams and stakeholders. This role demands strong technical troubleshooting skills, particularly in cloud-based web applications.

Key Responsibilities:
- Monitor and triage incoming incidents, alerts, and tickets from various monitoring and support systems (e.g., CloudWatch, New Relic, Zendesk).
- Troubleshoot and resolve issues related to: PHP applications and frameworks; database systems (MySQL/PostgreSQL via RDS); AWS infrastructure (EC2, RDS, S3, Step Functions, batch jobs); Linux-based servers (Ubuntu/CentOS); FTP/SFTP-based file transfers.
- Work with APIs, tokens, and logs; provide basic support for API integrations.
- Clearly document issues, actions taken, and resolutions in the ticketing system.
- Collaborate with developers, QA, and product teams to escalate and resolve complex issues.
- Maintain excellent communication with stakeholders during incidents.

Required Skills and Experience:
- 3 years of experience with AWS, PHP, API integrations, and CloudWatch.
- Hands-on experience with PHP and Laravel debugging.
- Proficiency in Linux environments (Ubuntu/CentOS).
- Experience with AWS services.
- Experience using monitoring tools (CloudWatch, New Relic).
- Knowledge of RESTful APIs and common mobile backend integration issues.
- Experience with support ticketing systems (e.g., Jira, Zendesk, ServiceNow).
- Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent experience).
- Strong written and verbal communication.

Preferred Skills (Nice to Have):
- Basic scripting (Bash or Python).
- Understanding of CI/CD pipelines (e.g., Jenkins, GitHub Actions).
- Prior experience as an Application Support Engineer is a plus.

Why Join Us?
Be part of a collaborative and agile team driving cutting-edge AI and data engineering solutions. Work on impactful projects that make a difference across industries. Opportunities for professional growth and continuous learning.
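The triage responsibility described in this posting, classifying incoming alerts and log lines by severity before escalation, might look like this in miniature; the log format and sample lines are illustrative, not from any specific system.

```python
# Sketch: a first-pass triage of application log lines, grouping them
# by severity so an L1 engineer can prioritise. Log format is illustrative.
import re

LEVEL_RE = re.compile(r"\b(ERROR|WARN|INFO)\b")

def triage(log_lines: list) -> dict:
    """Bucket log lines by the first severity token found in each."""
    buckets = {"ERROR": [], "WARN": [], "INFO": []}
    for line in log_lines:
        m = LEVEL_RE.search(line)
        if m:
            buckets[m.group(1)].append(line)
    return buckets

logs = [
    "2024-05-01 03:12:09 ERROR rds: too many connections",
    "2024-05-01 03:12:11 INFO worker: batch job finished",
    "2024-05-01 03:13:02 WARN s3: slow multipart upload",
]
summary = triage(logs)
print(f"{len(summary['ERROR'])} error(s) need escalation")
```

A real monitoring stack (CloudWatch, New Relic) does this classification server-side, but the same idea applies when grepping raw application logs during an incident.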
Posted 1 week ago
7.0 - 12.0 years
20 - 35 Lacs
Gurugram
Work from Office
Qualification: B.Tech
Timings: 9 am to 6 pm; Mon & Fri (WFH), Tue/Wed/Thu (WFO)

Job Overview:
We are seeking an experienced Java Lead with over 7 years of hands-on experience in Java development, who will take ownership of designing and building scalable logging solutions. The ideal candidate should possess strong knowledge of partitioning, data sharding, and database management (both SQL and NoSQL) and should be well-versed in AWS cloud services. This is a critical role where you will lead a team to build reliable and efficient systems while ensuring high performance and scalability.

Key Responsibilities:
- Lead Java Development: Architect, design, and implement backend services using Java, ensuring high performance, scalability, and reliability.
- Logging Solutions: Build and maintain robust logging solutions that can handle large-scale data while ensuring efficient retrieval and storage.
- Database Expertise: Implement partitioning and data-sharding techniques and optimize the use of SQL (MySQL, PostgreSQL) and NoSQL databases (MongoDB, DynamoDB). Ensure database performance tuning, query optimization, and data integrity.
- Cloud Deployment: Utilize AWS cloud services such as EC2, RDS, S3, Lambda, and CloudWatch to design scalable, secure, and high-availability solutions. Manage cloud-based infrastructure and deployments to ensure seamless operations.
- Collaboration & Leadership: Lead and mentor a team of engineers, providing technical guidance and enforcing best practices in coding, performance optimization, and design. Collaborate with cross-functional teams including product management, DevOps, and QA to ensure seamless integration and deployment of features.
- Performance Monitoring: Implement solutions for monitoring and ensuring the health of the system in production environments.
- Innovation & Optimization: Continuously improve system architecture to enhance performance, scalability, and reliability.

Required Skills & Qualifications:
- Education: Bachelor's or Master's degree in Computer Science, Information Technology, or related fields.
- Experience: 7+ years of hands-on experience in Java (J2EE/Spring/Hibernate) development.
- Database Skills: Strong experience with both SQL (MySQL, PostgreSQL) and NoSQL databases (MongoDB, Cassandra, DynamoDB). Proficiency in partitioning and data sharding.
- AWS Expertise: Deep understanding of AWS cloud services including EC2, S3, RDS, CloudWatch, and Lambda. Hands-on experience in deploying and managing applications on AWS.
- Logging and Monitoring: Experience in building and managing large-scale logging solutions (e.g., ELK stack, CloudWatch Logs).
- Leadership: Proven track record of leading teams, mentoring junior engineers, and handling large-scale, complex projects.
- Problem-Solving: Strong analytical and problem-solving skills, with the ability to debug and troubleshoot in large, complex systems.
- Soft Skills: Excellent communication, leadership, and teamwork skills. Ability to work in a fast-paced, dynamic environment.

Preferred Qualifications:
- Experience with containerization technologies like Docker and orchestration tools like Kubernetes.
- Familiarity with microservices architecture and event-driven systems.
- Knowledge of CI/CD pipelines and DevOps practices.
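The partitioning and data-sharding skill this role asks for rests on one core idea: deterministic key-to-shard routing. A minimal sketch follows (the shard count of 8 is hypothetical); note the use of a stable hash rather than Python's built-in `hash()`, which is salted per process and therefore unsuitable for routing.

```python
# Sketch: deterministic hash-based shard routing, the core idea behind
# data sharding for a high-volume logging store. NUM_SHARDS is hypothetical.
import hashlib

NUM_SHARDS = 8

def shard_for(key: str) -> int:
    """Map a partition key to a shard; stable across processes and
    restarts (unlike the built-in hash(), which is salted per run)."""
    digest = hashlib.sha256(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % NUM_SHARDS

# The same key always routes to the same shard, so reads can locate
# data written earlier without a lookup table.
assert shard_for("user-42") == shard_for("user-42")
print(shard_for("user-42"))
```

The trade-off with plain modulo routing is that changing `NUM_SHARDS` remaps most keys; production systems often layer consistent hashing on top to limit that churn.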
Posted 1 week ago
8.0 - 11.0 years
35 - 37 Lacs
Kolkata, Ahmedabad, Bengaluru
Work from Office
Dear Candidate,

We are hiring a Software Solutions Architect to design end-to-end solutions that meet business and technical requirements. Perfect for engineers who blend technical depth with system-level thinking.

Key Responsibilities:
- Define software architecture across web, mobile, and backend services
- Translate business needs into scalable technical designs
- Evaluate tools, frameworks, and technologies for best fit
- Collaborate with developers and stakeholders across the SDLC

Required Skills & Qualifications:
- Experience in designing distributed systems, APIs, and databases
- Strong background in software engineering with multiple languages (Java, .NET, Python, etc.)
- Familiarity with integration patterns, cloud services, and DevOps practices
- Bonus: knowledge of domain-driven design (DDD) and enterprise architecture frameworks

Soft Skills:
- Strong troubleshooting and problem-solving skills
- Ability to work independently and in a team
- Excellent communication and documentation skills

Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you.

Kandi Srinivasa Reddy
Delivery Manager
Integra Technologies
Posted 1 week ago
6.0 - 11.0 years
8 - 18 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Hi,

We have an excellent opportunity with a top MNC company for a permanent position.

Skills: Java, Microservices, JavaScript, Spring Boot; NoSQL databases (MongoDB)
Experience: 5+ years
Locations: Bangalore, Hyderabad, Pune, Kerala

Requirements:
- 5+ years of experience developing backend/API applications and software
- Expert working experience in Java, Microservices, JavaScript, and Spring Boot
- Experience with NoSQL databases (MongoDB)
- Experience with and a strong understanding of the entire Software Development Life Cycle (SDLC) and Agile (Scrum)
- Experience with web services (consuming or creating) using REST, MQTT, and WebSockets
- Good experience with microservices architecture, the pub-sub model, cloud architecture, QA automation, CI/CD pipelines, application security, load testing, and third-party integration and management (Docker, Kubernetes)
- Experience managing cloud infrastructure (resources and services) in AWS, Azure, and/or GCP
- Strong knowledge of SOA, object-oriented programming, design patterns, and multi-threaded application development
- Experience in reporting and analytics, queuing, and real-time streaming systems
- Experience developing, maintaining, and innovating large-scale web or mobile applications
- BE/B.Tech/MCA/M.Tech in computer programming, computer science, or a related field

If you are interested, kindly revert with your updated resume.

Thanks for applying.

Regards,
Jamuna
9916995347
s.jamuna@randstaddigital.com
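The pub-sub model mentioned in the requirements can be reduced to an in-process toy to make the contract visible: publishers emit to a topic without knowing who consumes it. The broker class and topic name below are illustrative sketches, not any specific messaging product.

```python
# Sketch: the pub-sub contract in miniature. Real systems (Kafka, SNS/SQS,
# MQTT brokers) add durability, ordering, and delivery guarantees on top.
from collections import defaultdict
from typing import Callable

class Broker:
    def __init__(self):
        self._subs = defaultdict(list)  # topic -> list of handlers

    def subscribe(self, topic: str, handler: Callable) -> None:
        self._subs[topic].append(handler)

    def publish(self, topic: str, event: dict) -> int:
        """Fan the event out to every subscriber; return delivery count."""
        for handler in self._subs[topic]:
            handler(event)
        return len(self._subs[topic])

broker = Broker()
received = []
broker.subscribe("orders.created", received.append)
delivered = broker.publish("orders.created", {"order_id": 101})
print(delivered, received)
```

The decoupling is the point: the publisher of `orders.created` needs no knowledge of its consumers, so new services can subscribe without any change to the producer.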
Posted 1 week ago
13.0 - 16.0 years
15 - 22 Lacs
Hyderabad, Bangalore Rural, Chennai
Work from Office
Company Name: Leading General Insurance company (Chennai)
Industry: General Insurance
Role: Data Platform and Ingestion Manager
Mail at manjeet.kaur@mounttalent.com; WhatsApp at 8384077438
Years of Experience: 13-18 years

Purpose
This role will own the data ingestion and storage processes and functions on the data platform. The role is to ensure timely delivery of data to the organization for modeling, reporting, and analysis. Finally, this role will be required to work closely with the Data Engineering Head to support the establishment of a new DataOps model, working towards a safe, well-controlled platform that will enable further progression of Chola's Data Strategy roadmap.

Key Responsibilities
Responsibilities are expected to develop and will include, but are not restricted to:
- Data Ingestion: Create and maintain all data ingestion pipelines from selected source systems to the data platform.
- Team Leadership: Lead a data engineering support and minor-enhancements team through both day-to-day data management (BAU) activities and change projects, defining and orchestrating tasks and deliverables in support of the Project Manager and acting as Scrum Master as appropriate.
- Controls & Standards: Work in collaboration with data engineers and architects and function as a senior contributor on the design, build, and management of the data platform, taking direct ownership of controls and standards to ensure that all new data requirements are met using the most appropriate controls and engineering practices.
- Business Stakeholder Engagement: Set up the necessary governance forums with program stakeholders (both business and IT) to define and execute against a platform technology roadmap, business change, and data product build plan.
- Team Management: Take ownership of the change/minor-enhancements and support team and play a leading role in team development, holding the team accountable for their commitments, removing roadblocks to their work, using organizational resources to improve capacity for project work, mentoring and developing team members, and leading recruitment activities when required.

Technical and Qualitative Requirements
- Experience in AWS-based data engineering with proven experience in orchestrating and governing data pipelines, integration, data modelling, and release management.
- Experience in data product development (MI/BI) and continuous improvement, with a proven record of successfully implementing data projects.
- Solid firsthand experience with AWS data systems and tools.
- Substantial experience leading DevOps sprint teams using Scrum; translating business requirements into deliverable user stories and defined sprints, with experience acting as a Scrum Master: leading daily stand-ups, coordinating development activity, and monitoring progress.
- Solid understanding of software development life cycle models and of introducing continuous-improvement development activities, managing the balance of new product and CI sprints.
- Balanced business background; a background in the insurance sector will be preferred.
- Strong people skills, including mentoring, coaching, collaborating, and team building.
- Strong analytical, planning, and organizational skills with an ability to manage competing demands.
- Excellent oral and written communication skills and experience interacting with both business and IT individuals at all levels, including the executive level.
Posted 1 week ago
3.0 - 8.0 years
7 - 17 Lacs
Noida
Work from Office
About The Job
Position Title: Big Data Admin
Department: Product & Engineering
Job Scope: India
Location: Noida, India
Reporting to: Director - DevOps
Work Setting: Onsite

Purpose of the Job
We are working on a Big Data stack in which we need to set up, optimize, and maintain multiple services across Big Data clusters. We have expertise in AWS, cloud, security, clusters, etc., but we now need a specialist (Big Data Admin) who can help us set up and maintain the Big Data environment in a better way and keep it live with a multi-cluster setup in the production environment.

Key Responsibilities
- Participate in requirements analysis.
- Write clean, scalable jobs.
- Collaborate with internal teams to produce solutions and architecture.
- Test and deploy applications and systems.
- Revise, update, refactor, and debug code.
- Improve existing software.
- Serve as an expert on the Big Data stack and provide technical support.

Qualifications: Experience, Skills & Education
- Graduate with 3+ years of experience in Big Data technology
- Expertise in Hadoop, YARN, Spark, Airflow, Cassandra, ELK, Redis, Grafana, etc.
- Expertise in cloud-managed Big Data stacks such as MWAA, EMR, EKS, etc.
- Good knowledge of Python and scripting
- Knowledge of optimization and performance tuning for the Big Data stack
- Troubleshooting skill is a must
- Good knowledge of Linux OS and troubleshooting

Desired Skills
- Big Data stack
- Linux OS and troubleshooting

Why Explore a Career:
Be a part of the revolution in healthcare marketing. Innovate with us to unite and transform the healthcare providers (HCPs) ecosystem for improved patient outcomes. The company has been recognized and certified two times in a row: Best Places to Work NJ 2023 and Great Place to Work 2023. If you are passionate about health technology and have a knack for turning complex concepts into compelling narratives, we invite you to apply for this exciting opportunity to contribute to the success of our innovative health tech company.

Below are the competitive benefits that will be provided to the selected candidates based on their location:
- Competitive Salary Package
- Generous Leave Policy
- Flexible Working Hours
- Performance-Based Bonuses
- Health Care Benefits
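The optimization and performance-tuning knowledge this posting asks for often begins with right-sizing partitions for a Spark job. A back-of-the-envelope sketch: the 128 MiB target mirrors a common default block/partition size, and the 50 GiB dataset is a hypothetical example.

```python
# Sketch: back-of-the-envelope partition sizing for a Spark job.
# The 128 MiB target follows a common default; the dataset is hypothetical.
import math

TARGET_PARTITION_BYTES = 128 * 1024 * 1024  # ~128 MiB per partition

def suggested_partitions(input_bytes: int, min_partitions: int = 2) -> int:
    """Enough partitions to keep each one near the target size."""
    return max(min_partitions, math.ceil(input_bytes / TARGET_PARTITION_BYTES))

dataset = 50 * 1024**3  # a 50 GiB daily extract
print(suggested_partitions(dataset))  # 50 GiB / 128 MiB = 400
```

Too few partitions under-uses the cluster and risks executor OOMs; too many drowns the job in task-scheduling overhead, which is why admins tune this per workload rather than relying on defaults.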
Posted 1 week ago
10.0 - 19.0 years
25 - 40 Lacs
Chennai
Work from Office
Role & Responsibilities
- A minimum of 10+ years of experience in the IT industry.
- A high sense of ownership and the zeal to build scalable applications.
- Collaborate with team members to brainstorm requirements and provide effective solutions.
- Document and demonstrate solutions by developing documentation and flowcharts.
- Prepare and maintain code for various .NET applications and resolve any defects in the system.
- Utilize established development tools, guidelines, and conventions including, but not limited to, .NET (.NET Core 6, .NET Framework), SQL Server, MVC, HTML, CSS, JavaScript (Vue, Angular), and C#.
- Perform design and development of web-based services and applications.
- Work closely with the quality assurance team to ensure delivery of high-quality and reliable web applications.
- Develop databases including queries, triggers, and stored procedures.
- Interact with customers to define project features and requirements.
- Perform code reviews and provide necessary corrections.
- Perform application design, development, and deployment based on industry best practices.
- Resolve application defects and issues in a timely manner.
- Prepare technical documents as per established project standards.
- Work collaboratively with leaders to ensure timely delivery of projects.
- Enhance existing systems by analyzing business objectives, preparing an action plan, and identifying areas for modification and improvement.

Preferred Candidate Profile
- Strong knowledge of C# and the .NET framework, including ASP.NET, ASP.NET MVC, .NET Core, Web API, OAuth, IIS, WCF web services, and design patterns.
- Strong knowledge of AJAX, AngularJS/ReactJS, Web Forms, ADO.NET, LINQ, Linq2Sql, Entity Framework, and NHibernate.
- Strong understanding of object-oriented programming and strong coding skills.
- Knowledge of Microsoft SQL Server and NoSQL databases.
- Knowledge of pub-sub modules (like Kafka, SQS, RabbitMQ, etc.)
- Experience with popular web application frameworks.
- Knack for writing clean, readable, and easily maintainable code.
- Proficient understanding of code versioning tools such as Git and SVN.
- Knowledge of the AWS cloud platform and AWS services will be an added advantage.
- Understanding of Agile/Scrum methodologies.
- Excellent communication, analytical, and interpersonal skills.
- Understanding of business requirements, analysis, and design (user interface and database).
- Ability to work independently.
- Excellent debugging and problem-solving skills.
- Ability to work effectively in a remote, virtual, global environment.
- Experienced in architecting enterprise applications, or aspiring to be.

Perks and Benefits
- Salary best in industry
- Hybrid
Posted 1 week ago
8.0 - 13.0 years
30 - 35 Lacs
Bengaluru
Work from Office
About The Role Data Engineer -1 (Experience 0-2 years) What we offer Our mission is simple Building trust. Our customer's trust in us is not merely about the safety of their assets but also about how dependable our digital offerings are. That"™s why, we at Kotak Group are dedicated to transforming banking by imbibing a technology-first approach in everything we do, with an aim to enhance customer experience by providing superior banking services. We welcome and invite the best technological minds in the country to come join us in our mission to make banking seamless and swift. Here, we promise you meaningful work that positively impacts the lives of many. About our team DEX is a central data org for Kotak Bank which manages entire data experience of Kotak Bank. DEX stands for Kotak"™s Data Exchange. This org comprises of Data Platform, Data Engineering and Data Governance charter. The org sits closely with Analytics org. DEX is primarily working on greenfield project to revamp entire data platform which is on premise solutions to scalable AWS cloud-based platform. The team is being built ground up which provides great opportunities to technology fellows to build things from scratch and build one of the best-in-class data lake house solutions. The primary skills this team should encompass are Software development skills preferably Python for platform building on AWS; Data engineering Spark (pyspark, sparksql, scala) for ETL development, Advanced SQL and Data modelling for Analytics. The org size is expected to be around 100+ member team primarily based out of Bangalore comprising of ~10 sub teams independently driving their charter. 
As a member of this team, you get the opportunity to learn the fintech space, one of the most sought-after domains today; be an early member in Kotak's digital transformation journey; learn and leverage technology to build complex data platform solutions, including real-time, micro-batch, batch, and analytics solutions, in a programmatic way; and be futuristic in building systems that can be operated by machines using AI technologies. The data platform org is divided into 3 key verticals:
Data Platform: This vertical is responsible for building the data platform, which includes optimized storage for the entire bank; building a centralized data lake; managed compute and orchestration frameworks, including serverless data solutions; managing the central data warehouse for extremely high-concurrency use cases; building connectors for different sources; building a customer feature repository; building cost-optimization solutions like EMR optimizers; performing automations; and building observability capabilities for Kotak's data platform. The team will also be the center of Data Engineering excellence, driving trainings and knowledge-sharing sessions with the large data consumer base within Kotak.
Data Engineering: This team will own data pipelines for thousands of datasets, be skilled at sourcing data from 100+ source systems, and enable data consumption for 30+ data analytics products. The team will learn and build data models in a config-based and programmatic way, and think big to build one of the most leveraged data models for financial orgs. This team will also enable centralized reporting for Kotak Bank that cuts across multiple products and dimensions. Additionally, the data built by this team will be consumed by 20K+ branch consumers, RMs, Branch Managers, and all analytics use cases.
Data Governance: This team will be the central data governance team for Kotak Bank, managing metadata platforms, data privacy, data security, data stewardship, and the data quality platform. 
If you have the right data skills and are ready to build data lake solutions from scratch for high-concurrency systems involving multiple systems, then this is the team for you. Your day-to-day role will include: Drive business decisions with technical input and lead the team. Design, implement, and support a data infrastructure from scratch. Manage AWS resources, including EC2, EMR, S3, Glue, Redshift, and MWAA. Extract, transform, and load data from various sources using SQL and AWS big data technologies. Explore and learn the latest AWS technologies to enhance capabilities and efficiency. Collaborate with data scientists and BI engineers to adopt best practices in reporting and analysis. Improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers. Build data platforms, data pipelines, or data management and governance tools. BASIC QUALIFICATIONS for Data Engineer/SDE in Data: Bachelor's degree in Computer Science, Engineering, or a related field. Experience in data engineering. Strong understanding of AWS technologies, including S3, Redshift, Glue, and EMR. Experience with data pipeline tools such as Airflow and Spark. Experience with data modeling and data quality best practices. Excellent problem-solving and analytical skills. Strong communication and teamwork skills. Experience in at least one modern scripting or programming language, such as Python, Java, or Scala. Strong advanced SQL skills. PREFERRED QUALIFICATIONS: AWS cloud technologies: Redshift, S3, Glue, EMR, Kinesis, Firehose, Lambda, IAM, Airflow. Prior experience in the Indian banking segment and/or fintech is desired. 
Experience with non-relational databases and data stores. Building and operating highly available, distributed data processing systems for large datasets. Professional software engineering and best practices for the full software development life cycle. Designing, developing, and implementing different types of data warehousing layers. Leading the design, implementation, and successful delivery of large-scale, critical, or complex data solutions. Building scalable data infrastructure and understanding distributed systems concepts. SQL, ETL, and data modelling. Ensuring the accuracy and availability of data to customers. Proficiency in at least one scripting or programming language for handling large-volume data processing. Strong presentation and communication skills.
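The ETL responsibilities above boil down to the classic extract-transform-load pattern. A minimal, stdlib-only Python sketch of that pattern (the role itself uses Spark and AWS services such as Glue and EMR at far larger scale; the sample data, filter rule, and table name here are invented for illustration):

```python
import csv
import io
import sqlite3

# Hypothetical raw feed; a real pipeline would read from S3 or a source system.
RAW_CSV = "txn_id,amount,branch\n1,250.0,MUMBAI\n2,-40.0,delhi\n3,900.5,Pune\n"

def extract(raw: str):
    """Extract: parse rows from a raw CSV source."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows):
    """Transform: normalise branch names and drop non-positive amounts."""
    return [
        {"txn_id": int(r["txn_id"]), "amount": float(r["amount"]),
         "branch": r["branch"].strip().title()}
        for r in rows if float(r["amount"]) > 0
    ]

def load(rows, conn):
    """Load: write cleaned rows into the warehouse table, return row count."""
    conn.execute("CREATE TABLE txns (txn_id INT, amount REAL, branch TEXT)")
    conn.executemany("INSERT INTO txns VALUES (:txn_id, :amount, :branch)", rows)
    return conn.execute("SELECT COUNT(*) FROM txns").fetchone()[0]

conn = sqlite3.connect(":memory:")
loaded = load(transform(extract(RAW_CSV)), conn)
print(loaded)  # 2 (the negative-amount row is filtered out)
```

The same three-stage shape carries over to PySpark, with DataFrames replacing the row lists.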
Posted 1 week ago
9.0 - 14.0 years
30 - 35 Lacs
Bengaluru
Work from Office
About The Role Data Engineer -2 (Experience 2-5 years) What we offer Our mission is simple: building trust. Our customers' trust in us is not merely about the safety of their assets but also about how dependable our digital offerings are. That's why we at Kotak Group are dedicated to transforming banking by imbibing a technology-first approach in everything we do, with an aim to enhance customer experience by providing superior banking services. We welcome and invite the best technological minds in the country to join us in our mission to make banking seamless and swift. Here, we promise you meaningful work that positively impacts the lives of many. About our team DEX is the central data org for Kotak Bank and manages the entire data experience of Kotak Bank. DEX stands for Kotak's Data Exchange. This org comprises the Data Platform, Data Engineering, and Data Governance charters. The org sits closely with the Analytics org. DEX is primarily working on a greenfield project to revamp the entire data platform from on-premise solutions to a scalable AWS cloud-based platform. The team is being built from the ground up, which provides great opportunities for technology fellows to build things from scratch and deliver a best-in-class data lakehouse solution. The primary skills this team should encompass are software development skills (preferably Python) for platform building on AWS; data engineering with Spark (PySpark, Spark SQL, Scala) for ETL development; and advanced SQL and data modelling for analytics. The org size is expected to be around a 100+ member team, primarily based out of Bangalore, comprising ~10 sub-teams independently driving their charters. 
As a member of this team, you get the opportunity to learn the fintech space, one of the most sought-after domains today; be an early member in Kotak's digital transformation journey; learn and leverage technology to build complex data platform solutions, including real-time, micro-batch, batch, and analytics solutions, in a programmatic way; and be futuristic in building systems that can be operated by machines using AI technologies. The data platform org is divided into 3 key verticals:
Data Platform: This vertical is responsible for building the data platform, which includes optimized storage for the entire bank; building a centralized data lake; managed compute and orchestration frameworks, including serverless data solutions; managing the central data warehouse for extremely high-concurrency use cases; building connectors for different sources; building a customer feature repository; building cost-optimization solutions like EMR optimizers; performing automations; and building observability capabilities for Kotak's data platform. The team will also be the center of Data Engineering excellence, driving trainings and knowledge-sharing sessions with the large data consumer base within Kotak.
Data Engineering: This team will own data pipelines for thousands of datasets, be skilled at sourcing data from 100+ source systems, and enable data consumption for 30+ data analytics products. The team will learn and build data models in a config-based and programmatic way, and think big to build one of the most leveraged data models for financial orgs. This team will also enable centralized reporting for Kotak Bank that cuts across multiple products and dimensions. Additionally, the data built by this team will be consumed by 20K+ branch consumers, RMs, Branch Managers, and all analytics use cases.
Data Governance: This team will be the central data governance team for Kotak Bank, managing metadata platforms, data privacy, data security, data stewardship, and the data quality platform. 
If you have the right data skills and are ready to build data lake solutions from scratch for high-concurrency systems involving multiple systems, then this is the team for you. Your day-to-day role will include: Drive business decisions with technical input and lead the team. Design, implement, and support a data infrastructure from scratch. Manage AWS resources, including EC2, EMR, S3, Glue, Redshift, and MWAA. Extract, transform, and load data from various sources using SQL and AWS big data technologies. Explore and learn the latest AWS technologies to enhance capabilities and efficiency. Collaborate with data scientists and BI engineers to adopt best practices in reporting and analysis. Improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers. Build data platforms, data pipelines, or data management and governance tools. BASIC QUALIFICATIONS for Data Engineer/SDE in Data: Bachelor's degree in Computer Science, Engineering, or a related field. Experience in data engineering. Strong understanding of AWS technologies, including S3, Redshift, Glue, and EMR. Experience with data pipeline tools such as Airflow and Spark. Experience with data modeling and data quality best practices. Excellent problem-solving and analytical skills. Strong communication and teamwork skills. Experience in at least one modern scripting or programming language, such as Python, Java, or Scala. Strong advanced SQL skills. PREFERRED QUALIFICATIONS: AWS cloud technologies: Redshift, S3, Glue, EMR, Kinesis, Firehose, Lambda, IAM, Airflow. Prior experience in the Indian banking segment and/or fintech is desired. 
Experience with non-relational databases and data stores. Building and operating highly available, distributed data processing systems for large datasets. Professional software engineering and best practices for the full software development life cycle. Designing, developing, and implementing different types of data warehousing layers. Leading the design, implementation, and successful delivery of large-scale, critical, or complex data solutions. Building scalable data infrastructure and understanding distributed systems concepts. SQL, ETL, and data modelling. Ensuring the accuracy and availability of data to customers. Proficiency in at least one scripting or programming language for handling large-volume data processing. Strong presentation and communication skills.
Posted 1 week ago
5.0 - 10.0 years
20 - 25 Lacs
Bengaluru
Work from Office
• Familiarity with the end-to-end machine learning model development life cycle. • Experience in creating data processing pipelines and API development (FastAPI). • Experience with SQL and NoSQL, MLflow, GitHub, Docker, Kubernetes, ELK, or a similar stack.
Posted 1 week ago
5.0 - 7.0 years
8 - 14 Lacs
Hyderabad
Work from Office
About the Role : We are seeking a highly motivated and experienced Senior Backend Engineer to join our growing team. You will play a key role in building, improving, and maintaining our software solutions while effectively collaborating with US-based clients. Your strong technical skills will be complemented by your excellent communication abilities, allowing you to bridge the gap between technical development and client needs. Responsibilities : - Design, develop, and implement robust and scalable backend features using Node.js and Typescript. - Collaborate with product managers and designers to understand client requirements and translate them into technical specifications. - Advocate for and implement test-driven development (TDD) practices to ensure code quality, maintainability, and testability. - Write clean, maintainable, and well-documented code adhering to best practices. - Troubleshoot and resolve complex technical problems related to the backend. - Continuously learn and apply new technologies, methodologies, and languages. - Effectively communicate technical concepts, progress updates, and solutions to both technical and non-technical audiences, including US-based clients. - Identify areas for improvement in development efficiency and propose solutions to reduce "technical debt." - Work independently on smaller features and collaboratively with the team on larger projects. Technical Skills & Experience : - 5+ years of experience in backend development with a strong track record of delivering high-quality software. - Proficiency in Node.js and Typescript. - Experience with relational databases (e.g., Postgres, MySQL) and/or NoSQL databases (e.g. MongoDB). - Working knowledge of containerization technologies like Docker and Docker Compose. - Experience with CI/CD pipelines using tools like Jenkins or GitHub Actions. - Familiarity with unit testing frameworks like Jest or similar tools. - Understanding of Agile methodologies for software development. 
- Experience with AWS Cloud, preferably with serverless architecture concepts (a plus).
Posted 1 week ago
8.0 - 12.0 years
20 - 27 Lacs
Nagercoil
Hybrid
Role & responsibilities Lead End-to-End ML Projects: Own the full lifecycle of ML solutions from problem scoping, data exploration, model development, deployment, and post-deployment monitoring. Own the Gen AI Strategy: Define and drive the technical vision and execution strategy for Gen AI use cases across the organization. Architect Scalable Solutions: Design robust ML systems and pipelines that integrate seamlessly into production environments. Team Leadership: Mentor and guide a team of ML engineers and data scientists, fostering a culture of technical excellence, innovation, and continuous learning. Modeling & Research: Stay up to date with the latest advancements in ML/AI and guide experimentation with novel algorithms (e.g., deep learning, NLP, GenAI, reinforcement learning). Build RAG & Prompt Engineering Pipelines: Design scalable architectures for RAG, vector search, prompt tuning, and prompt chaining with tools like LangChain, LlamaIndex, and FAISS. Stakeholder Collaboration: Work closely with product managers, engineers, and domain experts to translate business problems into technical solutions. Operationalize ML: Partner with MLOps/DevOps teams to deploy, monitor, and maintain ML models in production, ensuring reliability and scalability. Preferred candidate profile Bachelor's or Master's degree in Computer Science, Machine Learning, Data Science, or a related field (PhD is a plus). 8+ years of hands-on experience in machine learning, with at least 1-2 years in a leadership or mentorship role. Proficiency in Python and ML libraries such as scikit-learn, TensorFlow, PyTorch, HuggingFace Transformers, etc. Strong understanding of statistics, optimization, and model evaluation techniques. Experience with cloud platforms (AWS, GCP, or Azure) and tools like Docker, Kubernetes, MLflow, or Kubeflow. Proficiency in Python and Gen AI frameworks such as LangChain, Crew AI, or similar. Proven track record of deploying ML models into production at scale. 
Excellent communication and leadership skills.
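The RAG pipelines mentioned above rest on a simple retrieval core: embed the documents, embed the query, and return the nearest neighbours as context for the model. A toy, library-free sketch of that retrieval step (production systems use tools like LangChain and FAISS with learned embeddings; the document names and vectors below are made-up values):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy "embeddings"; a real system would use a learned embedding model.
docs = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.2],
    "warranty terms": [0.7, 0.2, 0.3],
}

def retrieve(query_vec, k=2):
    """Rank documents by cosine similarity to the query and return the top k."""
    ranked = sorted(docs, key=lambda d: cosine(docs[d], query_vec), reverse=True)
    return ranked[:k]

top = retrieve([1.0, 0.0, 0.1])
print(top)  # ['refund policy', 'warranty terms']
```

The retrieved snippets would then be packed into the prompt, which is where prompt tuning and chaining come in.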
Posted 1 week ago
7.0 - 11.0 years
0 - 1 Lacs
New Delhi, Bengaluru, Mumbai (All Areas)
Work from Office
Your potential, unleashed. India's impact on the global economy has increased at an exponential rate, and Deloitte presents an opportunity to unleash and realize your potential amongst cutting-edge leaders and organizations shaping the future of the region, and indeed, the world beyond. At Deloitte, bring your whole self to work, every day. Combine that with our drive to propel with purpose, and you have the perfect playground to collaborate, innovate, grow, and make an impact that matters. The team Deloitte helps organizations prevent cyberattacks and protect valuable assets. We believe in being secure, vigilant, and resilient, not only by looking at how to prevent and respond to attacks, but at how to manage cyber risk in a way that allows you to unleash new opportunities. Embed cyber risk at the start of strategy development for more effective management of information and technology risks. Your work profile As a professional in our NAT Team, you'll build and nurture positive working relationships with teams and clients with the intention to exceed client expectations. Location and way of working Base location: PAN India Work you'll do: Review EL/SOW and provide guidance to engagement teams within the stipulated timeline. Identify the risks related to various sections of EL/SOW (Scope, Client and Deloitte responsibilities, Fee and Payment Schedule, etc.). Assist engagement teams by adding appropriate comments/assumptions to proactively mitigate identified risks. Advise on the usage of the correct EL/SOW template and assess the validity and applicability of the governing documents (GBT/MSA/LIA/addendums, if any). Advise the team to obtain necessary approvals/clearances as per the QRM process (NSI, DCCS, GBT/MSA, DRB, etc.). Coordinate with internal stakeholders (e.g., legal team, RRO team) and collaborate with EL/SOW team members in connection with the review and related matters. Mentor and guide team members and share expertise and knowledge on an ongoing basis. B.E./B.Tech. 
/MCA and MBA/PGDM or equivalent from a recognized Institute/University At least 6 years of related experience in areas of diverse IT project delivery, contract risk management, preferably in consulting industry Good communication and relationship management skills Ability to understand broader business issues. Experience in project delivery/management (Technology implementation, advisory, operate etc.) or knowledge of emerging technologies such as AI, Devops, Cloud services, Data analytics etc. Good understanding of Software Development Life Cycle (SDLC) Experience of working in Client-facing roles for consulting assignments How you’ll grow Connect for impact Our exceptional team of professionals across the globe are solving some of the world’s most complex business problems, as well as directly supporting our communities, the planet, and each other. Know more in our Global Impact Report and our India Impact Report. Empower to lead You can be a leader irrespective of your career level. Our colleagues are characterised by their ability to inspire, support, and provide opportunities for people to deliver their best and grow both as professionals and human beings. Know more about Deloitte and our One Young World partnership. Inclusion for all At Deloitte, people are valued and respected for who they are and are trusted to add value to their clients, teams and communities in a way that reflects their own unique capabilities. Know more about everyday steps that you can take to be more inclusive. At Deloitte, we believe in the unique skills, attitude and potential each and every one of us brings to the table to make an impact that matters. Drive your career At Deloitte, you are encouraged to take ownership of your career. We recognise there is no one size fits all career path, and global, cross-business mobility and up / re-skilling are all within the range of possibilities to shape a unique and fulfilling career. Know more about Life at Deloitte. 
Everyone's welcome: entrust your happiness to us. Our workspaces and initiatives are geared towards your 360-degree happiness. This includes specific needs you may have in terms of accessibility, flexibility, safety and security, and caregiving. Here's a glimpse of things that are in store for you.
Posted 1 week ago
2.0 - 6.0 years
0 - 1 Lacs
Pune
Work from Office
As Lead ML Engineer, you'll lead the development of predictive models for demand forecasting, customer segmentation, and retail optimization, from feature engineering through deployment. Responsibilities: Build and deploy models for forecasting and optimization. Perform time-series analysis, classification, and regression. Monitor model performance and integrate feedback loops. Use AWS SageMaker, MLflow, and explainability tools (e.g., SHAP or LIME).
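The demand-forecasting work described above is typically judged against simple baselines. A minimal sketch of one such baseline, a moving-average forecast (illustrative only; the role's production models would run on AWS SageMaker with MLflow tracking, and the sample figures below are invented):

```python
def moving_average_forecast(history, window=3):
    """Forecast next-period demand as the mean of the last `window` observations,
    the naive baseline any production forecasting model should beat."""
    if len(history) < window:
        raise ValueError("need at least `window` observations")
    return sum(history[-window:]) / window

# Hypothetical weekly units sold for one SKU.
weekly_units = [120, 135, 128, 140, 150, 146]
forecast = moving_average_forecast(weekly_units)
print(forecast)  # mean of [140, 150, 146]
```

Reporting model error relative to a baseline like this is what makes metrics such as MAPE interpretable to stakeholders.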
Posted 1 week ago
4.0 - 8.0 years
7 - 11 Lacs
Pune
Work from Office
Job Title : Python Developer Location State : Maharashtra Location City : Pune Experience Required : 4 to 6 Year(s) CTC Range : 7 to 11 LPA Shift: Day Shift Work Mode: Onsite Position Type: C2H Openings: 2 Company Name: VARITE INDIA PRIVATE LIMITED About The Client: Check in section - (Supplier performance audit) About The Job: NA Essential Job Functions: NA Qualifications: Skill Required: Digital: Python, Digital: Amazon Web Services (AWS) Cloud Computing, PostgreSQL Experience Range in Required Skills: 4-6 years Essential Skills: Python and PostgreSQL How to Apply: Interested candidates are invited to submit their resume using the apply online button on this job post. Equal Opportunity Employer: VARITE is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. We do not discriminate on the basis of race, color, religion, sex, sexual orientation, gender identity or expression, national origin, age, marital status, veteran status, or disability status. Unlock Rewards: Refer Candidates and Earn. If you're not available or interested in this opportunity, please pass this along to anyone in your network who might be a good fit and interested in our open positions. VARITE offers a Candidate Referral program, where you'll receive a one-time referral bonus based on the following scale if the referred candidate completes a three-month assignment with VARITE. Exp Req - Referral Bonus 0 - 2 Yrs. - INR 5,000 2 - 6 Yrs. - INR 7,500 6+ Yrs. - INR 10,000 About VARITE: VARITE is a global staffing and IT consulting company providing technical consulting and team augmentation services to Fortune 500 companies in the USA, UK, Canada, and India. VARITE is currently a primary and direct vendor to leading corporations in the verticals of Networking, Cloud Infrastructure, Hardware and Software, Digital Marketing and Media Solutions, Clinical Diagnostics, Utilities, Gaming and Entertainment, and Financial Services.
Posted 1 week ago
3.0 - 6.0 years
18 - 22 Lacs
Hyderabad
Work from Office
LRR Technologies is currently hiring talented people to work with Carrier Corporation. Carrier Corporation (NYSE: CARR) is a global provider of innovative heating, ventilating, and air conditioning (HVAC), refrigeration, fire, security, and building automation technologies, boasting annual sales of over $20 billion, with 58,000 employees spread across 160+ countries. For its R&D center in Hyderabad, Carrier is looking for a talented SQA Engineer. Carrier was founded in 1915 as an independent, American company, manufacturing and distributing heating, ventilating and air conditioning (HVAC) systems, as well as commercial refrigeration and food service equipment. Built on Willis Carrier's invention of modern air conditioning in 1902, Carrier is a world leader in heating, air-conditioning and refrigeration solutions. We constantly build upon our history of proven innovation with new products and services that improve global comfort and efficiency. Job Responsibilities: The engineer must work as an individual contributor and be able to enhance the automation framework with the requirements provided by the stakeholders. The framework enhancements must be reliable and improve the team's efficiency to reduce or avoid manual activities. The enhancements include overall framework review and enhancements, test script development, and enhanced/improved test reporting. Run and execute the scripts in a cloud environment on AWS. Identify and resolve issues related to script execution and cloud infrastructure. Optimize scripts for better performance and resource utilization. Education: BE/BTech degree in Electrical, Computer Engineering, or Computer Science with 3+ years of product validation experience. Required Technical Skills: Experience with different software test techniques and QA methodologies. Understanding of the software development life cycle and processes. Experience with web applications. Experience with Core Java. 
Experience with automation test frameworks (Selenium, etc.). Proficiency in the AWS cloud platform for script execution. Experience with web server-based deployments. Knowledge of the JIRA project management tool. Experience with continuous integration using Jenkins or any other tool. Be a key participant in creating a Quality First and Zero Defects culture. Preferred Technical Skills: Experience with the Building Automation and/or HVAC domain. Strong understanding of computer architecture and networking concepts. Additional Comments: This individual must be self-directed, highly motivated, and organized, with strong analytical thinking and problem-solving skills, and an ability to work on multiple projects and function in a team environment. Perks and benefits: The position will pay quite well - Rs. 18 - 22.5 lakh per annum is the band, and the final amount may be even higher depending on your experience and expertise. If made an offer, you will need to join in 4 weeks. This is a lifetime opportunity for people looking to specialize in highly coveted, niche, futuristic skills and to work in a top multinational company with excellent employee-first initiatives. We look forward to your application.
Posted 1 week ago
9.0 - 14.0 years
20 - 35 Lacs
Hyderabad
Hybrid
Job Summary: We are seeking a highly skilled and experienced Lead Infrastructure Engineer to join our dynamic team. The ideal candidate will be passionate about building and maintaining complex systems, with a holistic approach to architecting infrastructure that survives and thrives in production. You will play a key role in designing, implementing, and managing cloud infrastructure, ensuring scalability, availability, security, and optimal performance vs spend. You will also provide technical leadership and mentorship to other engineers, and engage with clients to understand their needs and deliver effective solutions. Responsibilities: Design, architect, and implement scalable, highly available, and secure infrastructure solutions, primarily on Amazon Web Services (AWS). Develop and maintain Infrastructure as Code (IaC) using Terraform or AWS CDK for enterprise-scale maintainability and repeatability. Implement robust access control via IAM roles and policy orchestration, ensuring least-privilege and auditability across multi-environment deployments. Contribute to secure, scalable identity and access patterns, including OAuth2-based authorization flows and dynamic IAM role mapping across environments. Support deployment of infrastructure lambda functions. Troubleshoot issues and collaborate with cloud vendors on managed service reliability and roadmap alignment. Utilize Kubernetes deployment tools such as Helm/Kustomize in combination with GitOps tools such as ArgoCD for container orchestration and management. Design and implement CI/CD pipelines using platforms like GitHub, GitLab, Bitbucket, Cloud Build, Harness, etc., with a focus on rolling deployments, canaries, and blue/green deployments. Ensure auditability and observability of pipeline states. Implement security best practices, audit, and compliance requirements within the infrastructure. Provide technical leadership, mentorship, and training to engineering staff. 
Engage with clients to understand their technical and business requirements, and provide tailored solutions. If needed, lead agile ceremonies and project planning, including developing agile boards and backlogs with support from our Service Delivery Leads. Troubleshoot and resolve complex infrastructure issues. Potentially participate in pre-sales activities and provide technical expertise to sales teams. Qualifications: 10+ years of experience in an Infrastructure Engineer or similar role. Extensive experience with Amazon Web Services (AWS). Proven ability to architect for scale, availability, and high-performance workloads. Ability to plan and execute zero-disruption migrations. Experience with enterprise IAM and familiarity with authentication technology such as OAuth2 and OIDC. Deep knowledge of Infrastructure as Code (IaC) with Terraform and/or AWS CDK. Strong experience with Kubernetes and related tools (Helm, Kustomize, ArgoCD). Solid understanding of git, branching models, CI/CD pipelines, and deployment strategies. Experience with security, audit, and compliance best practices. Excellent problem-solving and analytical skills. Strong communication and interpersonal skills, with the ability to engage with both technical and non-technical stakeholders. Experience in technical leadership, mentoring, team-forming, and fostering self-organization and ownership. Experience with client relationship management and project planning. Certifications: Relevant certifications (for example, Certified Kubernetes Administrator, AWS Certified Solutions Architect - Professional, AWS Certified DevOps Engineer - Professional, etc.). Software development experience (for example, Terraform, Python). Experience with machine learning infrastructure. Education: B.Tech/BE in computer science, a related field, or equivalent experience.
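The canary and blue/green strategies named in this posting share one primitive: deterministically routing a fraction of traffic to the new version. A hypothetical sketch of stable, hash-based traffic splitting (real pipelines usually do this in the load balancer or service mesh rather than in application code; the user IDs and percentage are illustrative):

```python
import hashlib

def routes_to_canary(user_id: str, canary_percent: int) -> bool:
    """Stable per-user bucketing: the same user always gets the same version,
    so a session never flip-flops between the old and new releases."""
    digest = hashlib.sha256(user_id.encode()).digest()
    bucket = digest[0] * 100 // 256  # map the first byte to a bucket in 0..99
    return bucket < canary_percent

# At 10%, roughly one user in ten lands on the canary build.
users = [f"user-{i}" for i in range(1000)]
share = sum(routes_to_canary(u, 10) for u in users) / len(users)
print(round(share, 2))
```

Ramping the percentage from 1 to 100 while watching error rates is the canary rollout; flipping it from 0 straight to 100 is the blue/green cutover.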
Posted 1 week ago
4.0 - 9.0 years
0 - 3 Lacs
Thane, Mumbai (All Areas)
Work from Office
Must Have Skills: Mendix, PostgreSQL, Java, AWS Cloud Services, EC2 instances, Python Role & responsibilities: Developer
Posted 1 week ago
2.0 - 5.0 years
4 - 7 Lacs
Noida, Greater Noida, Delhi / NCR
Hybrid
Hiring: NOC Operations Engineer | Location: Noida (Hybrid) | Candidates must be open to rotational shifts. We are looking for professionals with 2-5 years of experience in network/server monitoring and IT infrastructure support. Key skills: LogicMonitor, Dynatrace, AWS CloudWatch, Excel, ticketing tools, basic networking (TCP/IP, DNS, VPN). Responsibilities include real-time system monitoring, incident detection, basic troubleshooting, log analysis, and collaboration with internal teams/vendors to ensure system uptime. Work with a US-based global IT services firm with a presence in 10 countries.
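The log-analysis duty listed above starts with parsing lines and tallying severities. A toy sketch of that first step (real NOC work would rely on LogicMonitor, Dynatrace, or CloudWatch; the log lines and alert threshold here are invented):

```python
import re

# Hypothetical application log excerpt.
LOG = """\
2024-05-01T10:00:01 INFO  health-check ok
2024-05-01T10:00:02 ERROR db connection refused
2024-05-01T10:00:03 ERROR db connection refused
2024-05-01T10:00:04 WARN  latency 950ms
"""

LINE = re.compile(r"^(\S+)\s+(INFO|WARN|ERROR)\s+(.*)$")

def count_by_level(log_text):
    """Tally log lines per severity level, the first step in spotting an error spike."""
    counts = {}
    for line in log_text.splitlines():
        m = LINE.match(line)
        if m:
            level = m.group(2)
            counts[level] = counts.get(level, 0) + 1
    return counts

counts = count_by_level(LOG)
print(counts)  # {'INFO': 1, 'ERROR': 2, 'WARN': 1}
alert = counts.get("ERROR", 0) >= 2  # toy threshold; monitoring tools alert on rates over time
```

In practice the same tally would be computed over a sliding time window before raising a ticket.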
Posted 1 week ago
5.0 - 8.0 years
1 - 2 Lacs
Bengaluru
Hybrid
Job Description Summary Embark on a transformative career as a Guidewire Cloud Platform Software Engineer, where you will be at the forefront of revolutionizing how businesses leverage cloud technologies. We are seeking talented individuals to join our innovative team, where your expertise will be instrumental in designing, implementing, and optimizing robust cloud solutions. Guidewire provides outstanding software for the second-largest financial services industry in the world: insurance. We deliver the core applications that Property and Casualty (P/C) insurers use to build their products, sell policies, settle claims, and bill their customers. We deliver SaaS solutions via Guidewire Cloud that enable our customers to rapidly innovate and drive measurable value. Job Description What you should know about this role: Develop the multi-tenant cloud platform that caters to running all Guidewire applications and services: https://medium.com/guidewire-engineering-blog/guidewire-cloud-why-hybrid-tenancy-is-the-right-choice-56a0ff176032 Be deeply involved in the design and development of GWCP/ATMOS: infrastructure as code using technologies such as EKS (Kubernetes), Terraform, and Golang. In addition, handle observability of the platform (logs, metrics, and traces). 
(Refer: https://medium.com/guidewire-engineering-blog/log-management-and-guidewire-cloud-platform-observability-73a033a34e9a) Design and develop platform services that solve problems such as developer experience, authentication, and authorization using Java and Spring Boot. Engineer quality, scalability, availability, and security into your code. Employ TDD (test-driven development) to protect your products with the assurances of automated testing and a test-first culture. Deploy containerized applications to AWS through your Continuous Integration (CI) and Continuous Deployment (CD) pipelines. Enjoy work as you solve problems daily by working with highly skilled team members in a pairing culture (Refer: https://medium.com/guidewire-engineering-blog/a-day-in-the-life-of-a-cloud-common-services-engineer-c67cf3debc57) What you get to do: Strong programming and software development skills in the following frameworks and languages: Java, Spring Boot, microservices. Container technologies: Docker or equivalent in a cloud ecosystem. A team-first attitude, coupled with curiosity and high potential to learn new technologies and gain strong technical skills. Comfort working in an agile, social, and fast-paced environment. What you should know about this role: Any graduate/postgraduate in Computer Science or equivalent industry technical skills. 5+ years of work experience in a SaaS/PaaS environment for large-scale enterprise. Advocate of cloud platforms (like Kubernetes / Mesos / Cloud Foundry / OpenShift, AWS / GCP / Azure, Serverless). Prior experience with Infrastructure as Code and configuration management, using tools like Terraform and Ansible. Strong experience with Java, Spring Boot, and microservices programming skills. Good experience in data structures, OOPS, design patterns, and multithreading. If interested, please share your details to anaik@guidewire.com along with your compensation details and updated resume: 1)Total Exp- 2)Relevant Exp- 3)Current CTC- 4)Expected CTC- 5)Notice Period- 6)Current Location- 7)Exp in 
Java Programming and Coding(Out of 10) 8)Exp in Kubernetes: 9)Exp in Docker,Terraform,Ansible(Good to have any one) 10)Cloud experience in AWS, Azure or any public cloud: 11)Ready to come for F2F for One day process in Weekdays from Tuesday to Thursday, first 2 rounds will be Online after you come for F2F shortlisting on the same day in our office: 12)Venue Address: Elnath B Wing, Exora Business Park, PRESTIGE TECH PARK, Kadabeesanahalli near to Marathahalli Bengaluru, Karnataka 560103 Perks and benefits Flexible work environment Health and Wellness benefits Paid time off programs including volunteer time off Market-competitive pay and incentive programs Continual development and internal career growth opportunities
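As a small illustration of the test-first culture the role above describes, here is a minimal, hypothetical TDD step in Python: the test is written before the implementation, then the function is made to pass it. The helper `mask_secret` and its behavior are invented for illustration only; the platform's actual services are written in Java/Spring Boot.

```python
# Test-first sketch: the test below was written before the implementation,
# then mask_secret was implemented to make it pass.
# `mask_secret` is a hypothetical helper, not a real Guidewire API.

def mask_secret(value: str, visible: int = 4) -> str:
    """Mask all but the last `visible` characters of a credential."""
    if len(value) <= visible:
        return "*" * len(value)
    return "*" * (len(value) - visible) + value[-visible:]

def test_mask_secret():
    # These assertions existed before mask_secret did.
    assert mask_secret("abcd1234") == "****1234"
    assert mask_secret("ab") == "**"

if __name__ == "__main__":
    test_mask_secret()
    print("all tests passed")
```

In a real test-first workflow the same cycle repeats: write a failing test, implement the minimum code to pass it, then refactor with the test as a safety net.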
Posted 1 week ago
8.0 - 10.0 years
35 - 40 Lacs
Hyderabad
Hybrid
Job Title: Engineering Lead

We are looking for a highly motivated Software Engineer to join our Robotics Software team. This is a coding-heavy role that blends backend system design, robotics development, and cloud integration. You'll work across modern infrastructure, real-time robotics frameworks, and GPU-accelerated systems to deliver scalable, autonomous solutions deployed in the real world.

Key Responsibilities:
- Design and implement robust backend services and cloud APIs for robotic systems
- Develop reusable modules in ROS / ROS 2 for sensor integration, autonomy workflows, and control systems
- Write performant and maintainable code in Python and C++, with attention to algorithmic efficiency
- Build tools for robotic diagnostics, fleet telemetry, logging, and simulation
- Deploy and manage containerized microservices using Docker, with CI/CD pipelines
- Optimize GPU-accelerated inference on NVIDIA Jetson (Orin/Xavier) platforms
- Debug and troubleshoot distributed robotic systems in live deployments and simulation
- Build automation scripts and monitoring tools for OTA updates, health checks, and telemetry pipelines
- Collaborate with autonomy, perception, hardware, and cloud teams for end-to-end system integration

Required Skills & Experience:
- Strong proficiency in Python and C++, with the ability to write clean, modular, and testable code
- Solid understanding of data structures, algorithms, and design patterns, especially for performance-critical code
- Hands-on experience with ROS / ROS 2, including launch files, services, topics, and custom messages
- Comfortable in Linux-based development environments, including shell scripting and debugging
- Proficient with SQL databases (PostgreSQL, MySQL) and designing performant queries
- Experience working with microservices architecture, REST APIs, and containerized deployments
- Working knowledge of AWS cloud services (Lambda, API Gateway, EC2, S3, etc.)
- Familiarity with GPU-accelerated workloads (e.g., TensorRT, OpenCV CUDA, CUDA kernels) on Jetson devices
- Experience with observability tools like Grafana, New Relic, or similar logging frameworks

Bonus Skills:
- Frontend experience with ReactJS and TypeScript
- Robotics simulation experience (Gazebo, Ignition, RViz)
- Understanding of robotic fleet security, OTA update systems, and telemetry protocols
- Exposure to CI tools (GitHub Actions, GitLab CI/CD, Jenkins)

What We're Looking For:
- A strong coder with real-world experience in both robotics and backend system design
- A self-starter with an ownership mindset and the ability to work independently
- A team player who thrives in fast-paced environments with hardware/software interaction
- A clear communicator and problem-solver, passionate about real-world robotics
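To give a flavor of the "health checks and telemetry pipelines" work mentioned above, here is a minimal, standard-library-only Python sketch of classifying one fleet telemetry sample. The `Telemetry` fields, thresholds, and status names are all invented for illustration; a real system would pull these from the robots' actual telemetry schema.

```python
import json
import time
from dataclasses import dataclass, asdict

# Hypothetical fleet health-check sketch. Field names, thresholds, and
# status labels are assumptions for illustration, not a real schema.

@dataclass
class Telemetry:
    robot_id: str
    battery_pct: float
    cpu_temp_c: float
    last_seen_ts: float  # Unix timestamp of the last heartbeat

def health_status(t: Telemetry, now: float, stale_after_s: float = 30.0) -> str:
    """Classify a robot's health from a single telemetry sample."""
    if now - t.last_seen_ts > stale_after_s:
        return "OFFLINE"        # no recent heartbeat
    if t.battery_pct < 15.0 or t.cpu_temp_c > 85.0:
        return "DEGRADED"       # low battery or thermal pressure
    return "OK"

if __name__ == "__main__":
    now = time.time()
    sample = Telemetry("robot-01", battery_pct=72.0, cpu_temp_c=61.5,
                       last_seen_ts=now - 2.0)
    # Emit a JSON line suitable for a log-based telemetry pipeline.
    print(json.dumps({**asdict(sample), "status": health_status(sample, now)}))
```

In practice a monitor like this would run periodically per robot and feed its JSON output into an observability stack (e.g., Grafana dashboards) rather than printing to stdout.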
Posted 1 week ago