
8 MSK Jobs

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

5.0 - 8.0 years

10 - 15 Lacs

Kochi

Remote


We are looking for a skilled AWS Cloud Engineer with a minimum of 5 years of hands-on experience managing and implementing cloud-based solutions on AWS. The ideal candidate will have expertise in AWS core services such as S3, EC2, MSK, Glue, DMS, and SageMaker, along with strong programming and containerization skills in Python and Docker.

Key Responsibilities:
- Design, implement, and manage scalable AWS cloud infrastructure solutions.
- Work hands-on with AWS services: S3, EC2, MSK, Glue, DMS, and SageMaker.
- Develop, deploy, and maintain Python-based applications in cloud environments.
- Containerize applications using Docker and manage deployment pipelines.
- Troubleshoot infrastructure and application issues, review designs, and code solutions.
- Ensure high availability, performance, and security of cloud resources.
- Collaborate with cross-functional teams to deliver reliable and scalable solutions.
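As an illustration of the Docker containerization this posting asks for, here is a minimal Dockerfile sketch for a Python application; the file and entry-point names (requirements.txt, main.py) are placeholders, not taken from the posting.

```dockerfile
# Minimal sketch: containerize a Python application (names are placeholders).
FROM python:3.12-slim
WORKDIR /app
# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy the application code and define the container entry point.
COPY . .
CMD ["python", "main.py"]
```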

Posted 1 week ago

Apply

10.0 - 15.0 years

12 - 17 Lacs

Bengaluru

Work from Office


Grade: 7

Purpose of your role
This role sits within the ISS Data Platform Team, which is responsible for building and maintaining the platform that enables the ISS business to operate. The role suits a Lead Data Engineer capable of taking ownership of, and delivering, a subsection of the wider data platform.

Key Responsibilities
- Design, develop, and maintain scalable data pipelines and architectures to support data ingestion, integration, and analytics.
- Be accountable for technical delivery and take ownership of solutions.
- Lead a team of senior and junior developers, providing mentorship and guidance.
- Collaborate with enterprise architects, business analysts, and stakeholders to understand data requirements, validate designs, and communicate progress.
- Drive technical innovation within the department to increase code reusability, code quality, and developer productivity.
- Challenge the status quo by bringing the very latest data engineering practices and techniques.

Essential Skills and Experience
Core Technical Skills:
- Expert in leveraging cloud-based data platform capabilities (Snowflake, Databricks) to create an enterprise lakehouse.
- Advanced expertise with the AWS ecosystem and experience using a variety of core AWS data services, such as Lambda, EMR, MSK, Glue, and S3.
- Experience designing event-based or streaming data architectures using Kafka.
- Advanced expertise in Python and SQL; Java/Scala expertise is welcome, but enterprise experience with Python is required.
- Expert in designing, building, and using CI/CD pipelines to deploy infrastructure (Terraform) and pipelines with test automation.

Data Security & Performance Optimization:
- Experience implementing data access controls to meet regulatory requirements.
- Experience using both RDBMS (Oracle, Postgres, MSSQL) and NoSQL (DynamoDB, OpenSearch, Redis) offerings.
- Experience implementing CDC ingestion.
- Experience using orchestration tools (Airflow, Control-M, etc.).

Bonus Technical Skills:
- Strong experience in containerisation and deploying applications to Kubernetes.
- Strong experience in API development using Python-based frameworks such as FastAPI.

Key Soft Skills:
- Problem-Solving: leadership experience in problem-solving and technical decision-making.
- Communication: strong strategic communication and stakeholder engagement.
- Project Management: experienced in overseeing project lifecycles, working with Project Managers to manage resources.
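The CDC (change data capture) ingestion mentioned above boils down to replaying an ordered stream of change events onto a keyed table. A minimal Python sketch follows; the event shape (op/key/row fields) is hypothetical and illustrative, not Fidelity's actual schema.

```python
# Minimal CDC apply step: fold an ordered stream of insert/update/delete
# change events into a keyed table. Event shape is hypothetical.
from typing import Any, Dict, List


def apply_cdc(events: List[Dict[str, Any]]) -> Dict[Any, Dict[str, Any]]:
    """Apply change events in order; later events for a key win."""
    table: Dict[Any, Dict[str, Any]] = {}
    for ev in events:
        key = ev["key"]
        if ev["op"] in ("insert", "update"):
            table[key] = ev["row"]       # upsert the latest row image
        elif ev["op"] == "delete":
            table.pop(key, None)         # tolerate deletes for absent keys
    return table
```

Real pipelines add ordering guarantees (e.g. per-key Kafka partitions) and idempotent writes, but the upsert-or-delete fold is the core of the pattern.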

Posted 3 weeks ago

Apply

6.0 - 8.0 years

8 - 10 Lacs

Pune

Work from Office


Hello Visionary! We empower our people to stay resilient and relevant in a constantly changing world. We're looking for people who are always searching for creative ways to grow and learn, people who want to make a real impact, now and in the future. Does that sound like you? Then it seems like you'd make a great addition to our vibrant team.

Siemens founded the business unit Siemens Foundational Technologies (formerly Siemens IoT Services) on April 1, 2019, with its headquarters in Munich, Germany. It was created to unlock the digital future of its clients by offering end-to-end support on their digitalization journey. Siemens Foundational Technologies is a strategic advisor and trusted implementation partner in digital transformation and industrial IoT, with a global network of more than 8,000 employees in 10 countries and 21 offices. Highly skilled and experienced specialists offer services ranging from consulting to craft & prototyping to solution, implementation, and operation, all from a single source.

We are looking for a Senior DevOps Engineer.

You'll make a difference by:

Key Responsibilities:
- Design, implement, and maintain CI/CD pipelines using GitLab, including configuring GitLab Runners.
- Build, manage, and scale containerized applications using Docker, Kubernetes, and Helm.
- Automate infrastructure provisioning and management with Terraform.
- Manage and optimize cloud-based environments, especially AWS.
- Administer and optimize Kafka clusters for data streaming and processing.
- Oversee the performance and reliability of databases and Linux environments.
- Monitor and enhance system health using tools like Prometheus and Grafana.
- Collaborate with cross-functional teams to implement DevOps best practices.
- Ensure system security, scalability, and disaster recovery readiness.
- Troubleshoot and resolve technical issues across the infrastructure.

Required Skills & Qualifications:
- 6-8 years of experience in DevOps, system administration, or a related role.
- Expertise in CI/CD tools and workflows, especially GitLab Pipelines and GitLab Runners.
- Proficiency with containerization and orchestration tools such as Docker, Kubernetes, and Helm.
- Strong hands-on experience with Docker Swarm, including creating and managing Docker clusters and packaging Docker images for deployment.
- Strong hands-on experience with Kubernetes, including managing clusters and deploying applications.
- Strong hands-on experience with Terraform for Infrastructure as Code (IaC).
- In-depth knowledge of AWS services, including EC2, S3, IAM, EKS, MSK, Route 53, and VPC.
- Solid experience managing and maintaining Kafka ecosystems.
- Strong Linux system administration skills.
- Proficiency in database management, optimization, and troubleshooting.
- Experience with monitoring tools like Prometheus and Grafana.
- Excellent scripting skills in languages such as Bash and Python.
- Strong problem-solving skills and the ability to work in a fast-paced environment.
- Excellent communication skills and a collaborative mindset.

Good to Have:
- Experience with Keycloak for identity and access management.
- Familiarity with Nginx or Traefik for reverse proxying and load balancing.
- Hands-on experience in PostgreSQL maintenance, including backups, tuning, and troubleshooting.
- Knowledge of the railway domain, including industry-specific challenges and standards.
- Experience implementing and managing high-availability architectures.
- Exposure to distributed systems and microservices architecture.

Desired Skills:
- 5-8 years of experience.
- Great communication skills.
- Analytical and problem-solving skills.

This role is based in Pune and is an individual contributor role. You might be required to visit other locations within India and abroad. In return, you'll get the chance to work with teams shaping the things to come.
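For readers unfamiliar with GitLab pipelines, the CI/CD work described above is driven by a .gitlab-ci.yml file executed by GitLab Runners. A minimal illustrative sketch follows; the stage names, images, and chart path are assumptions, not taken from the posting.

```yaml
# Illustrative .gitlab-ci.yml: three stages executed by a GitLab Runner.
stages:
  - build
  - test
  - deploy

build-image:
  stage: build
  image: docker:latest
  script:
    # CI_COMMIT_SHORT_SHA is a GitLab predefined variable.
    - docker build -t myapp:$CI_COMMIT_SHORT_SHA .

unit-tests:
  stage: test
  image: python:3.12-slim
  script:
    - pip install -r requirements.txt
    - pytest

deploy-helm:
  stage: deploy
  image: alpine/helm:latest
  script:
    # Chart path and release name are placeholders.
    - helm upgrade --install myapp ./chart
```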

Posted 3 weeks ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Pune

Work from Office


You'll make a difference by:

Key Responsibilities:
- Design, implement, and maintain CI/CD pipelines using GitLab, including configuring GitLab Runners.
- Build, manage, and scale containerized applications using Docker, Kubernetes, and Helm.
- Automate infrastructure provisioning and management with Terraform.
- Manage and optimize cloud-based environments, especially AWS.
- Administer and optimize Kafka clusters for data streaming and processing.
- Oversee the performance and reliability of databases and Linux environments.
- Monitor and enhance system health using tools like Prometheus and Grafana.
- Collaborate with cross-functional teams to implement DevOps best practices.
- Ensure system security, scalability, and disaster recovery readiness.
- Troubleshoot and resolve technical issues across the infrastructure.

Required Skills & Qualifications:
- 3-5 years of experience in DevOps, system administration, or a related role.
- Expertise in CI/CD tools and workflows, especially GitLab Pipelines and GitLab Runners.
- Proficiency with containerization and orchestration tools such as Docker, Kubernetes, and Helm.
- Strong hands-on experience with Docker Swarm, including creating and managing Docker clusters and packaging Docker images for deployment.
- Strong hands-on experience with Kubernetes, including managing clusters and deploying applications.
- Strong hands-on experience with Terraform for Infrastructure as Code (IaC).
- In-depth knowledge of AWS services, including EC2, S3, IAM, EKS, MSK, Route 53, and VPC.
- Solid experience managing and maintaining Kafka ecosystems.
- Strong Linux system administration skills.
- Proficiency in database management, optimization, and troubleshooting.
- Experience with monitoring tools like Prometheus and Grafana.
- Excellent scripting skills in languages such as Bash and Python.
- Strong problem-solving skills and the ability to work in a fast-paced environment.
- Excellent communication skills and a collaborative mindset.

Good to Have:
- Experience with Keycloak for identity and access management.
- Familiarity with Nginx or Traefik for reverse proxying and load balancing.
- Hands-on experience in PostgreSQL maintenance, including backups, tuning, and troubleshooting.
- Knowledge of the railway domain, including industry-specific challenges and standards.
- Experience implementing and managing high-availability architectures.
- Exposure to distributed systems and microservices architecture.

Desired Skills:
- 3-5 years of experience.
- Great communication skills.
- Analytical and problem-solving skills.

Find out more about Siemens careers at: & more about mobility at
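The Kubernetes deployment experience both Siemens roles call for centres on manifests like the following minimal Deployment sketch; the application name, image, and port are placeholders, not from the posting.

```yaml
# Illustrative Kubernetes Deployment: three replicas of a containerized app.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:1.0.0   # placeholder image
          ports:
            - containerPort: 8080
```

In practice such manifests are usually templated and released via Helm charts rather than applied by hand.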

Posted 3 weeks ago

Apply

10 - 13 years

27 - 32 Lacs

Bengaluru

Work from Office


Department: ISS
Reports To: Head of Data Platform - ISS
Grade: 7

We're proud to have been helping our clients build better financial futures for over 50 years. How have we achieved this? By working together, and supporting each other, all over the world. So, join our team and feel like you're part of something bigger.

Department Description
The ISS Data Engineering Chapter is an engineering group comprising three sub-chapters (Data Engineers, Data Platform, and Data Visualisation) that supports the ISS Department. Fidelity is embarking on several strategic programmes of work that will create a data platform to support the next evolutionary stage of our Investment Process. These programmes span asset classes and include Portfolio and Risk Management, Fundamental and Quantitative Research, and Trading.

Purpose of your role
This role sits within the ISS Data Platform Team, which is responsible for building and maintaining the platform that enables the ISS business to operate. The role suits a Lead Data Engineer capable of taking ownership of, and delivering, a subsection of the wider data platform.

Key Responsibilities
- Design, develop, and maintain scalable data pipelines and architectures to support data ingestion, integration, and analytics.
- Be accountable for technical delivery and take ownership of solutions.
- Lead a team of senior and junior developers, providing mentorship and guidance.
- Collaborate with enterprise architects, business analysts, and stakeholders to understand data requirements, validate designs, and communicate progress.
- Drive technical innovation within the department to increase code reusability, code quality, and developer productivity.
- Challenge the status quo by bringing the very latest data engineering practices and techniques.

Essential Skills and Experience
Core Technical Skills:
- Expert in leveraging cloud-based data platform capabilities (Snowflake, Databricks) to create an enterprise lakehouse.
- Advanced expertise with the AWS ecosystem and experience using a variety of core AWS data services, such as Lambda, EMR, MSK, Glue, and S3.
- Experience designing event-based or streaming data architectures using Kafka.
- Advanced expertise in Python and SQL; Java/Scala expertise is welcome, but enterprise experience with Python is required.
- Expert in designing, building, and using CI/CD pipelines to deploy infrastructure (Terraform) and pipelines with test automation.

Data Security & Performance Optimization:
- Experience implementing data access controls to meet regulatory requirements.
- Experience using both RDBMS (Oracle, Postgres, MSSQL) and NoSQL (DynamoDB, OpenSearch, Redis) offerings.
- Experience implementing CDC ingestion.
- Experience using orchestration tools (Airflow, Control-M, etc.).

Bonus Technical Skills:
- Strong experience in containerisation and deploying applications to Kubernetes.
- Strong experience in API development using Python-based frameworks such as FastAPI.

Key Soft Skills:
- Problem-Solving: leadership experience in problem-solving and technical decision-making.
- Communication: strong strategic communication and stakeholder engagement.
- Project Management: experienced in overseeing project lifecycles, working with Project Managers to manage resources.
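Orchestration tools such as Airflow, mentioned above, fundamentally run pipeline tasks in dependency order. The core idea can be sketched with Python's standard-library graphlib; the task names are hypothetical.

```python
# Toy illustration of what an orchestrator (Airflow, Control-M) does at its
# core: run tasks in an order that respects upstream dependencies.
from graphlib import TopologicalSorter


def run_order(deps):
    """deps maps task -> set of upstream tasks; returns a valid run order."""
    return list(TopologicalSorter(deps).static_order())
```

Real orchestrators add scheduling, retries, and backfills on top, but every DAG run starts from exactly this kind of topological ordering.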

Posted 2 months ago

Apply

5 - 10 years

15 - 30 Lacs

Chennai, Delhi NCR, Bengaluru

Work from Office


- Minimum 7 years of proven experience managing AWS MSK (Kafka) and RabbitMQ.
- Has worked extensively with AWS at scale.
- Experience with IaC tools such as Terraform for infrastructure provisioning and management.
- Experience with monitoring tools (e.g., CloudWatch, Datadog, Prometheus).
- Scripting and automation skills (Python, Bash).
- Excellent communication and collaboration skills.
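Provisioning MSK with Terraform, as the posting requires, typically centres on a single aws_msk_cluster resource. A hedged sketch follows; the cluster name, Kafka version, sizes, and the variables for subnets and security groups are placeholders, and the attribute layout can vary by AWS provider version.

```hcl
# Illustrative Terraform: a small Amazon MSK (managed Kafka) cluster.
resource "aws_msk_cluster" "example" {
  cluster_name           = "example-msk"
  kafka_version          = "3.6.0"
  number_of_broker_nodes = 3

  broker_node_group_info {
    instance_type   = "kafka.m5.large"
    client_subnets  = var.private_subnet_ids # placeholder variable
    security_groups = [var.msk_sg_id]        # placeholder variable

    storage_info {
      ebs_storage_info {
        volume_size = 100 # GiB per broker
      }
    }
  }
}
```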

Posted 2 months ago

Apply

5 - 10 years

7 - 12 Lacs

Bengaluru

Work from Office


Title: Principal Data Engineer (Associate Director)
Department: ISS
Reports To: Head of Data Platform - ISS
Grade: 7

We're proud to have been helping our clients build better financial futures for over 50 years. How have we achieved this? By working together, and supporting each other, all over the world. So, join our team and feel like you're part of something bigger.

Department Description
The ISS Data Engineering Chapter is an engineering group comprising three sub-chapters (Data Engineers, Data Platform, and Data Visualisation) that supports the ISS Department. Fidelity is embarking on several strategic programmes of work that will create a data platform to support the next evolutionary stage of our Investment Process. These programmes span asset classes and include Portfolio and Risk Management, Fundamental and Quantitative Research, and Trading.

Purpose of your role
This role sits within the ISS Data Platform Team, which is responsible for building and maintaining the platform that enables the ISS business to operate. The role suits a Lead Data Engineer capable of taking ownership of, and delivering, a subsection of the wider data platform.

Key Responsibilities
- Design, develop, and maintain scalable data pipelines and architectures to support data ingestion, integration, and analytics.
- Be accountable for technical delivery and take ownership of solutions.
- Lead a team of senior and junior developers, providing mentorship and guidance.
- Collaborate with enterprise architects, business analysts, and stakeholders to understand data requirements, validate designs, and communicate progress.
- Drive technical innovation within the department to increase code reusability, code quality, and developer productivity.
- Challenge the status quo by bringing the very latest data engineering practices and techniques.

Essential Skills and Experience
Core Technical Skills:
- Expert in leveraging cloud-based data platform capabilities (Snowflake, Databricks) to create an enterprise lakehouse.
- Advanced expertise with the AWS ecosystem and experience using a variety of core AWS data services, such as Lambda, EMR, MSK, Glue, and S3.
- Experience designing event-based or streaming data architectures using Kafka.
- Advanced expertise in Python and SQL; Java/Scala expertise is welcome, but enterprise experience with Python is required.
- Expert in designing, building, and using CI/CD pipelines to deploy infrastructure (Terraform) and pipelines with test automation.

Data Security & Performance Optimization:
- Experience implementing data access controls to meet regulatory requirements.
- Experience using both RDBMS (Oracle, Postgres, MSSQL) and NoSQL (DynamoDB, OpenSearch, Redis) offerings.
- Experience implementing CDC ingestion.
- Experience using orchestration tools (Airflow, Control-M, etc.).

Bonus Technical Skills:
- Strong experience in containerisation and deploying applications to Kubernetes.
- Strong experience in API development using Python-based frameworks such as FastAPI.

Key Soft Skills:
- Problem-Solving: leadership experience in problem-solving and technical decision-making.
- Communication: strong strategic communication and stakeholder engagement.
- Project Management: experienced in overseeing project lifecycles, working with Project Managers to manage resources.
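The event-based architectures referenced above follow a publish/subscribe pattern; Kafka provides it durably and at scale, but the shape of the pattern can be sketched in-process in a few lines of Python (topic and event names are illustrative):

```python
# Minimal in-process publish/subscribe sketch illustrating the event-driven
# pattern that Kafka provides durably and at scale (topics, producers,
# consumers). Topic names and event payloads are illustrative.
from collections import defaultdict
from typing import Any, Callable


class EventBus:
    def __init__(self) -> None:
        # topic name -> list of subscriber callbacks
        self._subs: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
        self._subs[topic].append(handler)

    def publish(self, topic: str, event: Any) -> None:
        # Deliver the event to every subscriber of this topic only.
        for handler in self._subs[topic]:
            handler(event)
```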

Posted 3 months ago

Apply

10 - 15 years

50 - 60 Lacs

Hyderabad

Remote


Principal Engineer (WFH)
Experience: 8-15 years
Salary: INR 50,00,000-60,00,000 / year
Preferred Notice Period: within 15 days
Shift: 10:00 AM to 7:00 PM IST
Opportunity Type: Remote
Placement Type: Permanent
(Note: This is a requirement for one of Uplers' partners.)

What do you need for this opportunity?
Must-have skills: data structures and algorithms, Go, microservices architecture, AWS, JavaScript, Python, system design, TypeScript.
Good-to-have skills: B2B, HRTech, SaaS.

Our hiring partner is looking for a Principal Engineer who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, we want to hear from you.

Role Overview
What You'll Be Doing:
- Provide hands-on technical leadership for the entire product engineering team operating in India, leading the analytics, integrations, application platform, and product lines of a complex enterprise SaaS product alongside your U.S. counterparts.
- Develop and enhance a complex enterprise performance management SaaS platform that drives critical decision-making in large enterprises.
- Architect, test, and implement solutions that elevate the product to an enterprise level.
- Identify technology, process, and skill gaps, and work with the U.S. and India heads of engineering to address them.
- Mentor a team of senior and staff engineers.
- Collaborate with a cross-functional team (engineering managers, product managers, designers, QA, and other stakeholders) to convert business requirements into product and technology outcomes.
- Introduce the team to new technologies and represent the company's technology via external events, open-source contributions, and blog posts.
- Participate in discussions with fully remote colleagues across multiple time zones (including Europe and the US).

What Will Help You Be Successful:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 8 to 15 years of experience in large-scale enterprise software development and architecture.
- A burning passion for technology, specifically technology in the service of business.
- A strong understanding of, and respect for, disciplines outside engineering: product, UX/UI design, sales, and marketing.
- Hands-on full-stack development experience with any high-level programming language; Python, TypeScript, or Go preferred.
- Expertise in AWS cloud, specifically EKS, RDS, MSK, and/or MWAA.
- Experience working with mid-sized teams of 50-100 engineers, and with distributed computing and distributed teams.
- Ability to define problems and resolve unknowns independently; highly self-directed.
- Ability to design large-scale software architectures, communicate them clearly, and guide teams in implementing them incrementally.
- Highly disciplined and self-motivated.

What We All Do
All employees share the responsibility of being aware of information security risks and adhering to information security policies and procedures, and are required to participate in information security awareness and training programs. All employees must handle data in accordance with data classification and handling guidelines, remain aware of the sensitivity of the data they interact with, follow appropriate security measures, and report information security incidents in accordance with information security policies and procedures.

Life at the Company
At the company, we prioritize our people. In that spirit, we've put together a benefits program to support our employees' health and wellness, which includes:
- Working closely with a cross-functional team of highly motivated and intelligent folks with a unique range of startup and enterprise experience.
- Balanced work/life with unlimited vacation.
- A vibrant company culture with frequent team-building events.
- Competitive salary with stock options.
- Company-sponsored health and personal accident insurance benefits.
- A remote-first work culture that allows you to work from anywhere in India and travel to meet as a team when possible.
- A one-time reimbursement for home-office setup.
- A monthly stipend for internet.

Engagement Type: Direct hire on the client's payroll
Job Type: Permanent
Location: Remote
Work Timings: 10 AM-7 PM IST (flexible shift timings)

How to apply for this opportunity:
- Register or log in on our portal.
- Click 'Apply', upload your resume, and fill in the required details.
- Click 'Apply Now' to submit your application.
- Get matched and crack a quick interview with our hiring partner.
- Land your global dream job and get your exciting career started!

About Our Hiring Partner: The company provides enterprise software to easily manage strategic plans, collaborative goals (OKRs), and ongoing performance conversations. The company's mission is to build solutions that help companies execute their strategic objectives through people engagement, performance enablement, and decision analytics. We are working with some of the world's leading brands, like Walmart and Intuit, to disrupt the business and talent management spaces with next-generation Strategic Execution and Performance Management solutions.

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help talent find and apply for relevant opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement, and you will be assigned a dedicated Talent Success Coach. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 3 months ago

Apply