
1629 Cloud Platforms Jobs - Page 40

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

2.0 - 5.0 years

7 - 11 Lacs

Bengaluru

Work from Office

HyperVerge Technologies Pvt. Ltd is looking for a Senior Fullstack Engineer to join our dynamic team and embark on a rewarding career journey.

Responsibilities: Designing, developing, and deploying scalable and robust web applications, considering both the front-end and back-end aspects of the system. Collaborating with cross-functional teams, including product managers, designers, and other developers, to gather requirements and develop technical solutions. Building responsive and user-friendly interfaces using HTML, CSS, and JavaScript frameworks such as React, Angular, or Vue.js. Developing and maintaining server-side applications and APIs using programming languages like Python, Java, Ruby, or Node.js. Implementing and integrating databases and data storage solutions, ensuring efficient data retrieval and manipulation using technologies like SQL or NoSQL. Performing system testing, debugging, and troubleshooting to ensure the quality, performance, and security of the applications. Optimizing web applications for maximum speed and scalability, considering factors such as caching, code optimization, and network latency. Staying up to date with industry trends, best practices, and emerging technologies, and recommending their adoption to improve the development process. Participating in code reviews and providing constructive feedback to maintain code quality and ensure adherence to coding standards. Collaborating with DevOps teams to deploy and maintain applications in production environments, ensuring high availability and scalability.

Requirements: Bachelor's degree in computer science, software engineering, or a related field; equivalent work experience may also be considered. Strong proficiency in front-end development technologies such as HTML, CSS, and JavaScript; experience with modern JavaScript frameworks (React, Angular, Vue.js) is preferred. Proficiency in at least one back-end programming language (e.g., Python, Java, Ruby, Node.js) and associated frameworks. Solid understanding of web development principles, including RESTful APIs, HTTP protocols, and server-side rendering. Experience with database systems, both SQL and NoSQL, and understanding of data modeling and query optimization. Familiarity with version control systems (e.g., Git) and collaborative development workflows (e.g., Agile, Scrum). Strong problem-solving and analytical skills, with the ability to identify and resolve technical challenges. Excellent communication and teamwork skills to collaborate effectively with cross-functional teams. Demonstrated ability to learn new technologies quickly and adapt to evolving development practices. Experience with cloud platforms (e.g., AWS, Azure, Google Cloud) and containerization technologies (e.g., Docker, Kubernetes) is a plus.
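For a concrete flavor of the back-end API work this posting describes (it names Python and Node.js among acceptable stacks), here is a minimal REST endpoint sketch using FastAPI in Python. The `Item` model, routes, and in-memory store are invented for illustration and are not part of the posting.

```python
# Minimal REST API sketch (illustrative only; endpoint and model names are made up).
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    name: str
    price: float

# In-memory store standing in for a real SQL/NoSQL database.
_items: dict[int, Item] = {}

@app.post("/items/{item_id}")
def create_item(item_id: int, item: Item) -> Item:
    if item_id in _items:
        raise HTTPException(status_code=409, detail="Item already exists")
    _items[item_id] = item
    return item

@app.get("/items/{item_id}")
def read_item(item_id: int) -> Item:
    if item_id not in _items:
        raise HTTPException(status_code=404, detail="Item not found")
    return _items[item_id]
```

A service like this would typically be run with `uvicorn module_name:app` and backed by an actual database rather than a dict.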

Posted 1 month ago

Apply

9.0 - 12.0 years

1 - 2 Lacs

Hyderabad

Remote

Job Title: Data Architect. Location: Remote. Employment Type: Full-Time. Reports to: Lead Data Strategist.

About Client / Project: Client is a specialist data strategy and AI consultancy that empowers businesses to unlock tangible value from their data assets. We specialize in developing comprehensive data strategies tailored to address core business and operational challenges. By combining strategic advisory with hands-on implementation, we ensure data becomes a true driver of business growth, operational efficiency, and competitive advantage for our clients. As a solutions-focused and forward-thinking consultancy, we help organizations transform their data capabilities using modern technology, reduce costs, and accelerate business growth by aligning every initiative directly with our clients' core business objectives.

Role Overview: We are seeking a highly experienced Data Architect to lead the design and implementation of scalable data architectures for global clients across industries. You will define enterprise-grade data platforms leveraging cloud-native technologies and modern data frameworks.

Key Responsibilities: Design and implement cloud-based data architectures (GCP, AWS, Azure, Snowflake, Redshift, Databricks, or Hadoop). Develop conceptual, logical, and physical data models. Define data flows, ETL/ELT pipelines, and ingestion strategies. Design and maintain data catalogs, metadata, and domain structures. Establish data architecture standards, reference models, and blueprints. Oversee data lineage, traceability, and audit readiness. Guide integration of AI/ML pipelines and analytics solutions. Ensure data privacy, protection, and compliance (e.g., GDPR, HIPAA). Collaborate closely with Engineers, Analysts, and Strategists.

Required Skills & Qualifications: 8+ years of experience in data architecture or enterprise data platform roles. Deep experience with at least two major cloud platforms (AWS, Azure, GCP). Proven hands-on work with modern data platforms: Snowflake, Databricks, Redshift, Hadoop. Strong understanding of data warehousing, data lakes, and lakehouse architecture. Advanced proficiency in SQL, Python, Spark, and/or Scala. Experience with data cataloging and metadata tools (e.g., Informatica, Collibra, Alation). Knowledge of data governance frameworks and regulatory compliance. Strong documentation, stakeholder communication, and architectural planning skills. Bachelor's degree in Computer Science, Engineering, or a related field (Master's preferred).
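As a rough illustration of the ETL/ELT pipeline work listed in the responsibilities above, here is a toy PySpark batch step (Spark and Databricks are among the platforms the posting names). The bucket paths, schema, and column names are placeholders, not anything from the posting.

```python
# Toy ETL step: read raw events, clean them, and write a partitioned curated table.
# Paths, schema, and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("toy-elt-step").getOrCreate()

raw = spark.read.json("s3a://example-bucket/raw/events/")           # ingestion layer
cleaned = (
    raw.dropDuplicates(["event_id"])                                 # basic data quality rule
       .withColumn("event_date", F.to_date("event_timestamp"))       # derive a partition column
       .filter(F.col("event_type").isNotNull())
)
cleaned.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3a://example-bucket/curated/events/"                           # curated/lakehouse layer
)
```

In practice a step like this would be orchestrated by a scheduler and land in a governed lakehouse table rather than raw Parquet paths.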

Posted 1 month ago

Apply

4.0 - 8.0 years

6 - 10 Lacs

Bengaluru

Work from Office

Do you want to help solve the world's most pressing challenges? Feeding the world's growing population and slowing climate change are two of the world's greatest challenges, and AGCO is a part of the solution! Join us to make your contribution. AGCO is looking to hire candidates for the position of Senior Manager, AI & Data Systems Architecture. We are seeking an experienced and innovative Senior Manager, AI & Data Systems Architecture to lead the design, creation, and evolution of system architectures for AI, analytics, and data systems within our organization. The ideal candidate will have extensive experience delivering scalable, high-performance data and AI architectures across cloud platforms such as AWS, Google Cloud Platform, and Databricks, with a proven ability to align technology solutions with business goals. This individual will collaborate with cross-functional teams, including data engineers, data scientists, and other IT professionals, to create architectures that support cutting-edge AI and data initiatives, driving efficiency, scalability, and innovation.

Your Impact: Architecture Leadership: Lead the end-to-end architecture for AI and data systems, ensuring cost-effective scalability, performance, and security across cloud and on-premises environments; the goal is to build and support a modern data stack. AI & Data Systems: Design, implement, and manage data infrastructure and AI platforms, including but not limited to AWS, Azure, Google Cloud Platform, Databricks, and other key data tools; lead the data model approach for all data products and solutions. Cloud Expertise: Champion cloud adoption strategies, optimizing data pipelines, analytics workloads, AI/ML model deployment, endpoint creation, and app integration. System Evolution: Drive the continuous improvement and evolution of data and AI architectures to meet emerging business needs, technological advancements, and industry trends. Collaboration & Leadership: Work closely with delivery teams, data engineers, data scientists, software engineers, and IT operations to implement comprehensive data architectures that support AI and analytics initiatives focused on continuous improvement. Strategic Vision: Partner with business and technology stakeholders to understand long-term goals, translating them into architectural frameworks and roadmaps that drive business value. Governance & Best Practices: Ensure best practices in data governance, security, and compliance, overseeing the implementation of standards across AI and data systems. Performance Optimization: Identify opportunities to optimize performance, cost-efficiency, and operational effectiveness of AI and data systems, including ETL, ELT, and data pipeline creation and evolution, and optimization of AI resource models.

Functional Knowledge: Experience: 10+ years of experience in data architecture, AI systems, or cloud infrastructure, with at least 3-5 years in a leadership role, and proven experience driving solutions from ideation to delivery and support. Cloud Expertise: Deep hands-on experience with cloud platforms like AWS, Google Cloud Platform (GCP), and Databricks; familiarity with other data and AI platforms is a plus. CRM Expertise: Hands-on experience with key CRM systems like Salesforce and the AI systems inside those solutions (e.g., Einstein). AI & Analytics Systems: Proven experience designing architectures for AI, machine learning, analytics, and large-scale data processing systems. Technical Knowledge: Expertise in data architecture, including data lakes, data warehouses, real-time data streaming, and batch processing frameworks. Cross-Platform Knowledge: Solid understanding of containerization (Docker, Kubernetes), infrastructure as code (Terraform, CloudFormation), and big data ecosystems (Spark, Hadoop). Experience applying Agile methodologies, including Scrum, Kanban, or SAFe. Experience with top reporting solutions, preferably including Tableau, which is one of our cornerstone reporting solutions. Leadership: Strong leadership and communication skills, with the ability to drive architecture initiatives in a collaborative and fast-paced environment; excellent problem-solving skills and a proactive mindset. Education: Bachelor's degree in Computer Science, Data Science, or a related field; a Master's degree or relevant certifications (e.g., AWS Certified Solutions Architect) is preferred.

Business Expertise: Experience in industries such as manufacturing, agriculture, or supply chain, particularly in AI and data use cases. Familiarity with regulatory requirements related to data governance and security. Experience with emerging technologies like edge computing, IoT, and AI/ML automation tools.

Your Experience and Qualifications: Excellent communication and interpersonal skills, capable of interacting with multiple levels of IT and business management/leadership. Hands-on experience with SAP HANA, SAP Data Services, or similar data storage, warehousing, and/or ETL solutions. 10+ years of progressive IT experience. Experience creating data models, querying data, and mapping business and technical processes. Successfully influences diverse groups and teams in a complex, ambiguous, and rapidly changing environment to deliver value-added solutions. Builds effective working relationships with the business to ensure business requirements are accurately captured, agreed, and accepted. Adaptable to new technologies and practices and acts as a change agent within teams.

Your Benefits: GLOBAL DIVERSITY: Diversity means many things to us: different brands, cultures, nationalities, genders, generations, even variety in our roles. You make us unique! ENTERPRISING SPIRIT: Every role adds value. We're committed to helping you develop and grow to realize your potential. POSITIVE IMPACT: Make it personal and help us feed the world. INNOVATIVE TECHNOLOGIES: You can combine your love for technology with manufacturing excellence and work alongside teams of people worldwide who share your enthusiasm. MAKE THE MOST OF YOU: Benefits include health care and wellness plans and flexible and virtual work options.

Your Workplace: AGCO is Great Place to Work Certified and has been recognized for delivering an exceptional employee experience and a positive workplace culture. We value inclusion and recognize the innovation a diverse workforce delivers to our farmers. Through our recruiting, we are committed to building a team that includes a variety of experiences, backgrounds, cultures, and perspectives. Join us as we bring agriculture into the future and apply now! Please note that this job posting is not designed to cover or contain a comprehensive listing of all required activities, duties, responsibilities, or benefits and may change at any time with or without notice. AGCO is proud to be an Equal Opportunity Employer.

Posted 1 month ago

Apply

5.0 - 8.0 years

14 - 19 Lacs

Noida

Work from Office

Site Reliability Engineer: Site Reliability Engineers at UKG are critical team members who have a breadth of knowledge encompassing all aspects of service delivery. They develop software solutions to enhance, harden, and support our service delivery processes. This can include building and managing CI/CD deployment pipelines, automated testing, capacity planning, performance analysis, monitoring, alerting, chaos engineering, and auto-remediation. Site Reliability Engineers must be passionate about learning and evolving with current technology trends. They strive to innovate and are relentless in pursuing a flawless customer experience. They have an automate-everything mindset, helping us bring value to our customers by deploying services with incredible speed, consistency, and availability.

Job Responsibilities: Engage in and improve the lifecycle of services from conception to EOL, including system design consulting and capacity planning. Define and implement standards and best practices related to system architecture, service delivery, metrics, and the automation of operational tasks. Support services, product, and engineering teams by providing common tooling and frameworks to deliver increased availability and improved incident response. Improve system performance, application delivery, and efficiency through automation, process refinement, postmortem reviews, and in-depth configuration analysis. Collaborate closely with engineering professionals within the organization to deliver reliable services. Increase operational efficiency, effectiveness, and quality of services by treating operational challenges as a software engineering problem (reduce toil). Guide junior team members and serve as a champion for Site Reliability Engineering. Actively participate in incident response, including on-call responsibilities.

Required Qualifications: Engineering degree, a related technical discipline, or equivalent work experience. Experience coding in higher-level languages (e.g., Python, JavaScript, C++, or Java). Knowledge of cloud-based applications and containerization technologies. Demonstrated understanding of best practices in metric generation and collection, log aggregation pipelines, time-series databases, and distributed tracing. Demonstrable fundamentals in two of the following: Computer Science, cloud architecture, security, or network design. Experience, Education, Certification, License and Training: Must have at least 3 years of hands-on experience working in Engineering or Cloud. Minimum 2 years' experience with public cloud platforms (e.g., GCP, AWS, Azure). Minimum 2 years' experience in configuration and maintenance of applications and/or systems infrastructure for a large-scale, customer-facing company.
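To make the metric-generation and monitoring responsibilities above concrete, here is a minimal, hedged sketch of a health-check exporter using the `prometheus_client` Python library. The metric names and the simulated dependency check are invented for illustration.

```python
# Minimal service-health metric exporter (illustrative; metric names are made up).
import random
import time

from prometheus_client import Counter, Gauge, start_http_server

UP = Gauge("example_service_up", "1 if the dependency check passed, else 0")
CHECKS = Counter("example_checks_total", "Total number of health checks performed")

def check_dependency() -> bool:
    # Stand-in for a real probe (HTTP ping, DB query, etc.).
    return random.random() > 0.05

if __name__ == "__main__":
    start_http_server(9100)          # exposes /metrics for Prometheus to scrape
    while True:
        CHECKS.inc()
        UP.set(1 if check_dependency() else 0)
        time.sleep(15)               # roughly scrape-aligned interval
```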

Posted 1 month ago

Apply

3.0 - 7.0 years

9 - 14 Lacs

Bengaluru

Work from Office

Job Summary: We are seeking a skilled Escalation Engineer with expertise in NetApp ONTAP, data center operations, and storage concepts. The ideal candidate will possess a robust technical background in data storage, coupled with extensive experience in providing technical support and leading teams in resolving complex issues. This role requires a deep understanding of product sustainability, engineering cycles, and a commitment to delivering exceptional customer service.

Job Requirements: Serve as a subject matter expert in NetApp ONTAP and related storage technologies. Lead and coordinate resolution efforts for escalated technical issues, collaborating closely with cross-functional teams. Provide advanced troubleshooting and problem-solving expertise to address complex customer issues. Conduct in-depth analysis of customer environments to identify root causes and develop effective solutions. Actively participate in product sustainability initiatives, including product lifecycle management and engineering cycles. Mentor and guide junior team members, fostering a culture of continuous learning and development. Communicate effectively with customers, internal stakeholders, and management, both verbally and in writing. Document technical solutions, best practices, and knowledge base articles to enhance team efficiency and customer satisfaction.

Education & Requirements: Bachelor's degree in Computer Science, Information Technology, or a related field. Extensive experience (10+ years) in technical support as a Sr. Engineer/Principal Engineer handling escalations, preferably in a storage or data center environment. In-depth knowledge of NetApp ONTAP and storage concepts such as SAN, NAS, RAID, and replication. Strong understanding of data center architectures, virtualization technologies, and cloud platforms. Proven track record of leading teams in resolving technical escalations and driving issue resolution. Excellent collaboration skills with the ability to work effectively in a cross-functional team environment. Exceptional verbal and written communication skills, with the ability to articulate complex technical concepts to both technical and non-technical audiences. Demonstrated ability to prioritize and manage multiple tasks in a fast-paced environment. Relevant certifications such as NetApp Certified Implementation Engineer (NCIE) or equivalent are a plus.

At NetApp, we embrace a hybrid working environment designed to strengthen connection, collaboration, and culture for all employees. This means that most roles will have some level of in-office and/or in-person expectations, which will be shared during the recruitment process.

Equal Opportunity Employer: NetApp is firmly committed to Equal Employment Opportunity (EEO) and to compliance with all laws that prohibit employment discrimination based on age, race, color, gender, sexual orientation, gender identity, national origin, religion, disability or genetic information, pregnancy, and any protected classification.

Why NetApp? We are all about helping customers turn challenges into business opportunity. It starts with bringing new thinking to age-old problems, like how to use data most effectively to run better but also to innovate. We tailor our approach to the customer's unique needs with a combination of fresh thinking and proven approaches. We enable a healthy work-life balance. Our volunteer time off program is best in class, offering employees 40 hours of paid time off each year to volunteer with their favourite organizations. We provide comprehensive benefits, including health care, life and accident plans, emotional support resources for you and your family, legal services, and financial savings programs to help you plan for your future. We support professional and personal growth through educational assistance and provide access to various discounts and perks to enhance your overall quality of life. If you want to help us build knowledge and solve big problems, let's talk.

Posted 1 month ago

Apply

5.0 - 10.0 years

7 - 17 Lacs

Hyderabad

Work from Office

We are looking for a skilled DevOps Engineer with 5-10 years of experience to join our team at IDESLABS PRIVATE LIMITED. The ideal candidate will have a strong background in recruitment and staffing, with excellent technical skills.

Roles and Responsibilities: Design and implement scalable infrastructure architectures to support business growth. Collaborate with cross-functional teams to identify and prioritize project requirements. Develop and maintain automated testing frameworks to ensure high-quality software delivery. Implement continuous integration and continuous deployment (CI/CD) pipelines using tools like Jenkins or GitLab CI/CD. Troubleshoot and resolve complex technical issues efficiently. Ensure compliance with industry standards and best practices for security and scalability.

Job Requirements: Strong understanding of DevOps principles and practices, including automation, monitoring, and feedback loops. Experience with cloud platforms such as AWS or Azure is required. Proficiency in programming languages like Python or Java is necessary. Excellent problem-solving skills and the ability to work under pressure. Strong communication and collaboration skills are essential. Ability to adapt to changing priorities and deadlines in a fast-paced environment.

Posted 1 month ago

Apply

10.0 - 15.0 years

15 - 25 Lacs

Kolkata, Hyderabad, Bengaluru

Hybrid

Experience: 10+ years.

Job Description / Role Overview: We are seeking an experienced AWS Data & Analytics Architect with a strong background in delivery and excellent communication skills. The ideal candidate will have over 10 years of experience and a proven track record in managing teams and client relationships. You will be responsible for leading data modernization and transformation projects using AWS services.

Key Responsibilities: Lead and architect data modernization/transformation projects using AWS services. Manage and mentor a team of data engineers and analysts. Build and maintain strong client relationships, ensuring successful project delivery. Design and implement scalable data architectures and solutions. Oversee the migration of large datasets to AWS, ensuring data integrity and security. Collaborate with stakeholders to understand business requirements and translate them into technical solutions. Ensure best practices in data management and governance are followed.

Required Skills and Experience: 10+ years of experience in data architecture and analytics. Hands-on experience with AWS services such as Redshift, S3, Glue, Lambda, RDS, and others. Proven experience in delivering 1-2 large data migration/modernization projects using AWS. Strong leadership and team management skills. Excellent communication and interpersonal skills. Deep understanding of data modeling, ETL processes, and data warehousing. Experience with data governance and security best practices. Ability to work in a fast-paced, dynamic environment.

Preferred Qualifications: AWS Certified Solutions Architect Professional or AWS Certified Big Data Specialty. Experience with other cloud platforms (e.g., Azure, GCP) is a plus. Familiarity with machine learning and AI technologies.
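As a small, hedged illustration of the AWS services this role names (Glue, S3), here is a Python sketch using boto3 that starts a Glue job and records the run id to S3. The job name and bucket are placeholders, not real resources.

```python
# Kick off a Glue ETL job and archive its run id to S3 (all names are placeholders).
import json

import boto3

glue = boto3.client("glue")
s3 = boto3.client("s3")

def start_nightly_load(job_name: str = "example-curation-job") -> str:
    run = glue.start_job_run(JobName=job_name)
    run_id = run["JobRunId"]
    s3.put_object(
        Bucket="example-ops-bucket",
        Key=f"glue-runs/{job_name}/{run_id}.json",
        Body=json.dumps({"job": job_name, "run_id": run_id}).encode("utf-8"),
    )
    return run_id

if __name__ == "__main__":
    print("Started Glue run:", start_nightly_load())
```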

Posted 1 month ago

Apply

3.0 - 5.0 years

9 - 13 Lacs

Indore, Pune, Chennai

Work from Office

What will your role look like? Perform both manual and automated testing of software applications. Write and maintain test scripts using Selenium with Java. Troubleshoot, debug, and resolve software defects, ensuring high-quality software delivery. Participate in test case design, reviewing requirements, and ensuring comprehensive test coverage. Collaborate with development and product teams to understand the product features and ensure quality. Continuously improve testing practices and processes. Work with cloud platforms such as AWS, Azure, or GCP for testing and deployment tasks. Utilize Terraform for automating infrastructure as code. Ensure the application meets all defined functional and performance standards before release. Stay updated with the latest industry trends, tools, and technologies.

Why you will love this role: Besides a competitive package, you get an open workspace full of smart and pragmatic team members, with ever-growing opportunities for professional and personal growth. Be a part of a learning culture where teamwork and collaboration are encouraged, diversity is valued, and excellence, compassion, openness, and ownership are rewarded.

We would like you to bring along: Experience in both manual and automation testing. Hands-on experience with Java. Strong proficiency in Selenium with Java. Excellent debugging skills. Experience working with at least one cloud platform (AWS/Azure/GCP). In-depth knowledge of the Software Testing Life Cycle (STLC) processes. Practical experience with Terraform. Familiarity with the Storage domain is a plus.

Location: Chennai, Indore, Pune, Vadodara
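For a concrete sense of the automated UI checks described above: the posting asks for Selenium with Java, but the same idea fits in a few lines with Selenium's Python bindings, shown here only as a compact sketch. The URL, locators, and credentials are placeholders.

```python
# Tiny browser check (Selenium Python bindings; the posting itself asks for Java).
# The URL and locators below are placeholders, not a real application under test.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()          # assumes a local Chrome/driver setup
try:
    driver.get("https://example.com/login")
    driver.find_element(By.NAME, "username").send_keys("qa_user")
    driver.find_element(By.NAME, "password").send_keys("not-a-real-password")
    driver.find_element(By.CSS_SELECTOR, "button[type='submit']").click()
    assert "Dashboard" in driver.title, "login did not land on the dashboard"
finally:
    driver.quit()
```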

Posted 1 month ago

Apply

7.0 - 10.0 years

7 - 17 Lacs

Bengaluru

Work from Office

About this role: Wells Fargo is seeking a Principal Engineer. In this role, you will: Act as an advisor to leadership to develop or influence applications, network, information security, database, operating systems, or web technologies for highly complex business and technical needs across multiple groups. Lead the strategy and resolution of highly complex and unique challenges requiring in-depth evaluation across multiple areas or the enterprise, delivering solutions that are long-term, large-scale, and require vision, creativity, innovation, and advanced analytical and inductive thinking. Translate advanced technology experience, an in-depth knowledge of the organization's tactical and strategic business objectives, the enterprise technological environment, the organization structure, and strategic technological opportunities and requirements into technical engineering solutions. Provide vision, direction, and expertise to leadership on implementing innovative and significant business solutions. Maintain knowledge of industry best practices and new technologies and recommend innovations that enhance operations or provide a competitive advantage to the organization. Strategically engage with all levels of professionals and managers across the enterprise and serve as an expert advisor to leadership.

Required Qualifications: 7+ years of Engineering experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education.

Job Expectations: Lead the design and architecture of scalable, secure, and high-performance systems for fraud and claims technology. Drive end-to-end project delivery, ensuring accountability and clarity throughout the lifecycle. Collaborate with cross-functional teams to define technical strategy and roadmap. Mentor and guide engineers, fostering a culture of technical excellence and innovation. Ensure best practices in coding, system design, and platform engineering. Communicate complex technical concepts clearly to stakeholders at all levels.

Desired Skills: Deep expertise in cloud platforms, virtualization (VM), OpenShift Container Platform (OCP), and system design/architecture. Strong coding skills in Java, Python, and C, and familiarity with other programming languages. Demonstrated leadership capabilities and experience driving projects end-to-end. Exceptional communication skills and clarity of thought. Experience with Agile methodologies and product operating models. Experience with Pega or similar BPM tools. Experience leading small engineering teams.

Posted 1 month ago

Apply

4.0 - 7.0 years

10 - 15 Lacs

Bengaluru

Work from Office

At Allstate, great things happen when our people work together to protect families and their belongings from life's uncertainties. And for more than 90 years our innovative drive has kept us a step ahead of our customers' evolving needs: from advocating for seat belts, air bags, and graduated driving laws, to being an industry leader in pricing sophistication, telematics, and, more recently, device and identity protection.

The GenAI Cloud Engineer is a full stack engineer who builds and operates the cloud application development and hosting platforms for Allstate. This role will have the primary accountability of owning, developing, implementing, and operating GenAI Cloud platforms. This role will also encompass developing, building, administering, and deploying self-service tools that enable Allstate developers to build, deploy, and operate artificial intelligence applications to solve our most complex business challenges. As a GenAI Engineer, they will be part of an engineering team primarily working in a paired programming model, collaborating with different team members. They will split time evenly between executing operational tasks to maintain the platform and service customer requests, and engineering new solutions to automate the build and operational tasks. They will serve as pair anchors, being advocates of paired programming, test-driven development, infrastructure engineering, and continuous delivery on the team.

Key Responsibilities: Serves as an anchor to enable the Digital product team on the GenAI analytics platform; this includes delivering product and solution briefings, creating demos, executing proof-of-concept projects, and collaborating directly with product management to prioritize solutions that drive consumer adoption of Azure AI Foundry, AI Services, OpenAI, Agentic AI, AWS SageMaker, and AWS Bedrock. Writes and builds continuous delivery pipelines to manage and automate the lifecycle of the different platform components. Builds, manages, and operates the infrastructure-as-a-service layer (hosted and cloud-based platforms) that supports the different platform services. Leads post-mortem activities to identify systemic solutions to improve the overall operations of the platform, and recommends and improves technology-related policies and procedures. Identifies and troubleshoots any availability and performance issues at multiple layers of deployment, from hardware, operating environment, network, and application. Evaluates performance trends and expected changes in demand and capacity, and establishes the appropriate scalability plans. Integrates different components and develops new services with a focus on open source to allow minimal-friction developer interaction with the platform and application services. Builds, manages, and operates the infrastructure and configuration of the platform infrastructure and application environments with a focus on automation and infrastructure as code. Maintains and enhances the existing Terraform codebase.

Education: 4-year Bachelor's Degree (Preferred). Experience: 3 or more years of experience (Preferred). Knowledge of prompt engineering, agentic AI, and the AI open-source ecosystem. Experience and skills with Azure application and AI services. Experience with AWS Bedrock (preferred). Knowledge of consuming large language model (LLM) and foundational model APIs. Experience in Terraform development.

Skills: Cloud Platforms: Azure, AWS. Generative AI. Programming Languages: Python. Infrastructure as Code (IaC): Terraform (including Tofu/Env0 tools).

Shift Time: Shift B (India). Recruiter Info: Shriya Kumari (skuow@allstate.com).

About Allstate: The Allstate Corporation is one of the largest publicly held insurance providers in the United States. Ranked No. 84 in the 2023 Fortune 500 list of the largest United States corporations by total revenue, The Allstate Corporation owns and operates 18 companies in the United States, Canada, Northern Ireland, and India. Allstate India Private Limited, also known as Allstate India, is a subsidiary of The Allstate Corporation. The India talent center was set up in 2012 and operates under the corporation's Good Hands promise. As it innovates operations and technology, Allstate India has evolved beyond its technology functions to be the critical strategic business services arm of the corporation. With offices in Bengaluru and Pune, the company offers expertise to the parent organization's business areas, including technology and innovation, accounting and imaging services, policy administration, transformation solution design and support services, transformation of property liability service design, global operations and integration, and training and transition. Learn more about Allstate India.
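A minimal sketch of the LLM API consumption this posting mentions, using the OpenAI Python client as a stand-in (the posting centers on Azure AI Foundry/OpenAI services and AWS Bedrock; the model name and prompts below are placeholders, not Allstate's configuration).

```python
# Minimal LLM API call (illustrative; model name and prompts are placeholders).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a concise claims-support assistant."},
        {"role": "user", "content": "Summarize the next steps after filing an auto claim."},
    ],
)
print(response.choices[0].message.content)
```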

Posted 1 month ago

Apply

10.0 - 15.0 years

7 - 12 Lacs

Pune

Work from Office

In this role, as a subject matter expert, you will be the key player in our transformation and improvement programs. You will support us in connecting the dots between the digital world and core finance processes. This will require a thorough understanding of business processes, best practices, the latest developments, and the benefits new tools can bring to VI.

Your department and scope of activities: The scope of your role is global. Hierarchically, you will be part of the Global Transformation Office based in Veghel, the Netherlands, and will report into the Global Process Owner Record-to-Report, who is leading transformation and change. We foster a flexible yet critical approach, emphasizing an end-to-end mindset, deep process knowledge, and a strong understanding of the business. We are expected to be highly skilled professionals with a deep understanding of finance, business, and technology. The role requires a combination of strategic thinking, analytical skills, and technical knowledge to design and implement solutions that support the organization's financial objectives.

Your role & responsibilities: Process Focus: Advisor to a broad range of stakeholders both in and outside finance. Process Improvement: Drive standardization and initiate improvements within Record to Report, using end-to-end expertise to enhance processes and tools. Cross-Functional Guidance: Provide expertise on Record to Report processes and offer guidance to related areas like Source to Pay, Lead to Cash, and Hire to Retire. KPI Management: Monitor and drive performance based on defined KPIs. Technology Focus: Finance Architecture: Contribute to developing and managing finance architecture, including processes, systems, and data, to align with business goals. Solution Implementation: Collaborate with IT and cross-functional teams to deliver technically sound, sustainable financial solutions. Change & Risk Management: Stay updated on new technological developments, manage architecture changes, and advise on priorities and risks. Continuous Improvement Focus: Identify, evaluate, and drive opportunities for process optimization. General Global Alignment: Collaborate with global teams, including peers in the US and India, on Record to Report transformation projects.

Qualifications: Education: Master's degree in Finance, Accounting, Business, or a related field (MBA or relevant certifications preferred). Experience: At least 10 years of working experience in Record to Report; experience with financial systems and processes, especially with modern ERP/EPM solutions (e.g., Oracle Cloud EPM/ERP, SAP); proven success in leading or participating in transformational finance projects, ideally in a global, multi-entity organization; experienced in analyzing, redesigning, and implementing finance processes using best practices, with exposure to modern digital tools like cloud platforms, AI, RPA, and Power Automate being a plus. Skills: Strong analytical and problem-solving skills; exceptional communication skills, capable of explaining complex concepts to both technical and non-technical stakeholders; excellent interpersonal skills, confident in building lasting business relationships; a result-oriented mindset: independent, pro-active, innovative, and taking ownership; proficient in implementing continuous improvement methodologies such as PDCA, Kaizen, and Lean principles to drive operational excellence; fluent in English (written and verbal).

Posted 1 month ago

Apply

7.0 - 10.0 years

10 - 15 Lacs

Bengaluru

Work from Office

Job Title: Senior Engineer | Java and Big Data
Company Name: Impetus Technologies

Job Description: Impetus Technologies is seeking a skilled Senior Engineer with expertise in Java and Big Data technologies. As a Senior Engineer, you will be responsible for designing, developing, and deploying scalable data processing applications using Java and Big Data frameworks. Your role will involve collaborating with cross-functional teams to gather requirements, developing high-quality code, and optimizing data processing workflows. You will also mentor junior engineers and contribute to architectural decisions to enhance the performance and scalability of our systems.

Key Responsibilities:
- Design, develop, and maintain high-performance applications using Java and Big Data technologies.
- Implement data ingestion and processing workflows utilizing frameworks like Hadoop and Spark.
- Collaborate with the data architecture team to define data models and ensure efficient data storage and retrieval.
- Optimize existing applications for performance, scalability, and reliability.
- Mentor and guide junior engineers, providing technical leadership and fostering a culture of continuous improvement.
- Participate in code reviews and ensure best practices for coding, testing, and documentation are followed.
- Stay current with technology trends in Java and Big Data, and evaluate new tools and methodologies to enhance system capabilities.

Skills and Tools Required:
- Strong proficiency in the Java programming language with experience in building complex applications.
- Hands-on experience with Big Data technologies such as Apache Hadoop, Apache Spark, and Apache Kafka.
- Understanding of distributed computing concepts and technologies.
- Experience with data processing frameworks and libraries, including MapReduce and Spark SQL.
- Familiarity with database systems such as HDFS, NoSQL databases (like Cassandra or MongoDB), and SQL databases.
- Strong problem-solving skills and the ability to troubleshoot complex issues.
- Knowledge of version control systems like Git, and familiarity with CI/CD pipelines.
- Excellent communication and teamwork skills to collaborate effectively with peers and stakeholders.
- A bachelor's or master's degree in Computer Science, Engineering, or a related field is preferred.

Roles and Responsibilities

About the Role:
- You will be responsible for designing and developing scalable Java applications to handle Big Data processing.
- Your role will involve collaborating with cross-functional teams to implement innovative solutions that align with business objectives.
- You will also play a key role in ensuring code quality and performance through best practices and testing methodologies.

About the Team:
- You will work with a diverse team of skilled engineers, data scientists, and product managers who are passionate about technology and innovation.
- The team fosters a collaborative environment where knowledge sharing and continuous learning are encouraged.
- Regular brainstorming sessions and technical workshops will provide opportunities to enhance your skills and stay updated with industry trends.

You are Responsible for:
- Developing and maintaining high-performance Java applications that process large volumes of data efficiently.
- Implementing data integration and processing frameworks using Big Data technologies such as Hadoop and Spark.
- Troubleshooting and optimizing existing systems to improve performance and scalability.

To succeed in this role, you should have the following:
- Strong proficiency in Java and experience with Big Data technologies and frameworks.
- Solid understanding of data structures, algorithms, and software design principles.
- Excellent problem-solving skills and the ability to work independently as well as part of a team.
- Familiarity with cloud platforms and distributed computing concepts is a plus.
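As a hedged illustration of the Spark-plus-Kafka processing named above: the posting centers on Java, but a Structured Streaming job reads compactly in PySpark, shown here only as a sketch. The broker address, topic, and JSON field are placeholders.

```python
# Structured Streaming sketch: count events per type from a Kafka topic.
# (The posting asks for Java; PySpark is used here only to keep the sketch short.
#  Requires the spark-sql-kafka connector; broker, topic, and field names are placeholders.)
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-event-counts").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "example-events")
    .load()
)

counts = (
    events.select(F.get_json_object(F.col("value").cast("string"), "$.type").alias("type"))
    .groupBy("type")
    .count()
)

query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()
```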

Posted 1 month ago

Apply

10.0 - 15.0 years

7 - 11 Lacs

Noida

Work from Office

R1 RCM India is proud to be recognized amongst India's Top 50 Best Companies to Work For 2023 by the Great Place To Work Institute. We are committed to transforming the healthcare industry with our innovative revenue cycle management services. Our goal is to make healthcare simpler and enable efficiency for healthcare systems, hospitals, and physician practices. With over 30,000 employees globally, we are about 14,000 strong in India, with offices in Delhi NCR, Hyderabad, Bangalore, and Chennai. Our inclusive culture ensures that every employee feels valued, respected, and appreciated with a robust set of employee benefits and engagement activities.

Description: We are seeking a highly skilled and motivated Data Cloud Architect to join our Product and Technology team. As a Data Cloud Architect, you will play a key role in designing and implementing our cloud-based data architecture, ensuring scalability, reliability, and optimal performance for our data-intensive applications. Your expertise in cloud technologies, data architecture, and data engineering will drive the success of our data initiatives.

Responsibilities: Collaborate with cross-functional teams, including data engineers, data leads, product owners, and stakeholders, to understand business requirements and data needs. Design and implement end-to-end data solutions on cloud platforms, ensuring high availability, scalability, and security. Architect delta lakes, data lakes, data warehouses, and streaming data solutions in the cloud. Evaluate and select appropriate cloud services and technologies to support data storage, processing, and analytics. Develop and maintain cloud-based data architecture patterns and best practices. Design and optimize data pipelines, ETL processes, and data integration workflows. Implement data security and privacy measures in compliance with industry standards. Collaborate with DevOps teams to deploy and manage data-related infrastructure on the cloud. Stay up to date with emerging cloud technologies and trends to ensure the organization remains at the forefront of data capabilities. Provide technical leadership and mentorship to data engineering teams.

Qualifications: Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience). 10 years of experience as a Data Architect, Cloud Architect, or in a similar role. Expertise in cloud platforms such as Azure. Strong understanding of data architecture concepts and best practices. Proficiency in data modeling, ETL processes, and data integration techniques. Experience with big data technologies and frameworks (e.g., Hadoop, Spark). Knowledge of containerization technologies (e.g., Docker, Kubernetes). Familiarity with data warehousing solutions (e.g., Redshift, Snowflake). Strong knowledge of security practices for data in the cloud. Excellent problem-solving and troubleshooting skills. Effective communication and collaboration skills. Ability to lead and mentor technical teams.

Additional Preferred Qualifications: Bachelor's or Master's degree in Data Science, Computer Science, or a related field. Relevant cloud certifications (e.g., Azure Solutions Architect) and data-related certifications. Experience with real-time data streaming technologies (e.g., Apache Kafka). Knowledge of machine learning and AI concepts in relation to cloud-based data solutions.

Working in an evolving healthcare setting, we use our shared expertise to deliver innovative solutions. Our fast-growing team has opportunities to learn and grow through rewarding interactions, collaboration, and the freedom to explore professional interests. Our associates are given valuable opportunities to contribute, to innovate, and to create meaningful work that makes an impact in the communities we serve around the world. We also offer a culture of excellence that drives customer success and improves patient care. We believe in giving back to the community and offer a competitive benefits package. To learn more, visit r1rcm.com.

Posted 1 month ago

Apply

10.0 - 15.0 years

20 - 25 Lacs

Pune

Work from Office

Role Overview: As a Senior Principal Software Engineer, you will be a key technical leader responsible for shaping the design and development of scalable, reliable, and innovative AI/GenAI solutions. You will lead high-priority projects, set technical direction for teams, and ensure alignment with organizational goals. This role demands a high degree of technical expertise, strategic thinking, and the ability to collaborate effectively across diverse teams while mentoring and elevating others to meet a very high technical bar.

Key Responsibilities: Strategic Technical Leadership: Define and drive the technical vision and roadmap for AI/GenAI systems, aligning with company objectives and future growth. Provide architectural leadership for complex, large-scale AI systems, ensuring scalability, performance, and maintainability. Act as a thought leader in AI technologies, influencing cross-functional technical decisions and long-term strategies. Advanced AI Product Development: Lead the development of state-of-the-art generative AI solutions, leveraging advanced techniques such as transformer models, diffusion models, and multi-modal architectures. Drive innovation by exploring and integrating emerging AI technologies and best practices. Mentorship & Team Growth: Mentor senior and junior engineers, fostering a culture of continuous learning and technical excellence. Elevate the team's capabilities through coaching, training, and providing guidance on best practices and complex problem-solving. End-to-End Ownership: Take full ownership of high-impact projects, from ideation and design to implementation, deployment, and monitoring in production. Ensure the successful delivery of projects with a focus on quality, timelines, and alignment with organizational goals. Collaboration & Influence: Collaborate with cross-functional teams, including product managers, data scientists, and engineering leadership, to deliver cohesive and impactful solutions. Act as a trusted advisor to stakeholders, clearly articulating technical decisions and their business impact. Operational Excellence: Champion best practices for software development, CI/CD, and DevOps, ensuring robust and reliable systems. Monitor and improve the health of deployed services, conducting root cause analyses and driving preventive measures for long-term reliability. Innovation & Continuous Improvement: Advocate for and lead the adoption of new tools, frameworks, and methodologies to enhance team productivity and product capabilities. Stay at the forefront of AI/GenAI research, driving thought leadership and contributing to the AI community through publications or speaking engagements.

Minimum Qualifications: Educational Background: Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field; a Ph.D. is preferred but not required. Experience: 10+ years of professional software development experience, including 5+ years in AI/ML or GenAI. Proven track record of designing and deploying scalable, production-grade AI solutions. Deep expertise in Python and frameworks such as TensorFlow, PyTorch, FastAPI, and LangChain. Advanced knowledge of AI/ML algorithms, generative models, and LLMs. Proficiency with cloud platforms (e.g., GCP, AWS, Azure) and modern DevOps practices. Strong understanding of distributed systems, microservices architecture, and database systems (SQL/NoSQL). Leadership Skills: Demonstrated ability to lead complex technical initiatives, influence cross-functional teams, and mentor engineers at all levels. Problem-Solving Skills: Exceptional analytical and problem-solving skills, with a proven ability to navigate ambiguity and deliver impactful solutions. Collaboration: Excellent communication and interpersonal skills, with the ability to engage and inspire both technical and non-technical stakeholders.

Preferred Qualifications: AI/ML Expertise: Experience with multi-modal models, reinforcement learning, and responsible AI principles. Cloud & Infrastructure: Advanced knowledge of GCP technologies such as Vertex AI, BigQuery, GKE, and Dataflow. Thought Leadership: Contributions to the AI/ML community through publications, open-source projects, or speaking engagements. Agile Experience: Familiarity with agile methodologies and working in a DevOps model.

Disability Accommodation: UKGCareers@ukg.com.
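For a tiny, hedged taste of the generative-model work described above, here is a text-generation call using the Hugging Face `transformers` pipeline running on PyTorch. The posting names PyTorch, TensorFlow, FastAPI, and LangChain but not this library or model, so treat both as arbitrary illustrative choices.

```python
# Tiny generative-model demo using Hugging Face transformers (runs on PyTorch).
# The library and model choice are illustrative assumptions, not the posting's stack.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # small, CPU-friendly model
result = generator(
    "Generative AI systems in production need",
    max_new_tokens=40,
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```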

Posted 1 month ago

Apply

2.0 - 7.0 years

10 - 15 Lacs

Noida

Work from Office

Company Overview: With 80,000 customers across 150 countries, UKG is the largest U.S.-based private software company in the world. And we're only getting started. Ready to bring your bold ideas and collaborative mindset to an organization that still has so much more to build and achieve? Read on. Here, we know that you're more than your work. That's why our benefits help you thrive personally and professionally, from wellness programs and tuition reimbursement to U Choose, a customizable expense reimbursement program that can be used for more than 200+ needs that best suit you and your family, from student loan repayment, to childcare, to pet insurance. Our inclusive culture, active and engaged employee resource groups, and caring leaders value every voice and support you in doing the best work of your career. If you're passionate about our purpose, people, then we can't wait to support whatever gives you purpose. We're united by purpose, inspired by you.

Job Description: We are looking for a talented and experienced Sr. Software Engineer to join our dynamic team. This role will provide you with the opportunity to work on cutting-edge SaaS technologies and impactful projects that are used by enterprises and users worldwide. As a Software Engineer II, you will be involved in the design, development, testing, deployment, and maintenance of software solutions. You will work in a collaborative environment, contributing to the technical foundation behind our flagship products and services. We are seeking engineers with diverse specialties and skills to join our dynamic team to innovate and solve complex challenges. Our team is looking for strong talent with expertise in the following areas: Front End UI Engineer (UI/UX design principles, responsive design, JavaScript frameworks); DevOps Engineer (CI/CD pipelines, IaC proficiency, containerization/orchestration, cloud platforms); Back End Engineer (API development, database management, security practices, message queuing); AI/ML Engineer (machine learning frameworks, data processing, algorithm development, big data technologies, domain knowledge).

Responsibilities: Software Development: Write clean, maintainable, and efficient code for various software applications and systems. Design and Architecture: Participate in design reviews with peers and stakeholders. Code Review: Review code developed by other developers, providing feedback that adheres to industry-standard best practices like coding guidelines. Testing: Build testable software, define tests, participate in the testing process, and automate tests using tools (e.g., JUnit, Selenium) and design patterns, leveraging the test automation pyramid as the guide. Debugging and Troubleshooting: Triage defects or customer-reported issues, then debug and resolve them in a timely and efficient manner. Service Health and Quality: Contribute to the health and quality of services and incidents, promptly identifying and escalating issues; collaborate with the team in utilizing service health indicators and telemetry for action; assist in conducting root cause analysis and implementing measures to prevent future recurrences. DevOps Model: Understand working in a DevOps model; begin to take ownership of working with product management on requirements to design, develop, test, deploy, and maintain the software in production. Documentation: Properly document new features, enhancements, or fixes to the product, and contribute to training materials.

Basic Qualifications: Bachelor's degree in Computer Science, Engineering, or a related technical field, or equivalent practical experience. 2+ years of professional software development experience. Proficiency in one or more programming languages such as C, C++, C#, .NET, Python, Java, or JavaScript. Experience with software development practices and design patterns. Familiarity with version control systems like Git/GitHub and bug/work tracking systems like JIRA. Basic understanding of cloud technologies and DevOps principles. Strong analytical and problem-solving skills, with a proven track record of building and shipping successful software products and services.

Preferred Qualifications: Experience with cloud platforms like Azure, AWS, or GCP. Experience with test automation frameworks and tools. Knowledge of agile development methodologies. Commitment to continuous learning and professional development. Good communication and interpersonal skills, with the ability to work effectively in a collaborative team environment.

Where we're going: UKG is on the cusp of something truly special. Worldwide, we already hold the #1 market share position for workforce management and the #2 position for human capital management. Tens of millions of frontline workers start and end their days with our software, with billions of shifts managed annually through UKG solutions today. Yet it's our AI-powered product portfolio, designed to support customers of all sizes, industries, and geographies, that will propel us into an even brighter tomorrow!

Application and Interview Process: UKGCareers@ukg.com

Posted 1 month ago

Apply

5.0 - 10.0 years

10 - 14 Lacs

Pune

Work from Office

Hello Visionary! We know that the only way a business thrives is if our people are growing. That's why we always put our people first. Our global, diverse team would be happy to support you and challenge you to grow in new ways. Who knows where our shared journey will take you?

We are looking for a Golang + Angular Developer.

You'll make a difference by: Being proficient in designing, developing, and maintaining robust backend services using Go, including RESTful APIs and microservices. Building and maintaining smaller frontend applications in Angular, supporting full-stack feature delivery. Operating, monitoring, and troubleshooting existing applications to ensure performance, scalability, and reliability. Contributing to the development of complex, composite applications in a distributed system. Leading and maintaining CI/CD pipelines, ensuring high code quality through Test-Driven Development (TDD). Utilizing container technologies like Docker and orchestration tools like Kubernetes (GitOps experience is a plus). Driving innovation by contributing new ideas and PoCs, or participating in internal hackathons.

You'll win us over by: Holding a graduate BE/B.Tech/MCA/M.Tech/M.Sc with a good academic record. 5+ years of experience in software development with a strong focus on Go (Golang). Working experience in building and maintaining production-grade microservices and APIs. Strong grasp of cloud platforms (AWS), including services like Lambda, ECS, and S3. Hands-on experience with CI/CD, Git, and containerization (Docker). Working knowledge of Angular (intermediate or above) and full-stack technologies. Familiarity with distributed systems, message queues, and API design best practices. Experience with observability tools for logging, monitoring, and tracing. Passion for innovation and building quick PoCs in a startup-like environment.

Personal Attributes: Excellent problem-solving and communication skills, able to articulate technical ideas clearly to stakeholders. Adaptable to fast-paced environments with a solution-oriented, startup mindset. Proactive and self-driven, with a strong sense of ownership and accountability. Actively seeks clarification and asks questions rather than waiting for instructions.

Create a better #TomorrowWithUs! This role, based in Pune, is an individual contributor position. You may be required to visit other locations within India and internationally. In return, you'll have the opportunity to work with teams shaping the future. At Siemens, we are a collection of over 312,000 minds building the future, one day at a time, worldwide. We are dedicated to equality and welcome applications that reflect the diversity of the communities we serve. All employment decisions at Siemens are based on qualifications, merit, and business need. Bring your curiosity and imagination, and help us shape tomorrow. Find out more about Siemens careers at

Posted 1 month ago

Apply

3.0 - 8.0 years

7 - 11 Lacs

Bengaluru

Work from Office

Hello Eager Tech Expert! To create a better future, you need to think creatively. That's why we at Siemens need innovators who aren't afraid to push boundaries to join our diverse team of tech gurus. Got what it takes? Then help us build lasting, positive impact!

You'll break new ground by: PhD with 3+ years / Master's degree with 5+ years of proven experience from a reputed institute/organization. Hands-on experience with one or more of the following is a must: time-series data analytics, feature engineering, and ML model development for industrial use cases in the process industry and/or discrete industry. In-depth experience in Generative AI technologies and LLMs, with the capability to train/tune local LLMs and apply prompt engineering in industrial AI use cases. In-depth experience in Agentic AI frameworks and the ability to implement Agentic AI solutions. Expertise in Python, C++, and C# languages. Solid hands-on experience in training deep neural networks and recurrent networks using frameworks like TensorFlow, PyTorch, scikit-learn, and LangChain. Hands-on experience dealing with multimodal data (audio, video, text, sensors, etc.) and suggesting use cases with the best technological option for prototyping and productization. Expertise in working with databases like InfluxDB and MongoDB. Ability to support and bring in customer/partner interest for the research group in the form of realizable pilots/projects.

You're excited to build on your existing expertise, including: Understanding of closed-loop control in industrial use cases. Know-how of time series and vector embedding spaces. Optimization techniques and model compression for deployment on resource-constrained hardware/edge devices. Experience building AI-powered solutions for embedded platforms and compute-constrained environments is a plus. Knowledge of SaaS fundamentals is a plus. Know-how of solution design, architecture, software packaging using Docker/Kubernetes, and deployment on cloud platforms is a plus. Excited to collaborate with team members from idea generation to prototyping, present developed solutions and recommendations to business partners, and influence the future technology roadmap and strategy of the portfolio. Closely follow the latest developments in artificial intelligence and be an early adopter of pioneering trends/technologies.

Soft Skills: Technology leadership to drive and support topics in the guide path. Capability to confidently address colleagues and end customers, in team calls or in-person meetings, to demonstrate the work done effectively. Very energetic and willing to walk the extra mile to achieve targets, and an active AI enthusiast within and outside Siemens.

Create a better #TomorrowWithUs! Protecting the environment, conserving our natural resources, and encouraging the health and performance of our people, as well as safeguarding their working conditions, are core to our social and business dedication at Siemens.
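As a compact, hedged sketch of the model-training experience listed above, here is a toy PyTorch training loop fitting a small GRU to a synthetic noisy sine wave (a stand-in for industrial time-series data). All shapes, hyperparameters, and the data itself are arbitrary.

```python
# Toy PyTorch training loop: a small GRU forecasting the next value of a noisy sine wave.
# Window size, hidden size, and epochs are arbitrary illustration choices.
import torch
from torch import nn

torch.manual_seed(0)

# Build synthetic windows: 64 past points -> 1 next point.
t = torch.linspace(0, 100, 2000)
series = torch.sin(t) + 0.1 * torch.randn_like(t)
window = 64
X = torch.stack([series[i : i + window] for i in range(len(series) - window - 1)]).unsqueeze(-1)
y = series[window + 1 :].unsqueeze(-1)

class Forecaster(nn.Module):
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.gru(x)
        return self.head(out[:, -1, :])  # predict from the last time step

model = Forecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(5):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
    print(f"epoch {epoch}: mse={loss.item():.4f}")
```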

Posted 1 month ago

Apply

4.0 - 7.0 years

9 - 14 Lacs

Gurugram

Work from Office

About NCR Atleos. Job Title: DevOps Engineer III. Location: Gurgaon. Job Type: Full-Time, 24*7 Availability. As a Senior DevOps Engineer, you will assist in managing and optimizing our infrastructure, ensuring the seamless deployment and operation of our applications. This role requires a proactive individual with a foundational understanding of DevOps practices, cloud technologies, automation, and Infrastructure as Code (IaC). The successful candidate will be expected to be available to work in various shifts in a 24*7 environment to address any critical issues that may arise. Key Responsibilities: Industry Knowledge: Familiarity with Financial Services or Settlement Systems. IT Management Tools: Exposure to tools like ServiceNow and JIRA for incident and task management. Infrastructure: Understand IT hardware, networking, operating systems, and Active Directory. Web Servers: Configure and support Microsoft IIS or Apache Tomcat. Cloud Experience: Work with cloud platforms such as Azure, AWS, or GCP. Configuration Management: Manage and maintain configuration settings for applications and infrastructure using IaC tools like Terraform, CloudFormation, or Ansible. Certificate Management: Handle the issuance, renewal, and management of SSL certificates. Databases: Proficiency in SQL and database management. Scripting: Strong knowledge of PowerShell, Bash, Python, Ansible, .NET, or JavaScript. Application Performance/System Monitoring: Ensure high availability and reliability using tools like SolarWinds, AppDynamics, DotCom, Dynatrace, Site24x7, Grafana, or New Relic. Middleware Tools: Support middleware and automation tools like ActiveBatch, RabbitMQ, UiPath, and Tibco. Infrastructure Design: Design and maintain scalable, secure infrastructure. Release Management: Implement, monitor, and manage CI/CD pipelines. Collaboration: Work with development teams for smooth application delivery. Security: Follow security best practices and address vulnerabilities. 24/7 Support: Provide on-call support for critical issues. Continuous Improvement: Enhance system performance and scalability. Knowledge Management: Document and share knowledge effectively (SharePoint and Confluence). ITIL Framework: Well versed in ITIL best practices. Mentorship: Engage with team members and provide mentorship to foster growth and knowledge sharing. Engagements: Participate in project planning, execution, and delivery to ensure alignment with DevOps practices. Problem Management: Conduct RCA and blameless postmortems. Qualifications: Bachelor's degree in Computer Science, Engineering, or a related field. 7+ years of experience supporting a portfolio of complex, business-critical application systems. 5+ years of experience in DevOps or a related role. Strong experience with cloud platforms (AWS, Azure, GCP). Proficiency in scripting languages (PowerShell, Python, Bash, etc.). Experience with containerization and orchestration (Docker, Kubernetes). Knowledge of web server configuration management (IIS or Apache Tomcat). Excellent problem-solving skills and attention to detail. Ability to operate within the ITIL framework. Strong communication and collaboration skills. Ability to work in a fast-paced, 24*7 environment. Engage with team members and provide mentorship to foster growth and knowledge sharing. EEO Statement: NCR Atleos is an equal-opportunity employer.
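As a small, hedged example of the certificate-management and scripting work described above, the following Python sketch checks how many days remain before a host's TLS certificate expires; the host list and 30-day threshold are assumptions.

```python
# Sketch: report days remaining before a host's TLS certificate expires.
# Hostnames are placeholders; real targets would come from an inventory/CMDB.
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(host: str, port: int = 443) -> int:
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # ssl.cert_time_to_seconds parses the 'notAfter' timestamp into epoch seconds.
    expires = datetime.fromtimestamp(ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc)
    return (expires - datetime.now(timezone.utc)).days

if __name__ == "__main__":
    for host in ["example.com"]:  # placeholder host list
        remaining = days_until_expiry(host)
        status = "OK" if remaining > 30 else "RENEW SOON"
        print(f"{host}: {remaining} days remaining [{status}]")
```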
It is NCR Atleos' policy to hire, train, promote, and pay associates based on their job-related qualifications, ability, and performance, without regard to race, color, creed, religion, national origin, citizenship status, sex, sexual orientation, gender identity/expression, pregnancy, marital status, age, mental or physical disability, genetic information, medical condition, military or veteran status, or any other factor protected by law. Statement to Third-Party Agencies: To ALL recruitment agencies: NCR Atleos only accepts resumes from agencies on the NCR Atleos preferred supplier list. Please do not forward resumes to our applicant tracking system, NCR Atleos employees, or any NCR Atleos facility. NCR Atleos is not responsible for any fees or charges associated with unsolicited resumes.

Posted 1 month ago

Apply

4.0 - 9.0 years

7 - 11 Lacs

Bengaluru

Work from Office

Educational Requirements: Bachelor of Engineering. Service Line: Quality. Responsibilities: Know-how of tools like gCTS, abapGit, and ABAP Test Cockpit (ATC). Knowledge of Jenkins and Checkstyle. A good understanding of cloud concepts and cloud technologies is a plus. Setting up CI/CD pipelines (e.g., with Piper) and adopting CI/CD practices. Knowledge of operating and maintaining service operations on K8s (Kubernetes) would be an added advantage. Knowledge of SAP cloud platforms, e.g., Cloud Foundry and Gardener. Experience using hyperscalers like Google Cloud Platform, Azure, or AWS. Good problem-solving skills and a passion for learning new products and technology. Knowledge of at least one or more SAP functional areas. Experience in implementing greenfield S/4HANA transformation programs in Agile. Knowledge of, and hands-on experience with, SAP tools like SolMan, Focused Build, and Focused Run. Preferred Skills: Technology-SAP Functional-SAP C4HANA-SAP Commerce Cloud; Technology-DevOps-Continuous Testing

Posted 1 month ago

Apply

10.0 - 15.0 years

5 - 9 Lacs

Hyderabad

Work from Office

Responsibilities: Design, develop, and maintain data pipelines and ETL processes using Databricks. Manage and optimize data solutions on cloud platforms such as Azure and AWS. Implement big data processing workflows using PySpark. Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver effective solutions. Ensure data quality and integrity through rigorous testing and validation. Optimize and tune big data solutions for performance and scalability. Stay updated with the latest industry trends and technologies in big data and cloud computing. Qualifications: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. Proven experience as a Big Data Engineer or in a similar role. Strong proficiency in Databricks and cloud platforms (Azure/AWS). Expertise in PySpark and big data processing. Experience with data modeling, ETL processes, and data warehousing. Familiarity with cloud services and infrastructure. Excellent problem-solving skills and attention to detail. Strong communication and teamwork abilities. Preferred Qualifications: Experience with other big data technologies and frameworks. Knowledge of machine learning frameworks and libraries. Certification in cloud platforms or big data technologies.
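A minimal PySpark sketch of the kind of Databricks ETL work this role describes; the input path, column names, and aggregation are illustrative assumptions.

```python
# Sketch: read raw order events, clean them, and write a curated daily aggregate.
# Paths and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl_sketch").getOrCreate()

raw = spark.read.json("/mnt/raw/orders/")  # hypothetical landing path

cleaned = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("amount") > 0)
       .withColumn("order_date", F.to_date("order_ts"))
)

daily_revenue = (
    cleaned.groupBy("order_date", "country")
           .agg(F.sum("amount").alias("revenue"), F.count("*").alias("orders"))
)

# Partitioned write to a curated zone; a real pipeline might target Delta tables instead.
daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet("/mnt/curated/daily_revenue/")
```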

Posted 1 month ago

Apply

7.0 - 12.0 years

7 - 11 Lacs

Hyderabad

Work from Office

Proven experience in managing and automating CI/CD pipelines using tools like Jenkins, Azure DevOps, or GitLab. Expertise in cloud platforms (AWS, Azure, GCP) and experience with services like compute, storage, and API gateways. Strong proficiency in containerization technologies (Docker) and orchestration tools (Kubernetes). In-depth knowledge of monitoring, logging, and performance tuning using appropriate tooling. Experience managing and deploying microservices-based applications and ensuring high availability, scalability, and resilience. Familiarity with automation frameworks and scripting languages (Python, Bash, PowerShell).
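As a hedged illustration of the monitoring and scripting skills listed above, a tiny Python availability/latency probe; the endpoint URL and log format are placeholders.

```python
# Sketch: probe a service endpoint and log its status code and response time,
# the kind of check that feeds into broader monitoring/alerting tooling.
import logging
import time
import urllib.request

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")

def probe(url: str, timeout: float = 5.0) -> None:
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            elapsed_ms = (time.monotonic() - start) * 1000
            logging.info("GET %s -> %s in %.1f ms", url, resp.status, elapsed_ms)
    except Exception as exc:
        logging.error("GET %s failed: %s", url, exc)

if __name__ == "__main__":
    probe("https://example.com/health")  # placeholder endpoint
```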

Posted 1 month ago

Apply

7.0 - 12.0 years

37 - 40 Lacs

Pune

Work from Office

Job Title: Senior Engineer, AVP. Location: Pune, India. Role Description: As a senior engineer, you will be tasked with overseeing, and being directly involved in, the creation of scalable microservices utilizing Java and Spring Boot. You will work closely with technical stakeholders to guarantee that development adheres to established architectural patterns and guidelines. You will guide the team through mentoring and coaching to help them reach their technical objectives and foster a culture of technical excellence. What we'll offer you: 100% reimbursement under the childcare assistance benefit (gender neutral). Sponsorship for industry-relevant certifications and education. Accident and term life insurance. Your key responsibilities: Oversee the design, development, and implementation of microservices utilizing Java, Spring Boot, and associated technologies. Work in conjunction with product managers, architects, and DevOps to provide high-quality solutions. Uphold and advocate for best practices and standards. Facilitate code reviews, establish coding standards, and mentor junior team members. Your skills and experience. Must have: Comprehensive experience exceeding 8 years, featuring practical coding and engineering skills predominantly in Java technologies and microservices. Significant expertise in microservices architecture, including various patterns and practices. Profound proficiency in Spring Boot, Spring Cloud, and the development of REST APIs. Desirable skills that will help you excel: Previous experience in an Agile/Scrum environment. Solid understanding of containerization technologies (Docker/Kubernetes) and build tools (Maven/Gradle). Demonstrated experience with databases including Oracle, SQL, and various NoSQL databases. Familiarity with architecture and design principles, algorithms and data structures, as well as user interface design. Experience with cloud platforms is advantageous (preferably GCP). Knowledge of messaging systems such as Kafka and RabbitMQ would be beneficial. Previous experience working with Python. Strong problem-solving skills. Excellent communication abilities. Proficiency in Git, Jenkins, CI/CD, Gradle, DevOps, and SRE methodologies. Prior experience in team leadership and mentoring is a plus. How we'll support you: About us and our teams: Please visit our company website for further information: https://www.db.com/company/company.htm

Posted 1 month ago

Apply

8.0 - 13.0 years

4 - 8 Lacs

Hyderabad

Work from Office

Responsibilities: Design and Develop Scalable Data Pipelines: Build and maintain robust data pipelines using Python to process, transform, and integrate large-scale data from diverse sources. Orchestration and Automation: Implement and manage workflows using orchestration tools such as Apache Airflow to ensure reliable and efficient data operations. Data Warehouse Management: Work extensively with Snowflake to design and optimize data models, schemas, and queries for analytics and reporting. Queueing Systems: Leverage message queues like Kafka, SQS, or similar tools to enable real-time or batch data processing in distributed environments. Collaboration: Partner with Data Science, Product, and Engineering teams to understand data requirements and deliver solutions that align with business objectives. Performance Optimization: Optimize the performance of data pipelines and queries to handle large scales of data efficiently. Data Governance and Security: Ensure compliance with data governance and security standards to maintain data integrity and privacy. Documentation: Create and maintain clear, detailed documentation for data solutions, pipelines, and workflows. Qualifications. Required Skills: 5+ years of experience in data engineering roles with a focus on building scalable data solutions. Proficiency in Python for ETL, data manipulation, and scripting. Hands-on experience with Snowflake or equivalent cloud-based data warehouses. Strong knowledge of orchestration tools such as Apache Airflow or similar. Expertise in implementing and managing messaging queues like Kafka, AWS SQS, or similar. Demonstrated ability to build and optimize data pipelines at scale, processing terabytes of data. Experience in data modeling, data warehousing, and database design. Proficiency in working with cloud platforms like AWS, Azure, or GCP. Strong understanding of CI/CD pipelines for data engineering workflows. Experience working in an Agile development environment, collaborating with cross-functional teams. Preferred Skills: Familiarity with other programming languages like Scala or Java for data engineering tasks. Knowledge of containerization and orchestration technologies (Docker, Kubernetes). Experience with stream processing frameworks like Apache Flink. Experience with Apache Iceberg for data lake optimization and management. Exposure to machine learning workflows and integration with data pipelines. Soft Skills: Strong problem-solving skills with a passion for solving complex data challenges. Excellent communication and collaboration skills to work with cross-functional teams. Ability to thrive in a fast-paced, innovative environment.
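A minimal sketch of the Airflow orchestration described above (assuming Airflow 2.4+); the DAG id, schedule, and task bodies are placeholders, not a prescribed implementation.

```python
# Sketch: a two-step Airflow DAG (extract -> load) of the kind this role would maintain.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_events(**context):
    # In a real pipeline this might consume from Kafka/SQS or an upstream API.
    print("extracting events for", context["ds"])

def load_to_warehouse(**context):
    # In a real pipeline this might load staged files into Snowflake.
    print("loading events for", context["ds"])

with DAG(
    dag_id="events_pipeline_sketch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_events", python_callable=extract_events)
    load = PythonOperator(task_id="load_to_warehouse", python_callable=load_to_warehouse)
    extract >> load  # load runs only after extract succeeds
```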

Posted 1 month ago

Apply

7.0 - 12.0 years

6 - 10 Lacs

Hyderabad

Work from Office

AWS DevOps. Mandatory skills: VMware, AWS infrastructure (EC2), containerization, DevOps, Jenkins, Kubernetes, Terraform. Secondary skills: Python, Lambda, Step Functions. Design and implement cloud infrastructure solutions for cloud environments. Evaluate and recommend cloud infrastructure tools and services. Manage infrastructure performance, monitoring, reliability, and scalability. Technical Skills: Overall experience of 8+ years, with 5+ years of infrastructure architecture experience. Cloud Platforms: Proficient in AWS along with other CSPs. Good understanding of cloud networking services (VPC, load balancing, DNS, etc.). Infrastructure as Code (IaC): Proficient, with hands-on experience in Terraform or AWS CloudFormation for provisioning. Security: Strong knowledge of cloud security fundamentals (IAM, security groups, firewall rules). Automation: Proficient, with hands-on experience in CI/CD pipelines, containerization (Kubernetes, Docker), and configuration management tools (e.g., Chef, Puppet). Monitoring & Performance: Experience with cloud monitoring and logging tools (CloudWatch, Azure Monitor, Stackdriver). Disaster Recovery: Knowledge of backup, replication, and recovery strategies in cloud environments. Support cloud migration efforts and recommend strategies for optimization. Collaborate with DevOps and security teams to integrate best practices. Evaluate, implement, and streamline DevOps practices. Supervise, examine, and handle technical operations.
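As a hedged sketch of the Python/Lambda automation this role mentions, a minimal handler that could run inside a Step Functions workflow, reporting running EC2 instances that lack an "owner" tag; the tag key is an assumption.

```python
# Sketch: Lambda handler that lists running EC2 instances missing an 'owner' tag.
# Intended as an illustrative governance check, not a prescribed implementation.
import boto3

ec2 = boto3.client("ec2")

def lambda_handler(event, context):
    """Return the IDs of running instances without an 'owner' tag."""
    untagged = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                if "owner" not in tags:
                    untagged.append(instance["InstanceId"])
    return {"untagged_count": len(untagged), "instance_ids": untagged}
```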

Posted 1 month ago

Apply

8.0 - 13.0 years

7 - 12 Lacs

Hyderabad

Work from Office

We are looking for a talented Full Stack Python Developer with a minimum of 8 years of experience. The ideal candidate will be proficient in Python, with expertise in either Flask or Django frameworks, and have a strong background in cloud technologies such as AWS or GCP. Additionally, experience with front-end technologies like Angular or React is essential. Key Responsibilities: Backend Development: Design, develop, and maintain backend systems using Python, with a focus on either Flask or Django. Cloud Services: Utilize cloud services such as AWS (Lambda, Cloud Function) or GCP (App Engine) to deploy and manage scalable applications. Frontend Development: Develop user interfaces using Angular or React, ensuring a seamless and responsive user experience. Integration: Integrate frontend and backend components, ensuring smooth data flow and application functionality. Qualifications: Bachelor's or Master's degree in Computer Science or a related field. 5+ years of professional experience in full-stack Python development. Proficiency in Python and experience with either Flask or Django frameworks. Strong knowledge of cloud platforms, particularly AWS (Lambda, Cloud Function) or GCP (App Engine). Experience with frontend technologies such as Angular or React. Understanding of RESTful API design and integration. Solid understanding of version control systems, such as Git. Desired Skills: Familiarity with database systems, both SQL and NoSQL. Knowledge of containerization tools such as Docker. Experience with serverless architecture and microservices. Understanding of CI/CD pipelines. Familiarity with Agile/Scrum methodologies.
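A minimal Flask sketch of the kind of backend REST work this role describes; the "items" resource and in-memory store are placeholders standing in for a real database.

```python
# Sketch: a tiny Flask REST API with list/create endpoints for a placeholder resource.
from flask import Flask, jsonify, request

app = Flask(__name__)
items = {}  # in-memory store standing in for a real database

@app.route("/api/items", methods=["GET"])
def list_items():
    """Return all stored items as JSON."""
    return jsonify(list(items.values()))

@app.route("/api/items", methods=["POST"])
def create_item():
    """Create an item from the JSON body and return it with a 201 status."""
    payload = request.get_json(force=True)
    item_id = len(items) + 1
    item = {"id": item_id, "name": payload.get("name", "")}
    items[item_id] = item
    return jsonify(item), 201

if __name__ == "__main__":
    app.run(debug=True)  # development server only
```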

Posted 1 month ago

Apply
Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies