
6882 Performance Tuning Jobs - Page 38

JobPe aggregates job listings for easy access; applications are submitted directly on the original job portal.

14.0 - 16.0 years

45 - 50 Lacs

Pune, Bengaluru

Hybrid

Work Mode: Hybrid (3 days in office per week)
Experience: 14+ years total, 7-8+ years in Java Spring Boot

We are looking for a highly experienced Senior Java Developer with strong hands-on skills in Java Spring Boot and Kafka. The candidate should be passionate about building scalable, robust applications and must be able to independently handle hands-on development work.

Primary Responsibilities:
- 80% hands-on development with Java Spring Boot
- Interact directly with clients, requiring good communication skills
- Build and maintain Kafka-based integrations
- Apply basic CI/CD and DevOps practices
- Contribute to architecture discussions and performance tuning
- Troubleshoot and optimize application performance

Key Skills Required:
- Java (Spring Boot): 7-8+ years of deep hands-on experience
- Kafka: minimum 2-3 years (mandatory)
- CI/CD and DevOps knowledge: Git, Jenkins, pipelines, basic scripting
- Strong problem-solving and debugging skills
- Excellent communication for client-facing collaboration
- Total industry experience: 14+ years

How to Apply: Please share your updated resume along with the following details: Current CTC, Expected CTC, Notice Period, Preferred Location (Bangalore/Pune), and willingness to work hybrid (3 days/week onsite): Yes/No. Send to: navaneetha@suzva.com
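For illustration of the Kafka integration work this role centers on, here is a minimal consumer sketch. The role itself is Java/Spring Boot; Python (kafka-python) is used here only for brevity, and the topic, broker address, and group id are placeholder assumptions.

```python
# Minimal Kafka consumer sketch (illustrative only; broker, topic and
# group id are placeholder assumptions, not details from the job post).
from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "orders",                          # hypothetical topic
    bootstrap_servers=["localhost:9092"],
    group_id="orders-processor",       # consumer group for rebalancing
    enable_auto_commit=False,          # commit offsets only after processing
    auto_offset_reset="earliest",
)

for message in consumer:
    payload = message.value.decode("utf-8")
    print(f"partition={message.partition} offset={message.offset} value={payload}")
    consumer.commit()                  # at-least-once delivery semantics
```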

Posted 3 weeks ago

Apply

14.0 - 16.0 years

7 - 11 Lacs

Pune, Bengaluru

Hybrid

Work Mode: Hybrid (3 days in office per week)
Experience: 14+ years total, 7-8+ years in Java Spring Boot
Type: Contract position

We are looking for a highly experienced Senior Java Developer with strong hands-on skills in Java Spring Boot and Kafka. The candidate should be passionate about building scalable, robust applications and must be able to independently handle hands-on development work.

Primary Responsibilities:
- 80% hands-on development with Java Spring Boot
- Interact directly with clients, requiring good communication skills
- Build and maintain Kafka-based integrations
- Apply basic CI/CD and DevOps practices
- Contribute to architecture discussions and performance tuning
- Troubleshoot and optimize application performance

Key Skills Required:
- Java (Spring Boot): 7-8+ years of deep hands-on experience
- Kafka: minimum 2-3 years (mandatory)
- CI/CD and DevOps knowledge: Git, Jenkins, pipelines, basic scripting
- Strong problem-solving and debugging skills
- Excellent communication for client-facing collaboration
- Total industry experience: 14+ years

How to Apply: Please share your updated resume along with the following details: Current CTC, Expected CTC, Notice Period, Preferred Location (Bangalore/Pune), and willingness to work hybrid (3 days/week onsite): Yes/No.

Posted 3 weeks ago

Apply

7.0 - 10.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Notice Period: Immediate to 30 days
Mandatory Skills: COBOL, JCL, DB2, VSAM

Job Description:
We are seeking an experienced Mainframe Developer with 5 to 10 years of experience to join our dynamic team. The ideal candidate will have proficiency in COBOL, JCL, DB2, VSAM, and Agile software development methodologies (Scrum, Kanban, SAFe). The Mainframe Developer will be responsible for designing, developing, and maintaining mainframe applications that meet business requirements.

Key Responsibilities:
- Design, develop, test, and implement mainframe applications using COBOL, JCL, DB2, and VSAM.
- Collaborate with cross-functional teams to gather requirements and ensure that applications meet business needs.
- Perform system analysis, coding, testing, debugging, and documentation.
- Optimize and enhance existing mainframe applications for performance and maintainability.
- Troubleshoot and resolve production issues in a timely manner.
- Participate in Agile ceremonies such as daily stand-ups, sprint planning, and retrospectives.
- Contribute to continuous improvement by identifying and implementing process improvements.
- Ensure code quality and adherence to coding standards and best practices.
- Maintain up-to-date knowledge of industry trends and advancements in mainframe technology.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 5 to 10 years of experience in mainframe development.
- Proficient in COBOL, JCL, DB2, and VSAM.
- Strong understanding of Agile software development methodologies (Scrum, Kanban, SAFe).
- Experience with mainframe tools and utilities.
- Excellent problem-solving skills and the ability to troubleshoot complex issues.
- Strong communication and collaboration skills.
- Ability to work independently and as part of a team.

Preferred Qualifications:
- Experience in financial services or a related industry.
- Familiarity with DevOps practices and tools.
- Knowledge of additional mainframe programming languages and technologies.

Location: Bangalore, Hyderabad, Chennai, Pune, Coimbatore

Posted 3 weeks ago

Apply

7.0 - 10.0 years

5 - 9 Lacs

Coimbatore

Work from Office

Notice Period: Immediate to 30 days
Mandatory Skills: COBOL, JCL, DB2, VSAM

Job Description:
We are seeking an experienced Mainframe Developer with 5 to 10 years of experience to join our dynamic team. The ideal candidate will have proficiency in COBOL, JCL, DB2, VSAM, and Agile software development methodologies (Scrum, Kanban, SAFe). The Mainframe Developer will be responsible for designing, developing, and maintaining mainframe applications that meet business requirements.

Key Responsibilities:
- Design, develop, test, and implement mainframe applications using COBOL, JCL, DB2, and VSAM.
- Collaborate with cross-functional teams to gather requirements and ensure that applications meet business needs.
- Perform system analysis, coding, testing, debugging, and documentation.
- Optimize and enhance existing mainframe applications for performance and maintainability.
- Troubleshoot and resolve production issues in a timely manner.
- Participate in Agile ceremonies such as daily stand-ups, sprint planning, and retrospectives.
- Contribute to continuous improvement by identifying and implementing process improvements.
- Ensure code quality and adherence to coding standards and best practices.
- Maintain up-to-date knowledge of industry trends and advancements in mainframe technology.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 5 to 10 years of experience in mainframe development.
- Proficient in COBOL, JCL, DB2, and VSAM.
- Strong understanding of Agile software development methodologies (Scrum, Kanban, SAFe).
- Experience with mainframe tools and utilities.
- Excellent problem-solving skills and the ability to troubleshoot complex issues.
- Strong communication and collaboration skills.
- Ability to work independently and as part of a team.

Preferred Qualifications:
- Experience in financial services or a related industry.
- Familiarity with DevOps practices and tools.
- Knowledge of additional mainframe programming languages and technologies.

Posted 3 weeks ago

Apply

5.0 - 8.0 years

5 - 15 Lacs

Hyderabad, Bengaluru

Work from Office

We are seeking a Full-Stack SAP Developer proficient in SAP ABAP, Fiori, and OData services to build custom solutions and services.

Key Responsibilities:
• Develop applications using SAP ABAP and OData
• Design responsive Fiori apps with SAP UI5
• Integrate backend services using RESTful APIs
• Review code and follow best practices

Required Skills:
• 5-8 years of experience with ABAP, Fiori, and OData
• Expertise in UI5 app development
• Strong debugging and performance optimization skills
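The OData/REST integration mentioned above is, at its simplest, an authenticated HTTP round trip. Below is a minimal Python sketch using the requests library; the service URL, entity set, and credentials are placeholder assumptions, not details from the post.

```python
# Illustrative OData read via plain HTTP (URL, entity set and auth are
# hypothetical placeholders; real SAP Gateway services vary).
import requests

BASE = "https://sap.example.com/sap/opu/odata/sap/ZORDERS_SRV"  # assumed service

resp = requests.get(
    f"{BASE}/OrderSet",
    params={"$top": "10", "$format": "json"},   # standard OData query options
    auth=("user", "password"),                  # basic auth, for the sketch only
    timeout=30,
)
resp.raise_for_status()

for order in resp.json()["d"]["results"]:       # OData v2 JSON envelope
    print(order.get("OrderID"), order.get("Status"))
```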

Posted 3 weeks ago

Apply

12.0 - 22.0 years

2 - 4 Lacs

Chennai

Work from Office

SUMMARY
This is a remote position.

Key Responsibilities:
- Design, develop, and maintain integration workflows using Jitterbit Harmony for data exchange between SAP (ECC or S/4HANA) and other third-party systems.
- Collaborate with functional and technical teams to gather integration requirements.
- Build robust, scalable, and reusable Jitterbit integrations to interface with SAP modules like FI/CO, MM, SD, or SuccessFactors.
- Configure APIs, endpoints, and data transformation rules within Jitterbit.
- Troubleshoot integration failures and ensure error handling, logging, and alerting are in place.
- Perform system and unit testing of integration workflows and support UAT with business stakeholders.
- Ensure integration best practices, data security, and compliance in all solutions.
- Create and maintain technical documentation for integrations and mappings.

Required Skills & Experience:
- 5+ years of hands-on experience with Jitterbit Harmony (Cloud Studio, Design Studio).
- Solid experience integrating with SAP systems (IDocs, BAPIs, RFCs, SAP PI/PO, or APIs).
- Strong understanding of SAP data structures and business processes.
- Proficiency with REST/SOAP APIs, JSON, XML, and EDI formats.
- Experience with data transformation, error handling, and scheduling integrations.
- Knowledge of middleware principles, integration patterns, and API-led architecture.
- Strong debugging and performance tuning skills.
- Experience with cloud-based platforms (AWS/Azure/GCP) is a plus.

Nice to Have:
- Exposure to other integration tools like MuleSoft, Dell Boomi, or SAP CPI.
- Knowledge of Salesforce, ServiceNow, or Workday integrations.
- Familiarity with Agile/Scrum methodology.
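Middleware work like this boils down to mapping one payload shape to another with defensive error handling. The stdlib-only Python sketch below converts a hypothetical JSON order into a flat record and routes failures to a dead-letter list; all field names are invented for illustration, and a real Jitterbit mapping would mirror the SAP IDoc/API schema.

```python
# Illustrative payload mapping with error handling (field names are
# hypothetical; failures are parked for reprocessing and alerting).
import json
import logging

logging.basicConfig(level=logging.INFO)
dead_letter = []  # failed payloads kept for retry/alerting

def transform(raw: str) -> dict:
    """Map an inbound JSON order to the flat target record."""
    src = json.loads(raw)
    return {
        "order_id": src["OrderID"],
        "amount": round(float(src["NetAmount"]), 2),
        "currency": src.get("Currency", "USD"),
    }

for payload in ['{"OrderID": "42", "NetAmount": "99.5"}', "{bad json"]:
    try:
        record = transform(payload)
        logging.info("loaded %s", record)
    except (json.JSONDecodeError, KeyError, ValueError) as exc:
        logging.error("transform failed: %s", exc)
        dead_letter.append(payload)  # alerting/retry hook would go here
```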

Posted 3 weeks ago

Apply

6.0 - 10.0 years

4 - 8 Lacs

Hyderabad

Work from Office

We are seeking an experienced SAP Performance Engineer to join our team. The ideal candidate will have a strong background in performance engineering and testing, with deep expertise in monitoring tools, JVM tuning, SQL optimization, and profiling techniques. The candidate should be able to proactively analyze and resolve system performance issues across SAP landscapes and work closely with development and infrastructure teams to ensure optimal system performance.

Responsibilities:
- Design, execute, and analyze performance tests across SAP modules.
- Use monitoring tools and profiling techniques (e.g., JProfiler) to identify system bottlenecks.
- Perform JVM tuning, including analysis of thread dumps, heap dumps, and stack traces.
- Optimize and run complex SQL queries; analyze and improve indexing strategies.
- Collaborate with cross-functional teams to provide performance solutions and recommendations.
- Work on performance engineering across both application and infrastructure levels.
- Develop scripts or tools (using Python, Perl, Java, C++, or ABAP) to automate and improve performance testing and monitoring.
- Monitor SAP system health and provide performance tuning and root cause analysis.

Requirements:
- Minimum 5 years of experience in performance engineering, preferably within SAP environments.
- Hands-on experience with performance testing and monitoring tools.
- Proficiency in one or more programming/scripting languages: Python, Perl, Java, C++, or ABAP.
- Strong experience in SQL query tuning and working with indexes.
- Solid knowledge of JVM tuning, including thread dumps, heap dumps, and memory management.
- Experience using profiling tools such as JProfiler.
- Strong problem-solving and analytical skills.
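The SQL tuning and indexing work named above usually starts by reading the query plan before and after adding an index. The self-contained sqlite3 sketch below demonstrates the idea; the table and data are invented, and a real SAP landscape would use the underlying database's own EXPLAIN facility instead.

```python
# Illustrative index-tuning workflow with SQLite (table and data are
# invented; on Oracle/HANA you would read EXPLAIN PLAN output instead).
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
con.executemany(
    "INSERT INTO orders (customer, total) VALUES (?, ?)",
    [(f"cust{i % 500}", i * 1.5) for i in range(10_000)],
)

query = "SELECT SUM(total) FROM orders WHERE customer = 'cust42'"

def show_plan(label: str) -> None:
    for row in con.execute("EXPLAIN QUERY PLAN " + query):
        print(label, row[-1])  # last column is the human-readable plan step

show_plan("before:")   # full table scan
con.execute("CREATE INDEX idx_orders_customer ON orders(customer)")
show_plan("after:")    # index search on idx_orders_customer
```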

Posted 3 weeks ago

Apply

7.0 - 10.0 years

5 - 9 Lacs

Hyderabad, Chennai, Coimbatore

Work from Office

Notice Period: Immediate to 30 days
Mandatory Skills: COBOL, JCL, DB2, VSAM

Job Description:
We are seeking an experienced Mainframe Developer with 5 to 10 years of experience to join our dynamic team. The ideal candidate will have proficiency in COBOL, JCL, DB2, VSAM, and Agile software development methodologies (Scrum, Kanban, SAFe). The Mainframe Developer will be responsible for designing, developing, and maintaining mainframe applications that meet business requirements.

Key Responsibilities:
- Design, develop, test, and implement mainframe applications using COBOL, JCL, DB2, and VSAM.
- Collaborate with cross-functional teams to gather requirements and ensure that applications meet business needs.
- Perform system analysis, coding, testing, debugging, and documentation.
- Optimize and enhance existing mainframe applications for performance and maintainability.
- Troubleshoot and resolve production issues in a timely manner.
- Participate in Agile ceremonies such as daily stand-ups, sprint planning, and retrospectives.
- Contribute to continuous improvement by identifying and implementing process improvements.
- Ensure code quality and adherence to coding standards and best practices.
- Maintain up-to-date knowledge of industry trends and advancements in mainframe technology.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 5 to 10 years of experience in mainframe development.
- Proficient in COBOL, JCL, DB2, and VSAM.
- Strong understanding of Agile software development methodologies (Scrum, Kanban, SAFe).
- Experience with mainframe tools and utilities.
- Excellent problem-solving skills and the ability to troubleshoot complex issues.
- Strong communication and collaboration skills.
- Ability to work independently and as part of a team.

Preferred Qualifications:
- Experience in financial services or a related industry.
- Familiarity with DevOps practices and tools.
- Knowledge of additional mainframe programming languages and technologies.

Location: Hyderabad, Bangalore, Chennai, Greater Noida, Coimbatore

Posted 3 weeks ago

Apply

7.0 - 10.0 years

5 - 9 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Location: Hyderabad, Bangalore, Chennai, Greater Noida, Coimbatore
Notice Period: Immediate to 30 days
Mandatory Skills: COBOL, JCL, DB2, VSAM

Job Description:
We are seeking an experienced Mainframe Developer with 5 to 10 years of experience to join our dynamic team. The ideal candidate will have proficiency in COBOL, JCL, DB2, VSAM, and Agile software development methodologies (Scrum, Kanban, SAFe). The Mainframe Developer will be responsible for designing, developing, and maintaining mainframe applications that meet business requirements.

Key Responsibilities:
- Design, develop, test, and implement mainframe applications using COBOL, JCL, DB2, and VSAM.
- Collaborate with cross-functional teams to gather requirements and ensure that applications meet business needs.
- Perform system analysis, coding, testing, debugging, and documentation.
- Optimize and enhance existing mainframe applications for performance and maintainability.
- Troubleshoot and resolve production issues in a timely manner.
- Participate in Agile ceremonies such as daily stand-ups, sprint planning, and retrospectives.
- Contribute to continuous improvement by identifying and implementing process improvements.
- Ensure code quality and adherence to coding standards and best practices.
- Maintain up-to-date knowledge of industry trends and advancements in mainframe technology.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 5 to 10 years of experience in mainframe development.
- Proficient in COBOL, JCL, DB2, and VSAM.
- Strong understanding of Agile software development methodologies (Scrum, Kanban, SAFe).
- Experience with mainframe tools and utilities.
- Excellent problem-solving skills and the ability to troubleshoot complex issues.
- Strong communication and collaboration skills.
- Ability to work independently and as part of a team.

Preferred Qualifications:
- Experience in financial services or a related industry.
- Familiarity with DevOps practices and tools.
- Knowledge of additional mainframe programming languages and technologies.

Posted 3 weeks ago

Apply

7.0 - 10.0 years

3 - 7 Lacs

Hyderabad

Work from Office

Notice Period: Immediate to 30 days only

Job Overview:
We are seeking a highly skilled Performance Test Architect with 5-8 years of industry and domain expertise. The ideal candidate will have hands-on experience with performance testing tools, monitoring tools, automation tools, Java profiling, and technical analysis. This role requires operational exposure to DevOps and CI/CD, strong Java coding and scripting knowledge, and an engineering degree with a solid academic track record.

Key Responsibilities:

Performance Testing:
- Conduct performance testing using tools such as LoadRunner, JMeter, and Rational Integration Tester.
- Utilize monitoring tools like Splunk, Dynatrace, and Kibana to analyze system performance.

Automation:
- Implement automation using Jenkins and other native automation tools.
- Develop and maintain automation scripts to enhance testing efficiency.

Java Profiling & Technical Analysis:
- Perform Java profiling to identify and troubleshoot performance bottlenecks.
- Conduct technical analysis to ensure the stability and scalability of applications.

DevOps & CI/CD:
- Collaborate with DevOps teams to integrate performance testing into CI/CD pipelines.
- Ensure continuous delivery and deployment of high-quality software.

Java Coding & Scripting:
- Write and maintain Java code and scripts to support performance testing and automation efforts.

Communication:
- Communicate effectively in written and spoken English.
- Document test plans, test cases, and test results.
- Present performance analysis findings to stakeholders.

Qualifications:
- Education: engineering degree with a good academic track record.
- Experience: 5-8 years of industry and domain expertise in performance testing and analysis.

Technical Skills:
- Proficiency in performance testing tools (LoadRunner, JMeter, Rational Integration Tester).
- Hands-on experience with monitoring tools (Splunk, Dynatrace, Kibana).
- Experience with automation tools, especially Jenkins and native automation tools.
- Expertise in Java profiling and technical analysis.
- Strong Java coding and scripting knowledge.
- Operational exposure to DevOps practices and CI/CD pipelines.
- Excellent proficiency in written and spoken English.

Desired Attributes:
- Strong analytical and problem-solving skills.
- Ability to work collaboratively in a team environment.
- Attention to detail and commitment to delivering high-quality work.
- Proactive attitude and willingness to learn and adapt to new technologies.
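The core measurement in this role, latency under concurrency, can be prototyped in a few lines before reaching for LoadRunner or JMeter. The stdlib-only Python sketch below fires concurrent HTTP GETs and reports latency percentiles; the target URL and load shape are placeholder assumptions.

```python
# Tiny concurrent load probe (illustrative; real tests belong in
# JMeter/LoadRunner with ramp-up profiles, assertions and reporting).
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "https://example.com/"   # placeholder target
REQUESTS, WORKERS = 50, 10     # assumed load shape

def timed_get(_: int) -> float:
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    latencies = sorted(pool.map(timed_get, range(REQUESTS)))

print(f"p50={statistics.median(latencies) * 1000:.0f} ms")
print(f"p95={latencies[int(0.95 * len(latencies)) - 1] * 1000:.0f} ms")
print(f"max={latencies[-1] * 1000:.0f} ms")
```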

Posted 3 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

You will work full-time from the office in Chennai as an Oracle DBA. With 6 to 8 years of relevant experience, you will be responsible for managing and supporting Oracle 19c in production RAC (ASM) environments. Your duties will include performing Exadata migrations, handling database performance tuning, using Oracle Data Guard, RMAN, and Data Pump for backup and recovery tasks, administering ASM storage environments, ensuring database availability, and providing 24x7 support.

Additionally, you will create and maintain SQL/PL/SQL queries, design and implement backup and recovery strategies, perform performance testing, and document database policies, procedures, and standards. Collaboration with the application team, support for developers, and working across teams to ensure database reliability and optimization are also key aspects of the role.

The ideal candidate will possess strong hands-on experience with Oracle 11gR2/19c in RAC (ASM) environments, proficiency in Data Guard, RMAN, and Data Pump, experience with Linux OS for Oracle database administration, a solid understanding of Exadata architecture, strong SQL/PL/SQL skills, good knowledge of performance tuning and troubleshooting, and excellent communication and collaboration skills. Being self-motivated, detail-oriented, and capable of working independently or as part of a team is essential for this position.

To be considered for this role, you must be an Oracle Certified Database Professional and have expertise in Data Guard, RMAN (Recovery Manager), and Data Pump. The hiring process includes an HR screening round, followed by Technical Round 1, Technical Round 2, and a final HR round.
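Routine DBA health checks like those described are often scripted. Below is a minimal sketch using the python-oracledb driver to flag tablespaces above a usage threshold; the connection details are placeholders, and the query is the classic dba_data_files/dba_free_space join (it assumes SELECT access to the DBA_* dictionary views).

```python
# Illustrative tablespace-usage check with python-oracledb (connection
# details are placeholder assumptions; requires DBA_* view privileges).
import oracledb  # pip install oracledb

SQL = """
SELECT df.tablespace_name,
       ROUND((df.bytes - NVL(fs.bytes, 0)) / df.bytes * 100, 1) AS pct_used
FROM   (SELECT tablespace_name, SUM(bytes) bytes
        FROM dba_data_files GROUP BY tablespace_name) df
LEFT JOIN (SELECT tablespace_name, SUM(bytes) bytes
           FROM dba_free_space GROUP BY tablespace_name) fs
       ON df.tablespace_name = fs.tablespace_name
"""

with oracledb.connect(user="monitor", password="secret",
                      dsn="dbhost/ORCLPDB1") as con:   # assumed DSN
    with con.cursor() as cur:
        for name, pct_used in cur.execute(SQL):
            if pct_used and pct_used > 85:             # alert threshold
                print(f"WARNING {name}: {pct_used}% used")
```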

Posted 3 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Delhi

On-site

As a frontend developer at Nomiso India, you will be responsible for building a workflow automation system to simplify existing manual processes. Your role will involve owning lifecycle management, automating platform operations, leading issue resolution, defining compliance standards, integrating various tools, driving observability and performance tuning initiatives, and mentoring team members while leading operational best practices.

You can expect a stimulating and fun work environment at Nomiso, where innovation and thought leadership are highly valued. We provide opportunities for career growth, idea generation, and innovation at all levels of the company. As part of our team, you will be encouraged to push your boundaries and fulfill your career aspirations.

The core tools and technology stack you will work with includes OpenShift, Kubernetes, GitOps, Ansible, Terraform, Prometheus, Grafana, the EFK stack, Vault, SCCs, RBAC, NetworkPolicies, and more.

To qualify for this role, you should have a BE/B.Tech or equivalent degree in Computer Science or a related field. The position is based in Delhi-NCR.

Join us at Nomiso India and be part of a dynamic team that thrives on ideas, innovation, and challenges. Your contributions will be valued, and you will have the opportunity to grow professionally in a fast-paced and exciting environment. Let's work together to simplify complex business problems and empower our customers with effective solutions.

Posted 3 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a Data Engineer, you will be responsible for designing, building, and maintaining scalable ETL pipelines using Java and SQL-based frameworks. Your role involves extracting data from various structured and unstructured sources, transforming it into formats suitable for analytics and reporting, and collaborating with data scientists, analysts, and business stakeholders to gather data requirements and optimize data delivery. Additionally, you will develop and maintain data models, databases, and data integration solutions, while monitoring data pipelines and troubleshooting data issues to ensure data quality and integrity.

Your expertise in Java for backend/ETL development and proficiency in SQL for data manipulation, querying, and performance tuning will be crucial in this role. You should have hands-on experience with ETL tools such as Apache NiFi, Talend, Informatica, or custom-built ETL pipelines, familiarity with relational databases like PostgreSQL, MySQL, and Oracle, and a grasp of data warehousing concepts. Experience with version control systems like Git is also required.

Furthermore, you will be responsible for optimizing data flow and pipeline architecture for performance and scalability, documenting data flow diagrams, ETL processes, and technical specifications, and ensuring adherence to security, governance, and compliance standards related to data.

To qualify for this position, you should hold a Bachelor's degree in Computer Science, Information Systems, Engineering, or a related field, along with at least 5 years of professional experience as a Data Engineer or in a similar role. Your strong technical skills and practical experience in data engineering will be essential in successfully fulfilling the responsibilities of this role.
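The extract-transform-load loop described above can be illustrated end to end in a few lines. The stdlib-only Python sketch below reads raw CSV rows, normalizes them, and loads them into SQLite; in the actual role the same shape would be expressed in Java with NiFi/Talend/Informatica and a production RDBMS. All file, table, and column names are invented.

```python
# Minimal ETL sketch (names are invented; the role itself targets
# Java-based frameworks and enterprise databases).
import csv
import io
import sqlite3

RAW = "id,amount,country\n1, 10.50 ,in\n2, 3.25 ,IN\n"  # stand-in for a source file

# Extract: parse the raw CSV.
rows = list(csv.DictReader(io.StringIO(RAW)))

# Transform: trim, cast and normalize values.
clean = [
    (int(r["id"]), float(r["amount"].strip()), r["country"].strip().upper())
    for r in rows
]

# Load: upsert into the target table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, amount REAL, country TEXT)")
con.executemany("INSERT OR REPLACE INTO sales VALUES (?, ?, ?)", clean)
con.commit()

print(con.execute("SELECT COUNT(*), SUM(amount) FROM sales").fetchone())  # (2, 13.75)
```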

Posted 3 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

We are currently seeking candidates for the role of .NET Full-Stack Developer with React/Angular for our client Wipro in Chennai. Selected candidates must be available for a face-to-face discussion on Saturday, 19th July.

As a Full-Stack Developer, you should have:
- A minimum of 6 years of experience in C#, .NET Core, and SQL development.
- Hands-on experience in React, .NET Core, C#, Web API, and Entity Framework Core.
- A strong understanding of object-oriented programming.
- Expertise in designing, coding, debugging, technical problem-solving, prototyping, performance tuning, and unit testing.
- Experience with full-lifecycle software development processes and methods.
- Strong communication skills, problem-solving abilities, and analytical skills.

Candidates with a maximum notice period of 30 days will be considered for this position. The mode of work is onsite in Chennai. If you meet the above qualifications and are interested in this opportunity, please be available for the face-to-face discussion on 19th July.

Thank you,
Talent Acquisition Team
SK Kishan Babu

Posted 3 weeks ago

Apply

10.0 - 14.0 years

0 Lacs

Karnataka

On-site

As a Senior Software Engineer at Vonage, you will play a crucial role in building the infrastructure and business logic for cutting-edge unified communication platforms. Your responsibilities will include independently designing, developing, testing, and documenting moderately complex software systems and applications. Collaborating with analysts, peers, and stakeholders, you will work on highly complex systems to ensure alignment with specified functional and business requirements.

You will be expected to estimate software development tasks, handle multiple priorities in a DevOps environment, and prioritize work based on team and business needs. Writing end-to-end automated tests for business-critical components will be essential to ensure quality, performance, and adherence to established specifications. Additionally, you will have the opportunity to innovate disruptive technologies, communicate your ideas effectively, and collaborate with experts to bring your vision to life.

To excel in this role, you should possess a desire to work within a talented and motivated team of engineers. Proficiency in Vue.js is a must, along with experience in Angular, Web Components, PWAs, Web Extensions, and Service Workers. You should be able to apply analytical skills to evaluate complex technical problems, confidently see projects through to completion, and work effectively with both technical and non-technical individuals. Independent problem-solving, leadership skills, and the ability to communicate technically are crucial for success.

The qualifications required for this position include a Bachelor's degree in Computer Science, Electrical Engineering, or a related field, along with over 10 years of experience in software engineering focused on building front-end web applications using data-binding frameworks. Professional development experience in modern JS frameworks, RESTful APIs, microservices technologies, and cloud environments is essential. Knowledge of performance tuning, best practices for the software development life cycle, and troubleshooting in a full-stack environment is also necessary.

Preferred qualifications include experience in Agile software development methodology, familiarity with AWS cloud deployment and maintenance tools, and knowledge of CRMs like Salesforce, Clio, and Zendesk. Proficiency in high-level programming languages such as JavaScript, Ruby, or Python would be advantageous. Staying up-to-date with the latest technologies and demonstrating excellent technical communication skills will further enhance your contributions to the team.

Posted 3 weeks ago

Apply

5.0 - 9.0 years

9 - 15 Lacs

Chennai

Hybrid

Greetings from Aspire Systems! We are currently hiring for the below skill set.

Job Title: Senior Oracle SQL & PL/SQL Developer
Experience: 5 to 8 years
Location: Chennai / Bangalore / Kochi
Notice: Immediate to 20 days
Share CV to: safoora.imthiyas@aspiresys.com

Job Description:
- 5-8 years of hands-on experience in Oracle SQL and PL/SQL development.
- Strong understanding of relational database concepts and performance tuning.
- Experience with job scheduling tools.
- Familiarity with SQL*Loader and data integration tools.
- Exposure to enterprise environments across domains (insurance preferred).
- Excellent problem-solving and communication skills.
- Experience working in Agile/Scrum environments.
- Knowledge of Azure DevOps and mainframe.
- Knowledge of version control systems (e.g., Git) and CI/CD pipelines.
- Strong analytical and critical-thinking abilities.
- Effective communication and collaboration with global teams.
- Adaptability and eagerness to learn new technologies.

Posted 3 weeks ago

Apply

5.0 - 7.0 years

6 - 10 Lacs

Hyderabad

Work from Office

Career Category: Information Systems

Site Reliability Engineer II

ABOUT AMGEN
Amgen harnesses the best of biology and technology to fight the world's toughest diseases and make people's lives easier, fuller, and longer. We discover, develop, manufacture, and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what's known today.

ABOUT THE ROLE
Let's do this. Let's change the world. We are looking for a Site Reliability Engineer/Cloud Engineer (SRE2) to work on the performance optimization, standardization, and automation of Amgen's critical infrastructure and systems. This role is crucial to ensuring the reliability, scalability, and cost-effectiveness of our production systems. The ideal candidate will work on operational excellence through automation, incident response, and proactive performance tuning, while also reducing infrastructure costs. You will work closely with cross-functional teams to establish best practices for service availability, efficiency, and cost control.

Roles & Responsibilities:

System Reliability, Performance Optimization & Cost Reduction: Ensure the reliability, scalability, and performance of Amgen's infrastructure, platforms, and applications. Proactively identify and resolve performance bottlenecks and implement long-term fixes. Continuously evaluate system design and usage to identify opportunities for cost optimization, ensuring infrastructure efficiency without compromising reliability.

Automation & Infrastructure as Code (IaC): Drive the adoption of automation and Infrastructure as Code (IaC) across the organization to streamline operations, minimize manual interventions, and enhance scalability. Implement tools and frameworks (such as Terraform, Ansible, or Kubernetes) that increase efficiency and reduce infrastructure costs through optimized resource utilization.

Standardization of Processes & Tools: Establish standardized operational processes, tools, and frameworks across Amgen's technology stack to ensure consistency, maintainability, and best-in-class reliability practices. Champion the use of industry standards to optimize performance and increase operational efficiency.

Monitoring, Incident Management & Continuous Improvement: Implement and maintain comprehensive monitoring, alerting, and logging systems to detect issues early and ensure rapid incident response. Lead the incident management process to minimize downtime, conduct root cause analysis, and implement preventive measures to avoid future occurrences. Foster a culture of continuous improvement by leveraging data from incidents and performance monitoring.

Collaboration & Cross-Functional Leadership: Partner with software engineering and IT teams to integrate reliability, performance optimization, and cost-saving strategies throughout the development lifecycle. Act as an SME for SRE principles and advocate for best practices on assigned projects.

Capacity Planning & Disaster Recovery: Execute capacity planning processes to support future growth, performance, and cost management. Maintain disaster recovery strategies to ensure system reliability and minimize downtime in the event of failures.

Must-Have Skills:
- Experience with AWS/Azure cloud services
- Proficiency in CI/CD (Jenkins/GitLab), observability, IaC, GitOps, etc.
- Experience with containerization (Docker) and orchestration tools (Kubernetes) to optimize resource usage and improve scalability
- Ability to learn new technologies quickly
- Strong problem-solving and analytical skills
- Excellent communication and teamwork skills

Good-to-Have Skills:
- Knowledge of cloud-native technologies and strategies for cost optimization in multi-cloud environments
- Familiarity with distributed systems, databases, and large-scale system architectures
- Bachelor's degree in Computer Science and Engineering preferred; other engineering fields considered
- Databricks knowledge/exposure is good to have (upskilling possible if hired)

Soft Skills:
- Ability to foster a collaborative and innovative work environment
- Strong problem-solving abilities and attention to detail
- High degree of initiative and self-motivation

Basic Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field
- 5-7 years of experience in IT infrastructure, with at least 4+ years in Site Reliability Engineering or related fields
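Monitoring and alerting work like this typically starts with exposing service metrics for a scraper such as Prometheus. A minimal sketch using the prometheus_client Python library follows; the metric names and port are illustrative assumptions, not Amgen specifics.

```python
# Minimal metrics endpoint sketch with prometheus_client (metric names
# and port are placeholder assumptions).
import random
import time

from prometheus_client import Counter, Gauge, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled")
LATENCY = Gauge("app_request_latency_seconds", "Last observed request latency")

start_http_server(8000)  # Prometheus scrapes http://host:8000/metrics

while True:
    # Stand-in for real work; record throughput and latency per request.
    duration = random.uniform(0.01, 0.2)
    time.sleep(duration)
    REQUESTS.inc()
    LATENCY.set(duration)
```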

Posted 3 weeks ago

Apply

4.0 - 6.0 years

10 - 14 Lacs

Hyderabad

Work from Office

In this role, you will design, build, and maintain data lake solutions for scientific data that drive business decisions for Research. You will build scalable and high-performance data engineering solutions for large scientific datasets and collaborate with Research stakeholders. The ideal candidate has experience in the pharmaceutical or biotech industry, demonstrates strong technical skills, is proficient with big data technologies, and has a deep understanding of data architecture and ETL processes.

Roles & Responsibilities:
- Design, develop, and implement data pipelines, ETL/ELT processes, and data integration solutions
- Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks
- Develop and maintain data models for biopharma scientific data, data dictionaries, and other documentation to ensure data accuracy and consistency
- Optimize large datasets for query performance
- Collaborate with global cross-functional teams, including research scientists, to understand data requirements and design solutions that meet business needs
- Implement data security and privacy measures to protect sensitive data
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
- Collaborate with Data Architects, Business SMEs, Software Engineers, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions
- Identify and resolve complex data-related challenges
- Adhere to best practices for coding, testing, and designing reusable code/components
- Explore new tools and technologies that will help to improve ETL platform performance
- Participate in sprint planning meetings and provide estimations on technical implementation
- Maintain comprehensive documentation of processes, systems, and solutions

Basic Qualifications and Experience:
- Doctorate degree, OR
- Master's degree with 4-6 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics, or a related field, OR
- Bachelor's degree with 6-8 years of experience in such a field, OR
- Diploma with 10-12 years of experience in such a field

Preferred Qualifications and Experience:
- 3+ years of experience in implementing and supporting biopharma scientific research data analytics (software platforms)

Functional Skills:

Must-Have Skills:
- Proficiency in SQL and Python for data engineering, test automation frameworks (pytest), and scripting tasks
- Hands-on experience with big data technologies and platforms such as Databricks and Apache Spark (PySpark, SparkSQL), workflow orchestration, and performance tuning on big data processing
- Excellent problem-solving skills and the ability to work with large, complex datasets

Good-to-Have Skills:
- A passion for tackling complex challenges in drug discovery with technology and data
- Strong understanding of data modeling, data warehousing, and data integration concepts
- Strong experience using RDBMSs (e.g., Oracle, MySQL, SQL Server, PostgreSQL)
- Knowledge of cloud data platforms (AWS preferred)
- Experience with data visualization tools (e.g., Dash, Plotly, Spotfire)
- Experience with diagramming and collaboration tools such as Miro, Lucidchart, or similar tools for process mapping and brainstorming
- Experience writing and maintaining technical documentation in Confluence
- Understanding of data governance frameworks, tools, and best practices

Professional Certifications:
- Databricks Certified Data Engineer Professional preferred

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Strong communication and collaboration skills
- Demonstrated awareness of how to function in a team setting
- Demonstrated presentation skills

EQUAL OPPORTUNITY STATEMENT
We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.

Apply now for a career that defies imagination. Objects in your future are closer than they appear. Join us. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed, and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law.
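Performance tuning on Spark workloads, as called for above, often comes down to partitioning and caching decisions. The PySpark sketch below shows the pattern; the dataset path and column names are invented for illustration.

```python
# Illustrative PySpark partitioning/caching sketch (path and columns
# are invented; a real pipeline would read from the data lake).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("assay-aggregation").getOrCreate()

df = spark.read.parquet("s3://example-bucket/assays/")   # hypothetical source

# Cache a frequently reused, filtered subset instead of recomputing it.
recent = df.filter(F.col("run_date") >= "2024-01-01").cache()

summary = (
    recent.withColumn("run_year", F.year(F.to_date("run_date")))
          .groupBy("run_year", "compound_id")
          .agg(F.avg("activity").alias("mean_activity"))
)

# Partition output by a low-cardinality column so later reads can
# prune whole directories instead of scanning everything.
summary.write.mode("overwrite").partitionBy("run_year").parquet(
    "s3://example-bucket/summaries/"
)
```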

Posted 3 weeks ago

Apply

10.0 - 14.0 years

12 - 16 Lacs

Hyderabad

Work from Office

We are seeking a Sr Manager, Data Sciences: Amgen's most senior individual-contributor authority on building and scaling end-to-end machine-learning and generative-AI platforms. Sitting at the intersection of engineering excellence and data-science enablement, you will design the core services, infrastructure, and governance controls that allow hundreds of practitioners to prototype, deploy, and monitor models (classical ML, deep learning, and LLMs) securely and cost-effectively. Acting as a player-coach, you will establish platform strategy, define technical standards, and partner with DevOps, Security, Compliance, and Product teams to deliver a frictionless, enterprise-grade AI developer experience.

Roles & Responsibilities:
- Develop and execute a multi-year data-science strategy and roadmap that directly supports corporate objectives, translating it into measurable quarterly OKRs for the team.
- Lead, mentor, and grow a high-performing staff of data scientists and ML engineers, providing technical direction, career development, and continuous learning opportunities.
- Own the end-to-end delivery of advanced analytics and machine-learning solutions, from problem framing and data acquisition through model deployment, monitoring, and iterative improvement, ensuring each project delivers clear business value.
- Prioritise and manage a balanced portfolio of initiatives, applying ROI, risk, and resource-capacity criteria to allocate effort effectively across research, clinical, manufacturing, and commercial domains.
- Provide hands-on guidance on algorithm selection and experimentation (regression, classification, clustering, time-series, deep learning, generative AI, causal inference), ensuring methodological rigour and reproducibility.
- Establish and enforce best practices for code quality, version control, MLOps pipelines, model governance, and responsible-AI safeguards (privacy, fairness, explainability).
- Partner with Data Engineering, Product, IT Security, and Business stakeholders to integrate models into production systems via robust APIs, dashboards, or workflow automations with well-defined SLAs.
- Manage cloud and on-prem analytics environments, optimising performance, reliability, and cost; negotiate vendor contracts and influence platform roadmaps where appropriate.
- Champion a data-driven culture by communicating insights and model performance to VP/SVP-level leaders through clear storytelling, visualisations, and actionable recommendations.
- Track emerging techniques, regulatory trends, and tooling in AI/ML; pilot innovations that keep the organisation at the forefront of data-science practice and compliance requirements.

Must-Have Skills:
- Leadership & Delivery: 10+ years in advanced analytics with 4+ years managing high-performing data-science or ML teams, steering projects from problem framing through production.
- Algorithmic Expertise: Deep command of classical ML, time-series, deep-learning (CNNs, transformers), and causal-inference techniques, with sound judgement on when and how to apply each.
- Production Engineering: Expert Python and strong SQL, plus hands-on experience deploying models via modern MLOps stacks (MLflow, Kubeflow, SageMaker, Vertex AI, or Azure ML) with automated monitoring and retraining.
- Business Influence: Proven ability to translate complex analytics into concise, outcome-oriented narratives that inform VP/SVP-level decisions and secure investment.
- Cloud & Cost Governance: Working knowledge of AWS, Azure, or GCP, including performance tuning and cost optimisation for large-scale data and GPU/CPU workloads.
- Responsible AI & Compliance: Familiarity with privacy, security, and AI-governance frameworks (GDPR, HIPAA, GxP, EU AI Act) and a track record of embedding fairness, explainability, and audit controls throughout the model lifecycle.

Good-to-Have Skills:
- Experience in the biotechnology or pharma industry is a big plus.
- Published thought leadership or conference talks on enterprise GenAI adoption.
- Master's degree in Computer Science and/or Data Science.
- Familiarity with Agile methodologies and the Scaled Agile Framework (SAFe) for project delivery.

Education and Professional Certifications:
- Master's degree with 10-14+ years of experience in Computer Science, IT, or a related field, OR
- Bachelor's degree with 12-17+ years of experience in Computer Science, IT, or a related field.
- Certifications on GenAI/ML platforms (AWS AI, Azure AI Engineer, Google Cloud ML, etc.) are a plus.

Soft Skills:
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to work effectively with global, virtual teams.
- High degree of initiative and self-motivation.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Ability to learn quickly, be organized, and detail-oriented.
- Strong presentation and public-speaking skills.

EQUAL OPPORTUNITY STATEMENT
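MLOps platforms like the one this role owns typically standardize experiment tracking. A minimal MLflow sketch is below; the experiment name, parameters, and metric values are placeholders, and a real platform would point MLFLOW_TRACKING_URI at a shared tracking server rather than the local ./mlruns default.

```python
# Minimal MLflow experiment-tracking sketch (experiment name, params
# and metric values are placeholder assumptions).
import mlflow

mlflow.set_experiment("demo-classifier")  # hypothetical experiment

with mlflow.start_run(run_name="baseline"):
    # Log the hyperparameters that produced this run.
    mlflow.log_param("model", "logistic_regression")
    mlflow.log_param("C", 1.0)

    # Stand-in for training/evaluation; a real run computes this.
    validation_auc = 0.87
    mlflow.log_metric("val_auc", validation_auc)

print("run logged; inspect with: mlflow ui")
```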

Posted 3 weeks ago

Apply

5.0 - 10.0 years

7 - 11 Lacs

Hyderabad

Work from Office

The role is responsible for performance monitoring, maintenance, and reliable operation of BI platforms, BI servers, and databases. It involves managing BI servers and user administration across different environments, ensuring data is stored and retrieved efficiently, safeguarding sensitive information, and ensuring the uptime, performance, and security of IT infrastructure and software.

We are seeking a skilled BI Platform Administrator to manage, maintain, and optimize our enterprise Power BI, Spotfire, and Tableau platforms. The ideal candidate will ensure seamless performance, governance, user access, platform upgrades, troubleshooting, and best practices across our BI environments.

Roles & Responsibilities:
- Administer and maintain Spotfire, Power BI Service, Power BI Report Server, and Tableau Server/Online on cloud platforms (AWS, Azure, GCP); AWS cloud experience preferred.
- Configure, monitor, and optimize performance, capacity, and availability of BI platforms.
- Set up and manage user roles, permissions, and security policies.
- Manage BI platform upgrades, patches, and migrations.
- Monitor scheduled data refreshes and troubleshoot failures.
- Implement governance frameworks to ensure compliance with data policies.
- Collaborate with BI developers, data engineers, and business users for efficient platform usage.
- Automate routine administrative tasks using scripts (PowerShell, Python, etc.).
- Create and maintain documentation of configurations, operational procedures, and change management.
- Install, configure, and maintain BI tools on different operating systems, servers, and applications to ensure their reliability and performance.
- Monitor platform performance and uptime, addressing any issues promptly to prevent service interruptions.
- Implement and maintain security measures to protect platforms from unauthorized access, vulnerabilities, and other threats.
- Manage backup procedures and ensure data is securely backed up and recoverable in case of system failures.
- Provide technical support to users, troubleshooting and resolving issues related to system access, performance, and software.
- Apply operating system updates, patches, and configuration changes as necessary.
- Work closely with network administrators, database administrators, and other IT professionals to ensure platforms are integrated and functioning optimally.
- Monitor and optimize database performance, including query tuning, indexing, and resource allocation.
- Work closely with developers, data engineers, system administrators, and other IT staff to support database-related needs and ensure optimal platform performance.

Basic Qualifications and Experience:
- 5+ years of overall experience administering BI platforms preferred.
- 3+ years of experience administering Power BI Service and/or Power BI Report Server, or 3+ years administering Spotfire.
- 2+ years of experience administering Tableau Server or Tableau Cloud.
- Strong knowledge of Active Directory, SSO/SAML, and role-based access control (RBAC).
- Experience with platform monitoring and troubleshooting (Power BI Gateway logs, Tableau logs, etc.).
- Experience with Spotfire web servers, caching, or application-server architecture.
- Scripting experience (e.g., PowerShell, DAX, or Python) for automation and monitoring.
- Strong understanding of data governance, row-level security, and compliance practices.
- Experience working with enterprise data sources (SQL Server, Snowflake, Oracle, etc.).
- Familiarity with capacity planning, load balancing, and scaling strategies for BI tools.

Functional Skills:

Should Have:
- Knowledge of Power BI Premium capacity management, Tableau resource management, or Spotfire caching and application servers.
- Experience interacting directly with end users.
- Experience integrating BI platforms with CI/CD pipelines and DevOps tools.
- Hands-on experience in user-adoption tracking, audit logging, and license management.
- Ability to conduct health checks and implement performance-tuning recommendations.
- Understanding of multi-tenant environments or large-scale deployments.

Good to Have:
- Experience with the Power BI REST API or Tableau REST API for automation.
- Familiarity with AWS services and/or their equivalents.
- Background in data visualization or report development for better user collaboration.
- Exposure to other BI tools (e.g., Looker, Qlik, MicroStrategy).
- Knowledge of ITIL practices or experience working in a ticket-based support environment.
- Experience in a regulated industry (finance, healthcare, etc.) with strong compliance requirements.

Education & Experience:
- Master's degree with 3-7+ years of experience in Business, Engineering, IT, or a related field, OR
- Bachelor's degree with 5-9 years of experience in Business, Engineering, IT, or a related field.

Shift Information: This position requires you to work a later shift and may be assigned a second- or third-shift schedule. Candidates must be willing and able to work evening or night shifts, as required based on business requirements.

EQUAL OPPORTUNITY STATEMENT
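Automating refresh-failure monitoring, mentioned above, is commonly done against the Power BI REST API. The sketch below polls a dataset's refresh history; the workspace and dataset IDs are placeholders, and the bearer token is assumed to come from Azure AD (via MSAL in practice).

```python
# Illustrative Power BI refresh-history check (IDs are placeholders;
# TOKEN is assumed to be an Azure AD access token with dataset scope).
import requests

TOKEN = "<azure-ad-access-token>"           # acquired via MSAL in practice
GROUP, DATASET = "<workspace-id>", "<dataset-id>"

url = (f"https://api.powerbi.com/v1.0/myorg/groups/{GROUP}"
       f"/datasets/{DATASET}/refreshes?$top=5")

resp = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"}, timeout=30)
resp.raise_for_status()

for refresh in resp.json()["value"]:
    if refresh.get("status") == "Failed":   # other values: "Completed", "Unknown"
        print("ALERT:", refresh.get("startTime"), refresh.get("serviceExceptionJson"))
```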

Posted 3 weeks ago

Apply

3.0 - 4.0 years

40 - 45 Lacs

Hyderabad

Work from Office

Let's do this. Let's change the world. We are looking for a highly motivated, expert Senior Data Engineer who can own the design and development of complex data pipelines, solutions, and frameworks. The ideal candidate will be responsible for designing, developing, and optimizing data pipelines, data integration frameworks, and metadata-driven architectures that enable seamless data access and analytics. This role prefers deep expertise in big data processing, distributed computing, data modeling, and governance frameworks to support self-service analytics, AI-driven insights, and enterprise-wide data management.

Roles & Responsibilities:
- Design, develop, and maintain scalable ETL/ELT pipelines to support structured, semi-structured, and unstructured data processing across the Enterprise Data Fabric.
- Implement real-time and batch data processing solutions, integrating data from multiple sources into a unified, governed data fabric architecture.
- Optimize big data processing frameworks using Apache Spark, Hadoop, or similar distributed computing technologies to ensure high availability and cost efficiency.
- Work with metadata management and data lineage tracking tools to enable enterprise-wide data discovery and governance.
- Ensure data security, compliance, and role-based access control (RBAC) across data environments.
- Optimize query performance, indexing strategies, partitioning, and caching for large-scale data sets.
- Develop CI/CD pipelines for automated data pipeline deployments, version control, and monitoring.
- Implement data virtualization techniques to provide seamless access to data across multiple storage systems.
- Collaborate with cross-functional teams, including data architects, business analysts, and DevOps teams, to align data engineering strategies with enterprise goals.
- Stay up to date with emerging data technologies and best practices, ensuring continuous improvement of Enterprise Data Fabric architectures.

Must-Have Skills:
- Hands-on experience with data engineering technologies such as Databricks, PySpark, SparkSQL, Apache Spark, AWS, Python, SQL, and Scaled Agile methodologies.
- Proficiency in workflow orchestration and performance tuning on big data processing.
- Strong understanding of AWS services.
- Experience with Data Fabric, Data Mesh, or similar enterprise-wide data architectures.
- Ability to quickly learn, adapt, and apply new technologies.
- Strong problem-solving and analytical skills.
- Excellent communication and teamwork skills.
- Experience with the Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices.

Good-to-Have Skills:
- Deep expertise in the biotech and pharma industries.
- Experience writing APIs to make data available to consumers.
- Experience with SQL/NoSQL databases and vector databases for large language models.
- Experience with data modeling and performance tuning for both OLAP and OLTP databases.
- Experience with software engineering best practices, including version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.

Education and Professional Certifications:
- Master's degree and 3-4+ years of Computer Science, IT, or related field experience, OR
- Bachelor's degree and 5-8+ years of Computer Science, IT, or related field experience.
- AWS Certified Data Engineer preferred.
- Databricks certification preferred.
- Scaled Agile SAFe certification preferred.

Soft Skills:
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to work effectively with global, virtual teams.
- High degree of initiative and self-motivation.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Ability to learn quickly, be organized, and detail-oriented.
- Strong presentation and public-speaking skills.
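One of the most common Spark query-performance levers mentioned above is avoiding a shuffle when joining a large fact table to a small dimension table. A brief PySpark broadcast-join sketch follows; the table contents are invented for illustration.

```python
# Illustrative broadcast-join optimization in PySpark (tables are
# invented; broadcasting suits dimension tables that fit in memory).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("broadcast-join-demo").getOrCreate()

facts = spark.range(1_000_000).withColumn("dim_id", F.col("id") % 100)  # large side
dims = spark.createDataFrame(
    [(i, f"label_{i}") for i in range(100)], ["dim_id", "label"]        # small side
)

# F.broadcast ships the small table to every executor, replacing a
# shuffle join with a map-side hash join.
joined = facts.join(F.broadcast(dims), "dim_id")
joined.explain()  # physical plan should show BroadcastHashJoin
```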

Posted 3 weeks ago

Apply

9.0 - 12.0 years

40 - 45 Lacs

Hyderabad

Work from Office

Let's do this. Let's change the world. We are looking for a highly motivated, expert Senior Data Engineer who can own the design and development of complex data pipelines, solutions, and frameworks. The ideal candidate will be responsible for designing, developing, and optimizing data pipelines, data integration frameworks, and metadata-driven architectures that enable seamless data access and analytics. This role requires deep expertise in big data processing, distributed computing, data modeling, and governance frameworks to support self-service analytics, AI-driven insights, and enterprise-wide data management.

Roles & Responsibilities:
Design, develop, and maintain scalable ETL/ELT pipelines to support structured, semi-structured, and unstructured data processing across the Enterprise Data Fabric
Implement real-time and batch data processing solutions, integrating data from multiple sources into a unified, governed data fabric architecture
Optimize big data processing frameworks using Apache Spark, Hadoop, or similar distributed computing technologies to ensure high availability and cost efficiency
Work with metadata management and data lineage tracking tools to enable enterprise-wide data discovery and governance
Ensure data security, compliance, and role-based access control (RBAC) across data environments
Optimize query performance, indexing strategies, partitioning, and caching for large-scale datasets (see the sketch after this posting)
Develop CI/CD pipelines for automated data pipeline deployments, version control, and monitoring
Implement data virtualization techniques to provide seamless access to data across multiple storage systems
Collaborate with cross-functional teams, including data architects, business analysts, and DevOps teams, to align data engineering strategies with enterprise goals
Stay up to date with emerging data technologies and best practices, ensuring continuous improvement of Enterprise Data Fabric architectures

Must-Have Skills:
Hands-on experience with data engineering technologies such as Databricks, PySpark, Spark SQL, Apache Spark, AWS, Python, SQL, and Scaled Agile methodologies
Proficiency in workflow orchestration and performance tuning of big data processing
Strong understanding of AWS services
Experience with Data Fabric, Data Mesh, or similar enterprise-wide data architectures
Ability to quickly learn, adapt, and apply new technologies
Strong problem-solving and analytical skills
Excellent communication and teamwork skills
Experience with Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices

Good-to-Have Skills:
Deep expertise in the biotech and pharma industries
Experience writing APIs to make data available to consumers
Experience with SQL/NoSQL databases and vector databases for large language models
Experience with data modeling and performance tuning for both OLAP and OLTP databases
Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps

Education and Professional Certifications:
9 to 12 years of Computer Science, IT, or related field experience
AWS Certified Data Engineer preferred
Databricks certification preferred
Scaled Agile SAFe certification preferred

Soft Skills:
Excellent analytical and troubleshooting skills
Strong verbal and written communication skills
Ability to work effectively with global, virtual teams
High degree of initiative and self-motivation
Ability to manage multiple priorities successfully
Team-oriented, with a focus on achieving team goals
Ability to learn quickly; organized and detail-oriented
Strong presentation and public speaking skills
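As a concrete illustration of the partitioning and caching work this posting describes, here is a minimal PySpark sketch; the bucket paths and the event_date/region names are invented placeholders, not details of the actual role or its systems.

```python
# Minimal, hypothetical PySpark sketch of partitioning and caching a large dataset.
# Paths and column names (event_date, region) are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("partition-and-cache-sketch")
    # Size shuffle parallelism to the cluster to avoid many tiny tasks.
    .config("spark.sql.shuffle.partitions", "200")
    .getOrCreate()
)

df = spark.read.parquet("s3://example-bucket/sales_raw/")

# Repartition on the common filter/join key so downstream stages shuffle less.
df = df.repartition("region")

# Cache only the frequently reused subset, not the whole table.
recent = df.filter(F.col("event_date") >= "2024-01-01").cache()
recent.count()  # Materialize the cache once.

# Partitioned output lets later queries prune whole directories by region.
recent.write.mode("overwrite").partitionBy("region").parquet(
    "s3://example-bucket/sales_curated/"
)
```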

Posted 3 weeks ago

Apply

9.0 - 11.0 years

8 - 12 Lacs

Hyderabad

Work from Office

We are looking for a Site Reliability Engineer/Cloud Engineer (SRE2) to work on the performance optimization, standardization, and automation of Amgen's critical infrastructure and systems. This role is crucial to ensuring the reliability, scalability, and cost-effectiveness of our production systems. The ideal candidate will drive operational excellence through automation, incident response, and proactive performance tuning, while also reducing infrastructure costs. You will work closely with cross-functional teams to establish best practices for service availability, efficiency, and cost control.

Roles & Responsibilities:
Lead and motivate a high-performing Test Automation team to deliver exceptional results; provide expert guidance and mentorship, fostering a culture of innovation and best practices
System Reliability, Performance Optimization & Cost Reduction: Ensure the reliability, scalability, and performance of Amgen's infrastructure, platforms, and applications. Proactively identify and resolve performance bottlenecks and implement long-term fixes. Continuously evaluate system design and usage to identify opportunities for cost optimization, ensuring infrastructure efficiency without compromising reliability (see the cost-optimization sketch after this posting)
Automation & Infrastructure as Code (IaC): Drive the adoption of automation and Infrastructure as Code across the organization to streamline operations, minimize manual interventions, and enhance scalability. Implement tools and frameworks (such as Terraform, Ansible, or Kubernetes) that increase efficiency and reduce infrastructure costs through optimized resource utilization
Standardization of Processes & Tools: Establish standardized operational processes, tools, and frameworks across Amgen's technology stack to ensure consistency, maintainability, and best-in-class reliability practices. Champion the use of industry standards to optimize performance and increase operational efficiency
Monitoring, Incident Management & Continuous Improvement: Implement and maintain comprehensive monitoring, alerting, and logging systems to detect issues early and ensure rapid incident response. Lead the incident management process to minimize downtime, conduct root cause analysis, and implement preventive measures to avoid future occurrences. Foster a culture of continuous improvement by leveraging data from incidents and performance monitoring
Collaboration & Cross-Functional Leadership: Partner with software engineering and IT teams to integrate reliability, performance optimization, and cost-saving strategies throughout the development lifecycle. Act as an SME for SRE principles and advocate for best practices on assigned projects
Capacity Planning & Disaster Recovery: Execute capacity planning processes to support future growth, performance, and cost management. Maintain disaster recovery strategies to ensure system reliability and minimize downtime in the event of failures

Must-Have Skills:
Experience with AWS/Azure cloud services
Good knowledge of visualization tools such as Power BI or Tableau
SQL/Python/PySpark/Spark knowledge
Proficiency in CI/CD (Jenkins/GitLab), observability, IaC, GitOps, etc.
Experience with containerization (Docker) and orchestration tools (Kubernetes) to optimize resource usage and improve scalability
Ability to learn new technologies quickly
Strong problem-solving and analytical skills
Excellent communication and teamwork skills

Good-to-Have Skills:
Knowledge of cloud-native technologies and strategies for cost optimization in multi-cloud environments
Familiarity with distributed systems, databases, and large-scale system architectures
Bachelor's degree in Computer Science and Engineering preferred; other engineering fields considered
Databricks knowledge/exposure is good to have (upskilling expected if hired)

Soft Skills:
Excellent analytical and troubleshooting skills
Strong verbal and written communication skills
Ability to work effectively with global, virtual teams
High degree of initiative and self-motivation
Ability to manage multiple priorities successfully
Team-oriented, with a focus on achieving team goals
Strong presentation and public speaking skills

Basic Qualifications:
Bachelor's degree in Computer Science, Engineering, or a related field
9-11+ years of experience in IT infrastructure, with at least 7+ years in Site Reliability Engineering or related fields
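As one concrete example of the cost-optimization work described above, the sketch below uses boto3 (the AWS SDK for Python) to flag unattached EBS volumes, a common source of wasted cloud spend; the region and size threshold are illustrative assumptions, not Amgen specifics.

```python
# Hypothetical cost-optimization sketch: flag unattached EBS volumes with boto3.
# Region and size threshold are illustrative assumptions.
import boto3

def find_unattached_volumes(region="us-east-1", min_size_gib=10):
    """Return unattached (status=available) EBS volumes worth reviewing."""
    ec2 = boto3.client("ec2", region_name=region)
    paginator = ec2.get_paginator("describe_volumes")
    candidates = []
    for page in paginator.paginate(
        Filters=[{"Name": "status", "Values": ["available"]}]
    ):
        for vol in page["Volumes"]:
            if vol["Size"] >= min_size_gib:
                candidates.append((vol["VolumeId"], vol["Size"]))
    return candidates

if __name__ == "__main__":
    for volume_id, size in find_unattached_volumes():
        print(f"{volume_id}: {size} GiB unattached - review before deleting")
```

A report like this would typically feed an alerting or ticketing workflow rather than delete resources automatically, keeping a human in the loop for destructive actions.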

Posted 3 weeks ago

Apply

5.0 - 8.0 years

16 - 18 Lacs

Hyderabad

Work from Office

Let's do this. Let's change the world. We are looking for a highly motivated, expert Data Engineer who can own the design and development of complex data pipelines, solutions, and frameworks, with deep domain knowledge of Manufacturing and/or Process Development and/or Supply Chain in biotech, life sciences, or pharma. The ideal candidate will be responsible for designing, developing, and optimizing data pipelines, data integration frameworks, and metadata-driven architectures that enable seamless data access and analytics. This role requires deep expertise in big data processing, distributed computing, data modeling, and governance frameworks to support self-service analytics, AI-driven insights, and enterprise-wide data management.

Roles & Responsibilities:
Design, develop, and maintain complex ETL/ELT data pipelines in Databricks using PySpark, Scala, and SQL to process large-scale datasets
Understand the biotech/pharma or related domains and build highly efficient data pipelines to migrate and deploy complex data across systems
Design and implement solutions to enable unified data access, governance, and interoperability across hybrid cloud environments
Ingest and transform structured and unstructured data from databases (PostgreSQL, MySQL, SQL Server, MongoDB, etc.), APIs, logs, event streams, images, PDFs, and third-party platforms
Ensure data integrity, accuracy, and consistency through rigorous quality checks and monitoring (see the validation sketch after this posting)
Apply expertise in data quality, data validation, and verification frameworks
Innovate, explore, and implement new tools and technologies to enhance efficient data processing
Proactively identify and implement opportunities to automate tasks and develop reusable frameworks
Work in an Agile and Scaled Agile (SAFe) environment, collaborating with cross-functional teams, product owners, and Scrum Masters to deliver incremental value
Use JIRA, Confluence, and Agile DevOps tools to manage sprints, backlogs, and user stories
Support continuous improvement, test automation, and DevOps practices in the data engineering lifecycle
Collaborate and communicate effectively with product teams and cross-functional teams to understand business requirements and translate them into technical solutions

Must-Have Skills:
Deep domain knowledge of Manufacturing and/or Process Development and/or Supply Chain in biotech, life sciences, or pharma
Hands-on experience with data engineering technologies such as Databricks, PySpark, Spark SQL, Apache Spark, AWS, Python, SQL, and Scaled Agile methodologies
Proficiency in workflow orchestration and performance tuning of big data processing
Strong understanding of AWS services
Ability to quickly learn, adapt, and apply new technologies
Strong problem-solving and analytical skills
Excellent communication and teamwork skills
Experience with Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices

Good-to-Have Skills:
Data engineering experience in the biotechnology or pharma industry
Experience writing APIs to make data available to consumers
Experience with SQL/NoSQL databases and vector databases for large language models
Experience with data modeling and performance tuning for both OLAP and OLTP databases
Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps

Education and Professional Certifications:
Master's degree and 3 to 4+ years of Computer Science, IT, or related field experience, OR Bachelor's degree and 5 to 8+ years of Computer Science, IT, or related field experience
AWS Certified Data Engineer preferred
Databricks certification preferred
Scaled Agile SAFe certification preferred

Soft Skills:
Excellent analytical and troubleshooting skills
Strong verbal and written communication skills
Ability to work effectively with global, virtual teams
High degree of initiative and self-motivation
Ability to manage multiple priorities successfully
Team-oriented, with a focus on achieving team goals
Ability to learn quickly; organized and detail-oriented
Strong presentation and public speaking skills
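To make the data-quality responsibilities above concrete, here is a minimal, hypothetical PySpark validation sketch; the dataset, columns, and thresholds (batch_records, batch_id, unit_yield) are invented for illustration and do not describe any real Amgen pipeline.

```python
# Hypothetical data-quality sketch in PySpark.
# Dataset, columns, and thresholds are invented for illustration.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks-sketch").getOrCreate()

df = spark.read.parquet("s3://example-bucket/batch_records/")
total = df.count()

# Completeness: key identifiers must never be null.
null_ids = df.filter(F.col("batch_id").isNull()).count()

# Uniqueness: batch_id should identify exactly one record.
dupes = total - df.select("batch_id").distinct().count()

# Validity: yields outside a plausible range suggest upstream errors.
out_of_range = df.filter(
    (F.col("unit_yield") < 0) | (F.col("unit_yield") > 100)
).count()

failures = {"null_ids": null_ids, "duplicates": dupes, "out_of_range": out_of_range}
bad = {name: count for name, count in failures.items() if count > 0}
if bad:
    # In a real pipeline this would alert and quarantine rather than just raise.
    raise ValueError(f"Data-quality checks failed: {bad}")
print(f"All checks passed on {total} records.")
```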

Posted 3 weeks ago

Apply

3.0 - 5.0 years

4 - 7 Lacs

Hyderabad

Work from Office

Let's do this. Let's change the world. We are looking for a highly motivated, expert Data Engineer who can own the design and development of complex data pipelines, solutions, and frameworks. The ideal candidate will be responsible for designing, developing, and optimizing data pipelines, data integration frameworks, and metadata-driven architectures that enable seamless data access and analytics. This role requires deep expertise in big data processing, distributed computing, data modeling, and governance frameworks to support self-service analytics, AI-driven insights, and enterprise-wide data management.

Roles & Responsibilities:
Design, develop, and maintain complex ETL/ELT data pipelines in Databricks using PySpark, Scala, and SQL to process large-scale datasets
Understand the biotech/pharma or related domains and build highly efficient data pipelines to migrate and deploy complex data across systems
Design and implement solutions to enable unified data access, governance, and interoperability across hybrid cloud environments
Ingest and transform structured and unstructured data from databases (PostgreSQL, MySQL, SQL Server, MongoDB, etc.), APIs, logs, event streams, images, PDFs, and third-party platforms (see the ingestion sketch after this posting)
Ensure data integrity, accuracy, and consistency through rigorous quality checks and monitoring
Apply expertise in data quality, data validation, and verification frameworks
Innovate, explore, and implement new tools and technologies to enhance efficient data processing
Proactively identify and implement opportunities to automate tasks and develop reusable frameworks
Work in an Agile and Scaled Agile (SAFe) environment, collaborating with cross-functional teams, product owners, and Scrum Masters to deliver incremental value
Use JIRA, Confluence, and Agile DevOps tools to manage sprints, backlogs, and user stories
Support continuous improvement, test automation, and DevOps practices in the data engineering lifecycle
Collaborate and communicate effectively with product teams and cross-functional teams to understand business requirements and translate them into technical solutions

Must-Have Skills:
Hands-on experience with data engineering technologies such as Databricks, PySpark, Spark SQL, Apache Spark, AWS, Python, SQL, and Scaled Agile methodologies
Proficiency in workflow orchestration and performance tuning of big data processing
Strong understanding of AWS services
Ability to quickly learn, adapt, and apply new technologies
Strong problem-solving and analytical skills
Excellent communication and teamwork skills
Experience with Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices

Good-to-Have Skills:
Data engineering experience in the biotechnology or pharma industry
Experience writing APIs to make data available to consumers
Experience with SQL/NoSQL databases and vector databases for large language models
Experience with data modeling and performance tuning for both OLAP and OLTP databases
Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps

Education and Professional Certifications:
Any degree and 6-8 years of experience
AWS Certified Data Engineer preferred
Databricks certification preferred
Scaled Agile SAFe certification preferred

Soft Skills:
Excellent analytical and troubleshooting skills
Strong verbal and written communication skills
Ability to work effectively with global, virtual teams
High degree of initiative and self-motivation
Ability to manage multiple priorities successfully
Team-oriented, with a focus on achieving team goals
Ability to learn quickly; organized and detail-oriented
Strong presentation and public speaking skills
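The ingestion responsibilities above typically start with a JDBC read; below is a minimal, hypothetical PySpark sketch pulling a PostgreSQL table into a data-lake landing zone. The host, credentials, table, and paths are placeholders, not real systems.

```python
# Hypothetical ingestion sketch: PostgreSQL -> data lake via Spark JDBC.
# Host, credentials, table, and paths are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("jdbc-ingest-sketch").getOrCreate()

df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://db.example.com:5432/ops")
    .option("dbtable", "public.shipments")
    .option("user", "etl_user")
    .option("password", "***")  # Use a secrets manager in practice.
    # Partitioned reads parallelize extraction; partitionColumn must be
    # numeric, date, or timestamp (shipment_id is assumed numeric here).
    .option("partitionColumn", "shipment_id")
    .option("lowerBound", "1")
    .option("upperBound", "1000000")
    .option("numPartitions", "8")
    .load()
)

# Land raw data as Parquet for downstream transformation and validation.
df.write.mode("append").parquet("s3://example-bucket/landing/shipments/")
```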

Posted 3 weeks ago

Apply