
10844 Apache Jobs - Page 33

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 years

2 - 3 Lacs

Puducherry

Remote

We're Hiring! Backend Engineer (NestJS / Node.js)

Location: Pondicherry / Chennai
Employment Type: Full-time
Experience: 3+ Years
Department: Engineering / Backend Team
Industry: Food Tech / E-commerce / SaaS

Responsibilities
● Design, build, and maintain scalable backend services using NestJS and Node.js.
● Implement clean, testable REST APIs and microservices to support frontend, mobile, and third-party integrations.
● Integrate with MongoDB and MySQL databases effectively.
● Architect and build event-driven systems using Apache Kafka or similar message brokers (a sketch follows this listing).
● Write reusable, modular, and performant code using Object-Oriented Programming (OOP) principles.
● Implement role-based access control, authentication (JWT, OAuth), and user management.
● Deploy and manage serverless functions using AWS Lambda and other AWS services.
● Collaborate with cross-functional teams (frontend, product, QA, and DevOps) to deliver features end-to-end.
● Participate in code reviews, architecture discussions, and continuous improvement processes.

Required Skills
● Strong proficiency in Node.js with the NestJS framework.
● Experience with both MongoDB and MySQL (data modeling, query optimization).
● Solid understanding of microservices architecture and API versioning.
● Practical experience with Kafka or other event-streaming platforms.
● Working knowledge of AWS Lambda, API Gateway, and other AWS services.
● Deep understanding of OOP, design patterns, and MVC/MVVM architectures.
● Experience building and consuming RESTful APIs (GraphQL is a plus).
● Familiarity with CI/CD pipelines and containerization (Docker).

Nice to Have
● Experience with the frontend stack: React / Next.js (for better collaboration).
● Familiarity with testing frameworks (Jest, Mocha, Supertest).
● Exposure to DevOps practices, monitoring (Prometheus, Grafana), and infrastructure as code (Terraform, CDK).
● Previous experience in food tech, logistics, or e-commerce platforms.

Ready to Join? Drop your CV at abdul.r@redblox.io. Let's build amazing things together!
#NestJS #NodeJS #TypeScript #BackendDeveloper #DeveloperJobs #TechJobs #Hiring #NowHiring

Job Types: Full-time, Permanent
Pay: ₹20,000.00 - ₹25,000.00 per month
Experience: Node.js with NestJS framework: 3 years (Required); React / Next.js: 3 years (Required); food tech, logistics, or e-commerce platforms: 3 years (Required)
Work Location: Remote
Application Deadline: 06/08/2025
Expected Start Date: 11/08/2025
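The event-driven design this posting asks for, with services reacting to Kafka messages rather than calling each other directly, is framework-agnostic. Below is a minimal Python sketch using the kafka-python client; the broker address and topic name are illustrative assumptions, not details from the posting. In the NestJS stack named above, the same pattern would typically use the Kafka transport in @nestjs/microservices.

```python
# A minimal event-driven sketch with kafka-python; broker and topic
# names are illustrative assumptions.
import json

from kafka import KafkaConsumer, KafkaProducer

BROKERS = ["localhost:9092"]  # assumption: a local development broker
TOPIC = "order-events"        # hypothetical topic name

def publish_order_created(order_id: str) -> None:
    """Emit an event instead of calling downstream services directly."""
    producer = KafkaProducer(
        bootstrap_servers=BROKERS,
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )
    producer.send(TOPIC, {"type": "order.created", "order_id": order_id})
    producer.flush()

def consume_forever() -> None:
    """Each consuming service reacts to events at its own pace."""
    consumer = KafkaConsumer(
        TOPIC,
        bootstrap_servers=BROKERS,
        group_id="notifications",  # consumer groups allow horizontal scaling
        value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    )
    for message in consumer:
        print("handling", message.value)
```

The design choice the posting implies: the producer does not know who consumes the event, so new services (notifications, analytics) can subscribe later without touching the order service.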

Posted 1 week ago

Apply

0 years

6 - 10 Lacs

Gurgaon

On-site

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities:
- Lead all phases of data engineering, including requirements analysis, data modeling, pipeline design, development, and testing
- Design and implement performance and operational enhancements for scalable data systems
- Develop reusable data components, frameworks, and patterns to accelerate team productivity and innovation
- Conduct code reviews and provide feedback aligned with data engineering best practices and performance optimization
- Ensure data solutions meet standards for quality, scalability, security, and maintainability through rigorous design and code reviews
- Actively participate in Agile/Scrum ceremonies to deliver high-quality data solutions
- Collaborate with software engineers, data analysts, and business stakeholders across Agile teams
- Troubleshoot and resolve production issues post-deployment, designing robust solutions as needed
- Design, develop, test, and document data pipelines and ETL processes, enhancing existing components to meet evolving business needs (see the sketch after this listing)
- Partner with architecture teams to drive forward-thinking data platform solutions
- Contribute to the design and architecture of secure, scalable, and maintainable data systems, clearly communicating design decisions to technical leadership
- Mentor junior engineers and collaborate on solution design with team members and product owners
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
- Bachelor's degree or equivalent experience
- Hands-on experience with cloud data services (AWS, Azure, or GCP)
- Experience building and maintaining ETL/ELT pipelines in enterprise environments
- Experience integrating with RESTful APIs
- Experience with Agile methodologies (Scrum, Kanban)
- Knowledge of data governance, security, privacy, and vulnerability management
- Understanding of authorization protocols (OAuth) and API integration
- Solid proficiency in SQL, NoSQL, and data modeling
- Proficiency with open-source tools such as Apache Flink, Iceberg, Spark, and PySpark
- Advanced Python skills for data engineering and data science (beyond Jupyter notebooks)
- Familiarity with big data technologies such as Spark, Hadoop, and Databricks
- Ability to build modular, testable, and reusable data solutions
- Solid grasp of data engineering concepts including: data catalogs, data warehouses, data lakes (especially Iceberg), and data dictionaries

Preferred Qualifications:
- Experience with GitHub, Terraform, and GitHub Actions
- Experience with real-time data streaming (Kafka, Kinesis)
- Experience with feature engineering and machine learning pipelines (MLOps)
- Knowledge of data warehousing platforms (Snowflake, Redshift, BigQuery)
- Familiarity with AWS native data engineering tools: Lambda, Lake Formation, Kinesis (Firehose, Data Streams), Glue (Data Catalog, ETL, Streaming), SageMaker, Athena, Redshift (including Spectrum)
- Demonstrated ability to mentor and guide junior engineers

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.
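As a concrete illustration of the ETL pipeline work referenced above, here is a minimal PySpark extract-transform-load sketch; the S3 paths, column names, and cleansing rules are assumptions for illustration, not details from the posting.

```python
# Minimal PySpark ETL sketch; paths and columns are illustrative
# assumptions, not details from the posting.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("claims-etl").getOrCreate()

# Extract: read raw records from a landing zone.
raw = spark.read.json("s3://example-bucket/landing/claims/")  # hypothetical path

# Transform: basic cleansing and derivation, kept small and testable.
clean = (
    raw.dropDuplicates(["claim_id"])
       .filter(F.col("amount") > 0)
       .withColumn("ingest_date", F.current_date())
)

# Load: write partitioned columnar output for downstream consumers.
clean.write.mode("overwrite").partitionBy("ingest_date").parquet(
    "s3://example-bucket/curated/claims/"
)
```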

Posted 1 week ago

Apply

5.0 years

3 - 9 Lacs

Hyderābād

On-site

Senior Site Reliability Engineer - JD

As a Senior Site Reliability Engineer (SRE), you will collaborate closely with our Development and IT teams to ensure the reliability, scalability, and performance of our applications. You will take ownership of setting and maintaining service-level objectives (SLOs), building robust monitoring and alerting, and continually improving our infrastructure and processes to maximize uptime and deliver an exceptional customer experience. This role operates at the intersection of development and operations, reinforcing best practices, automating solutions, and reducing toil across systems and platforms.

About QualMinds: QualMinds is a global technology company dedicated to empowering clients on their digital transformation journey. We help our clients design and develop world-class digital products, custom software, and platforms. Our primary focus is delivering enterprise-grade interactive software applications across web, desktop, mobile, and embedded platforms.

Responsibilities:
1. Ensure Reliability & Performance: Own the observability of our systems, ensuring they meet established service-level objectives (SLOs) and maintain high availability (an error-budget sketch follows this listing).
2. Cloud & Container Orchestration: Deploy, configure, and manage resources on Google Cloud Platform (GCP) and Google Kubernetes Engine (GKE), focusing on secure and scalable infrastructures.
3. Infrastructure Automation & Tooling: Set up and maintain automated build and deployment pipelines; drive continuous improvements to reduce manual work and risks.
4. Monitoring & Alerting: Develop and refine comprehensive monitoring solutions (performance, uptime, error rates, etc.) to detect issues early and minimize downtime.
5. Incident Management & Troubleshooting: Participate in on-call rotations; manage incidents through resolution, investigate root causes, and create blameless postmortems to prevent recurrences.
6. Collaboration with Development: Partner with development teams to design and release services that are production-ready from day one, emphasizing reliability, scalability, and performance.
7. Security & Compliance: Integrate security best practices into system design and operations; maintain compliance with SOC 2 and other relevant standards.
8. Performance & Capacity Planning: Continuously assess system performance and capacity; propose and implement improvements to meet current and future demands.
9. Technical Evangelism: Contribute to cultivating a culture of reliability through training, documentation, and mentorship across the organization.

Requirements:
- Bachelor's degree in Computer Science, Business Administration, or relevant work experience.
- A minimum of 5+ years in an SRE, DevOps, or similar role in an IT environment, required.
- Hands-on experience with Microsoft SQL Clusters, Elasticsearch, and Kubernetes, required.
- Deep familiarity with Windows or Linux environments and .NET or PHP stack applications, including IIS/Apache, SQL Server/MySQL, etc.
- Strong understanding of networking, firewalls, intrusion detection, and security best practices.
- Proven administrative experience with tools like Git, TFS, Bitbucket, and Bamboo for Continuous Integration, Delivery, and Deployment.
- Knowledge of automation testing tools such as SonarQube, Selenium, or comparable technologies.
- Experience with performance profiling, logging, metrics collection, and alerting tools.
- Competence in debugging solutions across diverse environments.
- Hands-on experience with GCP, AWS, or Azure, container orchestration (Kubernetes), and microservices-based architectures.
- Understanding of authentication, authorization, OAuth, SAML, encryption (public/private key, symmetric, asymmetric), token validation, and SSO.
- Familiarity with security strategies to optimize performance while maintaining compliance (e.g., SOC 2).
- Willingness to participate in an on-call rotation and respond to system emergencies 24/7 when necessary.
- Monthly weekend rotation for production patching.
- A+, MCP, and Dell certifications and Microsoft Office expertise are a plus!
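The SLO and error-budget ownership described above reduces to simple arithmetic; here is a minimal Python sketch, where the 99.9% target and traffic numbers are assumed examples rather than figures from the posting.

```python
# Error-budget arithmetic behind an availability SLO; the 99.9% target
# and request counts below are illustrative assumptions.
SLO_TARGET = 0.999  # 99.9% of requests should succeed over the window

def error_budget_remaining(total_requests: int, failed_requests: int) -> float:
    """Fraction of the error budget still unspent (negative = SLO breached)."""
    allowed_failures = total_requests * (1 - SLO_TARGET)
    return 1 - failed_requests / allowed_failures if allowed_failures else 0.0

# Example: 10M requests with 4,000 failures. The budget allows 10,000
# failures, so 60% of the budget is still available for risky releases.
print(error_budget_remaining(10_000_000, 4_000))  # 0.6
```

In practice the remaining budget gates release velocity: when it approaches zero, the team shifts effort from features to reliability work.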

Posted 1 week ago

Apply

15.0 years

2 - 8 Lacs

Hyderābād

On-site

Job Description: Lead Software Engineer - Enterprise Solutions & Transformation

We are seeking an accomplished Lead Software Engineer with 15+ years of experience in IT and software development to architect, modernize, and deliver robust enterprise solutions. You will drive the transformation of legacy applications to modern cloud-native architectures, build and integrate scalable platforms, and champion best practices in DevOps, observability, and cross-functional collaboration. This technical leadership role is ideal for innovators passionate about enabling business agility through technology modernization and integration.

Roles and Responsibilities
- Architect, design, develop, test, and document enterprise-grade software solutions, aligning with business needs, quality standards, and operational requirements.
- Lead transformation and modernization efforts: evaluate and migrate legacy systems to modern, scalable, and maintainable architectures leveraging cloud-native technologies and microservices.
- Engineer integration solutions with platforms such as Apache Kafka, MuleSoft, and other middleware or messaging technologies to support seamless enterprise connectivity.
- Define and implement end-to-end architectures for both new and existing systems, ensuring scalability, security, performance, and maintainability.
- Collaborate with Solution and Enterprise Architects and portfolio stakeholders to analyze, plan, and realize features, enablers, and modernization roadmaps.
- Work closely with infrastructure engineers to provision, configure, and optimize cloud resources, especially within Azure (AKS, Cosmos DB, Event Hub).
- Champion containerization and orchestration using Docker and Azure Kubernetes Service (AKS) for efficient deployment and scaling.
- Drive observability: define and implement system monitoring, logging, and alerting strategies using tools such as Prometheus, Grafana, and the ELK Stack (a sketch follows this listing).
- Lead and participate in code and documentation reviews to uphold quality and engineering excellence.
- Mentor and coach engineers and developers, fostering technical growth and knowledge sharing.
- Troubleshoot and resolve complex issues across application, integration, and infrastructure layers.
- Advocate and implement modern DevOps practices: build and maintain robust CI/CD pipelines, Infrastructure-as-Code, and automated deployments.
- Continuously evaluate and adopt new tools, technologies, and processes to improve system quality, delivery, and operational efficiency.
- Translate business needs and legacy constraints into actionable technical requirements and provide accurate estimates for both new builds and modernization projects.
- Ensure NFRs (scalability, security, availability, performance) are defined, implemented, and maintained across all solutions.
- Collaborate cross-functionally with DevOps, support, and peer teams to ensure operational excellence and smooth transformation initiatives.

Required Qualifications
- Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
- 15+ years of experience in IT and software development roles, with a track record of delivering enterprise-scale solutions.
- 5+ years of hands-on experience building Java-based, high-volume/high-transaction applications.
- 5+ years of experience with Java, Spring, and RESTful API development.
- 3+ years of experience in modernizing legacy applications or leading transformation initiatives.
- 3+ years of experience in performance tuning, application monitoring, and troubleshooting.
- 3+ years of experience with integration platforms (Kafka, MuleSoft, RabbitMQ, etc.).
- 2+ years of experience architecting solutions and leading technical design for enterprise systems.
- Experience working with container orchestration, especially Azure Kubernetes Service (AKS).

Preferred Qualifications
- 3+ years of experience in microservices architecture and system design.
- 3+ years in technical leadership or mentoring roles.
- 3+ years hands-on with cloud platforms (Azure, AWS, GCP, OpenStack).
- Experience with cloud resource provisioning (ARM templates, Terraform, Ansible, Chef).
- Strong DevOps skills: CI/CD pipelines with GitHub, Maven, Jenkins, Nexus, SonarQube.
- Advanced knowledge of observability (Prometheus, Grafana, ELK).
- Proficiency in Unix/Linux command line and shell scripting.
- Expert in asynchronous messaging, stream processing, and event-driven architectures.
- Experience in Agile/Scrum/Kanban environments.
- Familiarity with front-end technologies (HTML5, JavaScript frameworks, CSS3).
- Certifications in Java, Spring, Azure, or relevant integration/cloud technologies.
- Excellent communication skills for both technical and business audiences.

Technical Skills
- Languages & Frameworks: Java, Groovy, Spring (Boot, Cloud), REST
- Integration & Messaging: Kafka, MuleSoft, RabbitMQ, MQ, Redis, Hazelcast
- Legacy Modernization: refactoring, rearchitecting, and migrating monolithic or legacy applications to modern platforms
- Databases: NoSQL (Cassandra, Cosmos DB), SQL
- Monitoring & Observability: Prometheus, Grafana, ELK Stack
- Orchestration: Docker, AKS (Azure Kubernetes Service)
- Cloud Platforms: Azure (Event Hub, Cosmos DB, AKS), AWS, GCP, OpenStack
- IaC & DevOps: Terraform, Ansible, Chef, Jenkins, Maven, Nexus, SonarQube, Git, Jira
- Scripting & Front-End: Node.js, React.js, Python, R

Why Join Us?
- Lead modernization and transformation of critical business systems to future-ready cloud architectures.
- Architect and deliver enterprise-scale, highly integrated, observable solutions.
- Mentor and inspire a talented engineering team.
- Shape the organization's technical direction in cloud, integration, and DevOps.
- Thrive in a collaborative, innovative, and growth-focused environment.
- Enjoy competitive compensation and opportunities for career advancement.

Weekly Hours: 40
Time Type: Regular
Location: IND:AP:Hyderabad / Argus Bldg 4f & 5f, Sattva, Knowledge City - Adm: Argus Building, Sattva, Knowledge City

It is the policy of AT&T to provide equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state or local law. In addition, AT&T will provide reasonable accommodations for qualified individuals with disabilities. AT&T is a fair chance employer and does not initiate a background check until an offer is made.
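The observability strategy named in the responsibilities above starts with instrumented services that Prometheus can scrape and Grafana can chart. A minimal Python sketch using the prometheus_client library follows; the metric names and port are illustrative assumptions.

```python
# Minimal service instrumentation for Prometheus scraping; metric and
# port choices are illustrative assumptions.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled")
LATENCY = Histogram("app_request_seconds", "Request latency in seconds")

@LATENCY.time()  # records each call's duration into the histogram
def handle_request() -> None:
    REQUESTS.inc()
    time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work

if __name__ == "__main__":
    start_http_server(8000)  # exposes /metrics for Prometheus to scrape
    while True:
        handle_request()
```

Prometheus scrapes the /metrics endpoint on a fixed interval; Grafana dashboards and alert rules are then built from the resulting time series.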

Posted 1 week ago

Apply

4.0 years

6 - 9 Lacs

Hyderābād

On-site

About the Role:
Grade Level (for internal use): 10

The Team: The team is responsible for the development of tools to collect data from various sources; it is the backbone of all the data being presented to clients. The team is responsible for modernizing and migrating the internal platform utilizing the latest technologies.

The Impact: As a Software Developer III, you will be part of a development team that manages multi-terabyte data using the latest web, cloud and big data technologies. You will be part of a heavily data-intensive environment.

What's in it for you: It's a fast-paced agile environment that deals with huge volumes of data, so you'll have an opportunity to sharpen your data skills and work on an emerging technology stack.

Responsibilities:
- Design and implement software components for data processing systems.
- Perform analysis and articulate solutions.
- Design underlying engineering for use in multiple product offerings supporting a large volume of end-users.
- Develop project plans with task breakdowns and estimates.
- Manage and improve existing solutions.
- Solve a variety of complex problems and figure out possible solutions, weighing the costs and benefits.

What We're Looking For:

Basic Qualifications:
- B.S. in Computer Science or equivalent
- 4+ years of relevant experience
- Expert in OOP, .NET and C# concepts
- Expert in server-side programming using ASP.NET or Python
- Experience implementing web services (with WCF, RESTful JSON, SOAP, TCP)
- Experience with big data platforms such as Apache Spark
- Proficient with software development lifecycle (SDLC) methodologies like Scaled Agile and test-driven development
- Good experience developing solutions involving relational database technologies on the SQL Server platform, with stored procedure programming experience using Transact-SQL
- Passionate, smart, and articulate developer
- Able to work well individually and with a team
- Strong problem-solving skills
- Good work ethic, self-starter, and results-oriented

Preferred Qualifications:
- Experience working in cloud computing environments such as AWS
- Experience with large-scale messaging systems

What's In It For You?

Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all: from finding new ways to measure sustainability, to analyzing energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people; that's why we provide everything you and your career need to thrive at S&P Global. Our benefits include:
- Health & Wellness: health care coverage designed for the mind and body.
- Flexible Downtime: generous time off helps keep you energized for your time on.
- Continuous Learning: access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: it's not just about you; S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: from retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domain, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, "pre-employment training" or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.

Equal Opportunity Employer: S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group), SWP Priority - Ratings - (Strategic Workforce Planning)
Job ID: 318313
Posted On: 2025-07-28
Location: Hyderabad, Telangana, India

Posted 1 week ago

Apply

8.0 - 13.0 years

7 - 8 Lacs

Hyderābād

On-site

Join Amgen's Mission of Serving Patients

At Amgen, if you feel like you're part of something bigger, it's because you are. Our shared mission, to serve patients living with serious illnesses, drives all that we do. Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas (Oncology, Inflammation, General Medicine, and Rare Disease) we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lie within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

Sr Data Engineer

What you will do

Let's do this. Let's change the world. In this vital role you will be responsible for "Run" and "Build" project portfolio execution, and will collaborate with business partners and other IS service leads to deliver IS capability and a roadmap in support of business strategy and goals. Real-world data analytics, visualization, and advanced technology play a vital role in supporting Amgen's industry-leading, innovative Real World Evidence approaches. The role is responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and implementing data governance initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.

Roles & Responsibilities:
- Design, develop, and maintain data solutions for data generation, collection, and processing
- Be a key team member that assists in design and development of the data pipeline
- Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions
- Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks
- Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs
- Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency
- Implement data security and privacy measures to protect sensitive data
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
- Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines to meet fast-paced business needs across geographic regions
- Identify and resolve complex data-related challenges
- Adhere to best practices for coding, testing, and designing reusable code/components
- Explore new tools and technologies that will help to improve ETL platform performance
- Participate in sprint planning meetings and provide estimations on technical implementation
- Collaborate and communicate effectively with product teams

What we expect of you

We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Master's degree / Bachelor's degree and 8 to 13 years of experience in Computer Science, IT or a related field

Must-Have Skills:
- Hands-on experience with big data technologies and platforms, such as Databricks and Apache Spark (PySpark, SparkSQL); workflow orchestration; and performance tuning on big data processing
- Hands-on experience with various Python/R packages for EDA, feature engineering, and machine learning model training (a sketch follows this listing)
- Proficiency in data analysis tools (e.g., SQL) and experience with data visualization tools
- Excellent problem-solving skills and the ability to work with large, complex datasets
- Strong understanding of data governance frameworks, tools, and standard processes
- Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA)

Preferred (Good-to-Have) Skills:
- Experience with ETL tools such as Apache Spark, and various Python packages related to data processing and machine learning model development
- Strong understanding of data modeling, data warehousing, and data integration concepts
- Knowledge of Python/R, Databricks, SageMaker, OMOP

Professional Certifications:
- Certified Data Engineer / Data Analyst (preferred on Databricks or cloud environments)
- Certified Data Scientist (preferred on Databricks or cloud environments)
- Machine Learning Certification (preferred on Databricks or cloud environments)
- SAFe for Teams certification (preferred)

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Strong communication and collaboration skills
- Demonstrated awareness of how to function in a team setting
- Demonstrated presentation skills

What you can expect of us

As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.

Apply now and make a lasting impact with the Amgen team. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
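As a small illustration of the EDA and feature-engineering skills listed above, here is a pandas sketch; the dataset path and column names are hypothetical, not from the posting.

```python
# Small EDA / feature-engineering sketch with pandas; the CSV path and
# column names are illustrative assumptions.
import pandas as pd

df = pd.read_csv("patients.csv")  # hypothetical real-world dataset

# EDA: quick profile of completeness and distributions.
print(df.isna().mean().sort_values(ascending=False))  # null rate per column
print(df.describe(include="all"))

# Feature engineering: derive model-ready columns from a raw field.
df["age_band"] = pd.cut(df["age"], bins=[0, 18, 40, 65, 120],
                        labels=["child", "young", "middle", "senior"])
df = pd.get_dummies(df, columns=["age_band"], drop_first=True)
```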

Posted 1 week ago

Apply

2.0 - 6.0 years

7 - 8 Lacs

Hyderābād

On-site

Join Amgen's Mission of Serving Patients

At Amgen, if you feel like you're part of something bigger, it's because you are. Our shared mission, to serve patients living with serious illnesses, drives all that we do. Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas (Oncology, Inflammation, General Medicine, and Rare Disease) we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lie within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

Associate Data Engineer

What you will do

Let's do this. Let's change the world. In this vital role we seek a skilled Data Engineer to build and optimize our data infrastructure. As a key contributor, you will collaborate closely with cross-functional teams to design and implement robust data pipelines that efficiently extract, transform, and load data into our AWS-based data lake and data warehouse. Your expertise will be instrumental in empowering data-driven decision making through advanced analytics and predictive modeling.

Roles & Responsibilities:
- Building and optimizing data pipelines, data warehouses, and data lakes on the AWS and Databricks platforms
- Managing and maintaining the AWS and Databricks environments
- Ensuring data integrity, accuracy, and consistency through rigorous quality checks and monitoring
- Maintaining system uptime and optimal performance
- Working closely with cross-functional teams to understand business requirements and translate them into technical solutions
- Exploring and implementing new tools and technologies to enhance ETL platform performance

What we expect of you

We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Bachelor's degree and 2 to 6 years of experience

Must-Have Skills:
- Proficient in SQL for extracting, transforming, and analyzing complex datasets from both relational and columnar data stores
- Proven ability to optimize query performance on big data platforms
- Proficient in leveraging Python, PySpark, and Airflow to build scalable and efficient data ingestion, transformation, and loading processes (a DAG sketch follows this listing)
- Ability to learn new technologies quickly
- Strong problem-solving and analytical skills
- Excellent communication and teamwork skills

Good-to-Have Skills:
- Experience with SQL/NoSQL databases and vector databases for large language models
- Experience with data modeling and performance tuning for both OLAP and OLTP databases
- Experience with Apache Spark and Apache Airflow
- Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps
- Experience with AWS, GCP or Azure cloud services

What you can expect of us

As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.

Apply now and make a lasting impact with the Amgen team. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
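The posting pairs Python and PySpark with Airflow for orchestration; a minimal Airflow DAG sketch follows, with the DAG id, schedule, and task bodies as illustrative assumptions.

```python
# Minimal Airflow DAG sketch for a daily ingest -> transform pipeline;
# DAG id, schedule, and task logic are illustrative assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw files from the source system")

def transform_load():
    print("clean records and load them into the warehouse")

with DAG(
    dag_id="daily_ingest",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="transform_load",
                               python_callable=transform_load)
    extract_task >> load_task  # run order: extract, then transform/load
```

The `>>` operator declares the dependency graph; Airflow handles scheduling, retries, and backfills around it.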

Posted 1 week ago

Apply

1.5 - 2.0 years

0 Lacs

India

On-site

Qualification:

Education: Bachelor's degree in any field.

Experience: Minimum 1.5-2 years of experience in data engineering support or a related role, with hands-on exposure to AWS.

Technical Skills:
- Strong understanding of AWS services, including but not limited to S3, EC2, CloudWatch, and IAM (a sketch follows this listing).
- Proficiency in SQL with the ability to write, optimize, and debug queries for data analysis and issue resolution.
- Hands-on experience with Python for scripting and automation; familiarity with shell scripting is a plus.
- Good understanding of ETL processes and data pipelines.
- Exposure to data warehousing concepts; experience with Amazon Redshift or similar platforms preferred.
- Working knowledge of orchestration tools, especially Apache Airflow, including monitoring and basic troubleshooting.

Soft Skills:
- Strong communication and interpersonal skills for effective collaboration with cross-functional and multi-cultural teams.
- Problem-solving attitude with an eagerness to learn and adapt quickly.
- Willingness to work in a 24x7 support environment on a 6-day working schedule, with rotational shifts as required.

Language Requirements: Must be able to read and write in English proficiently.
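A typical task in this kind of support role is verifying that an expected S3 partition landed before a downstream Airflow load runs. A small boto3 sketch follows; the bucket, prefix, and region are assumptions for illustration.

```python
# Support-style health check with boto3: verify yesterday's data landed
# in S3. Bucket, prefix, and region are illustrative assumptions.
from datetime import date, timedelta

import boto3

s3 = boto3.client("s3", region_name="ap-south-1")

def partition_has_data(bucket: str, prefix: str) -> bool:
    """True if at least one object exists under the given prefix."""
    resp = s3.list_objects_v2(Bucket=bucket, Prefix=prefix, MaxKeys=1)
    return resp.get("KeyCount", 0) > 0

yesterday = (date.today() - timedelta(days=1)).isoformat()
ok = partition_has_data("example-data-lake", f"raw/orders/dt={yesterday}/")
print("partition present" if ok else "ALERT: missing partition", yesterday)
```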

Posted 1 week ago

Apply

0 years

4 - 9 Lacs

Hyderābād

On-site

AI-First. Future-Driven. Human-Centered.

At OpenText, AI is at the heart of everything we do: powering innovation, transforming work, and empowering digital knowledge workers. We're hiring talent that AI can't replace to help us shape the future of information management. Join us.

Your Impact:
An OpenText Content Server Consultant is responsible for the technical delivery of xECM-based solutions. Such delivery activities encompass development, testing, deployment and documentation of specific software components, either providing extensions to specific items of core product functionality or implementing specific system integration components. This role has a heavy deployment and administration emphasis. Engagements are usually long term, but relatively short ones requiring only specific services, like an upgrade or a migration, also happen. The nature of work may include full application lifecycle activities right from development, deployment/provisioning, testing, migration, decommissioning and ongoing run & maintain (upgrades, patching, etc.) support. The role is customer-facing and requires excellent interpersonal skills, with the ability to communicate to a wide range of stakeholders (internally and externally), both verbally and in writing.

What the Role offers:
Work within an OpenText technical delivery team in order to:
- Participate and contribute to deployment activities.
- Participate in the day-to-day administration of the systems, including incident and problem management.
- Participate in planning and execution of new implementations, upgrades and patching activities.
- Participate in the advanced configuration of ECM software components, in line with project and customer timescales.
- Actively contribute to automating provisioning, patching and upgrade activities where possible to achieve operational efficiencies.
- Perform code reviews and periodic quality checks to ensure delivery quality is maintained.
- Prepare, maintain and submit activity/progress reports and time recording/management reports in accordance with published procedures.
- Keep project managers informed of activities and alert them to any issues promptly.
- Provide inputs as part of engagement closure on project learnings and suggest improvements.
- Utilize exceptional written and verbal communication skills while supporting customers via web, telephone, or email, demonstrating a high level of customer focus and empathy.
- Respond to and solve customer technical requests, showing an understanding of the customer's managed hosted environment and applications within OpenText, enabling resolution of complex technical issues.
- Document or implement proposed solutions.
- Respond to and troubleshoot alerts from monitoring of applications, servers and devices, sufficient to meet service level agreements.
- Collaborate on cross-team and cross-product technical issues with a variety of resources, including Product Support, IT, and Professional Services.

What you need to succeed:
- Well versed in deployment, administration and troubleshooting of the OpenText xECM platform and surrounding components (Content Server, Archive Center, Brava, OTDS, Search & Indexing) and integrations with SAP, SuccessFactors, and Salesforce.
- Experience working in an ITIL-aligned service delivery organisation.
- Knowledge of Windows, UNIX, and application administration skills in a TCP/IP networked environment.
- Experience working with relational DBMSs (PostgreSQL/Postgres, Oracle, MS SQL Server, MySQL); able to independently construct moderate-complexity SQL without guidance.
- Programming/scripting experience is highly desirable (e.g., OScript, Java, JavaScript, PowerShell, Bash).
- Familiarity with configuration and management of web/application servers (IIS, Apache, Tomcat, JBoss, etc.).
- Good understanding of object-oriented programming, web services, and LDAP configuration.
- Experience installing and configuring xECM in HA, and knowledge of DR setup/drills.
- Experience in patching, major upgrades and data migration activities.

Candidate should possess:
- Team player
- Customer focus and alertness
- Attention to detail
- Always learning
- Critical thinking
- Highly motivated
- Good written and oral communication
- Knowledge sharing, blogs

OpenText's efforts to build an inclusive work environment go beyond simply complying with applicable laws. Our Employment Equity and Diversity Policy provides direction on maintaining a working environment that is inclusive of everyone, regardless of culture, national origin, race, color, gender, gender identification, sexual orientation, family status, age, veteran status, disability, religion, or other basis protected by applicable laws. If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please submit a ticket at Ask HR. Our proactive approach fosters collaboration, innovation, and personal growth, enriching OpenText's vibrant workplace.

Posted 1 week ago

Apply

5.0 - 9.0 years

7 - 8 Lacs

Hyderābād

On-site

Join Amgen's Mission of Serving Patients

At Amgen, if you feel like you're part of something bigger, it's because you are. Our shared mission, to serve patients living with serious illnesses, drives all that we do. Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas (Oncology, Inflammation, General Medicine, and Rare Disease) we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lie within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

Data Engineer

What you will do

Let's do this. Let's change the world. In this vital role you will be responsible for "Run" and "Build" project portfolio execution, and will collaborate with business partners and other IS service leads to deliver IS capability and a roadmap in support of business strategy and goals. Real-world data analytics, visualization, and advanced technology play a vital role in supporting Amgen's industry-leading, innovative Real World Evidence approaches. The role is responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and executing data governance initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.

Roles & Responsibilities:
- Design, develop, and maintain data solutions for data generation, collection, and processing
- Be a key team member that assists in design and development of the data pipeline
- Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions
- Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks
- Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs
- Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency
- Implement data security and privacy measures to protect sensitive data
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
- Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines to meet fast-paced business needs across geographic regions
- Identify and resolve complex data-related challenges
- Adhere to best practices for coding, testing, and designing reusable code/components
- Explore new tools and technologies that will help to improve ETL platform performance
- Participate in sprint planning meetings and provide estimations on technical implementation
- Collaborate and communicate effectively with product teams

What we expect of you

We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Master's degree / Bachelor's degree and 5 to 9 years of experience in Computer Science, IT or a related field

Must-Have Skills:
- Hands-on experience with big data technologies and platforms, such as Databricks and Apache Spark (PySpark, SparkSQL); workflow orchestration; and performance tuning on big data processing
- Hands-on experience with various Python/R packages for EDA, feature engineering, and machine learning model training
- Proficiency in data analysis tools (e.g., SQL) and experience with data visualization tools
- Excellent problem-solving skills and the ability to work with large, complex datasets
- Strong understanding of data governance frameworks, tools, and best practices
- Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA)

Preferred (Good-to-Have) Skills:
- Experience with ETL tools such as Apache Spark, and various Python packages related to data processing and machine learning model development
- Strong understanding of data modeling, data warehousing, and data integration concepts
- Knowledge of Python/R, Databricks, SageMaker, OMOP

Professional Certifications:
- Certified Data Engineer / Data Analyst (preferred on Databricks or cloud environments)
- Certified Data Scientist (preferred on Databricks or cloud environments)
- Machine Learning Certification (preferred on Databricks or cloud environments)
- SAFe for Teams certification (preferred)

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Strong communication and collaboration skills
- Demonstrated awareness of how to function in a team setting
- Demonstrated presentation skills

What you can expect of us

As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.

Apply now and make a lasting impact with the Amgen team. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

Posted 1 week ago

Apply

0 years

4 - 9 Lacs

Bengaluru

Remote

Your opportunity

Do you love the transformative impact data can have on a business? Are you motivated to push for results and overcome all obstacles? Then we have a role for you.

What you'll do
- Lead the building of scalable, fault-tolerant pipelines with built-in data quality checks that transform, load and curate data from various internal and external systems.
- Provide leadership to cross-functional initiatives and projects. Influence architecture design and decisions.
- Build cross-functional relationships with Data Scientists, Product Managers and Software Engineers to understand data needs and deliver on those needs.
- Improve engineering processes and cross-team collaboration.
- Ruthlessly prioritize work to align with company priorities.
- Provide thought leadership to grow and evolve the DE function and implement SDLC best practices in building internal-facing data products, staying up to date with industry trends, emerging technologies, and best practices in data engineering.

This role requires
- Experience in BI and Data Warehousing.
- Strong experience with dbt, Airflow and Snowflake.
- Experience with Apache Iceberg tables.
- Experience and knowledge of building data lakes in AWS (i.e., Spark/Glue, Athena), including data modeling, data quality best practices, and self-service tooling (an Athena sketch follows this listing).
- Experience mentoring data professionals from junior to senior levels.
- Demonstrated success leading cross-functional initiatives.
- Passion for data quality, code quality, SLAs and continuous improvement.
- Deep understanding of data system architecture.
- Deep understanding of ETL/ELT patterns.
- Development experience in at least one object-oriented language (Python, R, Scala, etc.).
- Comfort with SQL and related tooling.

Bonus points if you have
- Experience with observability.

Please note that visa sponsorship is not available for this position.

Fostering a diverse, welcoming and inclusive environment is important to us. We work hard to make everyone feel comfortable bringing their best, most authentic selves to work every day. We celebrate our talented Relics' different backgrounds and abilities, and recognize the different paths they took to reach us, including nontraditional ones. Their experiences and perspectives inspire us to make our products and company the best they can be. We're looking for people who feel connected to our mission and values, not just candidates who check off all the boxes. If you require a reasonable accommodation to complete any part of the application or recruiting process, please reach out to resume@newrelic.com.

We believe in empowering all Relics to achieve professional and business success through a flexible workforce model. This model allows us to work in a variety of workplaces that best support our success, including fully office-based, fully remote, or hybrid.

Our hiring process
In compliance with applicable law, all persons hired will be required to verify identity and eligibility to work and to complete employment eligibility verification. Note: Our stewardship of the data of thousands of customers means that a criminal background check is required to join New Relic. We will consider qualified applicants with arrest and conviction records based on individual circumstances and in accordance with applicable law including, but not limited to, the San Francisco Fair Chance Ordinance. Headhunters and recruitment agencies may not submit resumes/CVs through this website or directly to managers. New Relic does not accept unsolicited headhunter and agency resumes, and will not pay fees to any third-party agency or company that does not have a signed agreement with New Relic. Candidates are evaluated based on qualifications, regardless of race, religion, ethnicity, national origin, sex, sexual orientation, gender expression or identity, age, disability, neurodiversity, veteran or marital status, political viewpoint, or other legally protected characteristics. Review our Applicant Privacy Notice at https://newrelic.com/termsandconditions/applicant-privacy-policy
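For the AWS data-lake side of the role referenced above (Glue/Athena), here is a minimal boto3 sketch that runs an Athena query and polls for completion; the database name, table, and results bucket are illustrative assumptions.

```python
# Querying a data-lake table through Athena with boto3; database, table,
# and the S3 output location are illustrative assumptions.
import time

import boto3

athena = boto3.client("athena", region_name="us-east-1")

def run_query(sql: str) -> list:
    qid = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": "analytics"},
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )["QueryExecutionId"]
    while True:  # poll until the query reaches a terminal state
        status = athena.get_query_execution(QueryExecutionId=qid)
        state = status["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)
    assert state == "SUCCEEDED", f"query ended in state {state}"
    return athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]

rows = run_query("SELECT event_date, count(*) FROM page_views GROUP BY 1 LIMIT 10")
```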

Posted 1 week ago

Apply

2.0 years

0 Lacs

Bengaluru

On-site

Function: Tech
Job posted on: Jul 28, 2025
Employee Type: Permanent
Experience range (Years): 2 years - 4 years

Job Summary
We are looking for a highly skilled Big Data & ETL Tester to join our data engineering and analytics team. The ideal candidate will have strong experience in PySpark, SQL, and Python, with a deep understanding of ETL pipelines, data validation, and cloud-based testing on AWS. Familiarity with data visualization tools like Apache Superset or Power BI is a strong plus. You will work closely with our data engineering team to ensure data availability, consistency, and quality across complex data pipelines, and help transform business requirements into robust data testing frameworks.

Key Responsibilities
- Collaborate with big data engineers to validate data pipelines and ensure data integrity across ingestion, processing, and transformation stages.
- Write complex PySpark and SQL queries to test and validate large-scale datasets.
- Perform ETL testing, covering schema validation, data completeness, accuracy, transformation logic, and performance testing.
- Conduct root cause analysis of data issues using structured debugging approaches.
- Build automated test scripts in Python for regression, smoke, and end-to-end data testing (a sketch follows this listing).
- Analyze large datasets to track KPIs and performance metrics supporting business operations and strategic decisions.
- Work with data analysts and business teams to translate business needs into testable data validation frameworks.
- Communicate testing results, insights, and data gaps via reports or dashboards (Superset/Power BI preferred).
- Identify and document areas of improvement in data processes and advocate for automation opportunities.
- Maintain detailed documentation of test plans, test cases, results, and associated dashboards.

Required Skills and Qualifications
- 2+ years of experience in big data testing and ETL testing.
- Strong hands-on skills in PySpark, SQL, and Python.
- Solid experience working with cloud platforms, especially AWS (S3, EMR, Glue, Lambda, Athena, etc.).
- Familiarity with data warehouse and lakehouse architectures.
- Working knowledge of Apache Superset, Power BI, or similar visualization tools.
- Ability to analyze large, complex datasets and provide actionable insights.
- Strong understanding of data modeling concepts, data governance, and quality frameworks.
- Experience with automation frameworks and CI/CD for data validation is a plus.

Preferred Qualifications
- Experience with Airflow, dbt, or other data orchestration tools.
- Familiarity with data cataloging tools (e.g., AWS Glue Data Catalog).
- Prior experience in a product or SaaS-based company with high-data-volume environments.

Why Join Us?
- Opportunity to work with a cutting-edge data stack in a fast-paced environment.
- Collaborate with passionate data professionals driving real business impact.
- Flexible work environment with a focus on learning and innovation.
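The automated Python test scripts referenced above can be expressed as pytest cases over small PySpark DataFrames; a sketch follows, with the schema and validation rules as assumptions rather than details from the posting.

```python
# Sketch of an automated ETL test with pytest + PySpark: row counts and
# key uniqueness across a transformation. Schema and rules are assumed.
import pytest
from pyspark.sql import SparkSession

@pytest.fixture(scope="session")
def spark():
    return (SparkSession.builder.master("local[1]")
            .appName("etl-tests").getOrCreate())

def test_transform_preserves_rows_and_keys(spark):
    source = spark.createDataFrame(
        [(1, "a"), (2, "b"), (2, "b")], ["id", "val"]
    )
    # Stand-in for the pipeline under test: deduplicate on the key.
    target = source.dropDuplicates(["id"])

    # Completeness: one output row per distinct source key.
    assert target.count() == source.select("id").distinct().count()
    # Uniqueness: the business key is unique after transformation.
    assert target.count() == target.select("id").distinct().count()
```

The same pattern scales to real pipelines by swapping the inline DataFrame for a sampled read of the staging tables.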

Posted 1 week ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

This role is for one of Weekday's clients.
Location: Chennai, Pune, Kochi
Job Type: Full-time

Requirements
We are looking for a skilled and versatile Java FSD AWS Developer to join our client's Agile/SAFe development teams. In this role, you will participate in the design, development, integration, and deployment of enterprise-grade applications built on both modern cloud-native (AWS) and legacy mainframe architectures. You will ensure high-quality, testable, secure, and compliant code while collaborating in a fast-paced Agile setup.

Key Responsibilities:

Agile Participation & Code Quality
Active involvement in Scrum and SAFe team events, including planning, daily stand-ups, reviews, and retrospectives
Create and validate testable features, ensuring coverage of both functional and non-functional requirements
Deliver high-quality code through practices like Pair Programming and Test-Driven Development (TDD)
Maintain operability, deployability, and integration readiness of application increments
Ensure full compliance with internal frameworks such as PITT and established security protocols (SAST, DAST)

Development & Integration
Develop software solutions using a diverse tech stack: TypeScript, Java, SQL, Python, COBOL, shell scripting; Spring Boot, Angular, Node.js, Hibernate
Work across multiple environments and technologies including Linux, Apache, Tomcat, Elasticsearch, IBM DB2
Build and maintain web applications, backend services, and APIs using modern and legacy technologies

AWS & Cloud Infrastructure
Hands-on development and deployment with AWS services: EKS, ECR, IAM, SQS, SES, S3, CloudWatch
Develop Infrastructure as Code using Terraform
Ensure system reliability, monitoring, and traceability using tools like Splunk, UXMon, and AWS CloudWatch

Systems & Batch Integration
Work with Kafka, particularly Streamzilla Kafka from PAG, for high-throughput messaging (see the sketch after this listing)
Design and consume both REST and SOAP APIs for integration with third-party and internal systems
Manage and automate batch job scheduling via IBM Tivoli Workload Scheduler (TWS/OPC) and HostJobs

Required Skills & Experience:
5+ years of experience in full stack development, DevOps, and mainframe integration
Strong programming experience in:
Languages: TypeScript, Java, Python, COBOL, shell scripting
Frameworks & Tools: Angular, Spring Boot, Hibernate, Node.js
Databases: SQL, IBM DB2, Elasticsearch
Proficient in AWS cloud services including container orchestration, IAM, S3, CloudWatch, SES, SQS, and Terraform
Strong understanding of API development and integration (REST & SOAP)
Experience in secure software development using SAST/DAST, TDD, and compliance frameworks (e.g., PITT)
Familiarity with Kafka messaging systems, particularly Streamzilla Kafka
Monitoring and observability experience using tools like Splunk, UXMon, or equivalents

Preferred Qualifications:
Experience with PCSS Toolbox or similar enterprise tooling
Prior exposure to highly regulated industries (e.g., automotive, banking, insurance)
Bachelor's or Master's degree in Computer Science, Information Technology, or related fields
Certifications in AWS or DevOps tools are a plus
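For flavor, a hedged sketch of the high-throughput messaging pattern named above. Streamzilla is PAG's Kafka-compatible platform, so a generic Kafka producer illustrates the idea; the broker addresses, topic, and payload below are purely illustrative, and Streamzilla-specific configuration would differ.

```python
from confluent_kafka import Producer

# Generic Kafka producer; brokers, topic, and auth config are illustrative only.
producer = Producer({"bootstrap.servers": "broker1:9092,broker2:9092"})

def on_delivery(err, msg):
    # Called once per message after the broker acknowledges (or rejects) it.
    if err is not None:
        print(f"Delivery failed: {err}")
    else:
        print(f"Delivered to {msg.topic()}[{msg.partition()}] @ {msg.offset()}")

# Keyed messages preserve per-key ordering within a partition.
producer.produce(
    "orders.events",
    key="order-42",
    value=b'{"status":"SHIPPED"}',
    callback=on_delivery,
)
producer.flush()  # Block until all queued messages are delivered.
```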

Posted 1 week ago

Apply

12.0 years

0 Lacs

Noida

On-site

Our Company
Changing the world through digital experiences is what Adobe's all about. We give everyone, from emerging artists to global brands, everything they need to design and deliver exceptional digital experiences! We're passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen.
We're on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours!

Role Summary
Digital Experience (DX) (https://www.adobe.com/experience-cloud.html) is a USD 4B+ business serving the needs of enterprise businesses, including 95%+ of Fortune 500 organizations. Adobe Experience Manager, within Adobe DX, is the world's largest CMS platform: a solution that helps enterprises create, manage, and deliver digital experiences across channels like websites, mobile apps, and digital signage. According to a Forrester report, Experience Manager is the most robust CMS on the market. More than 128,000 websites rely on the agile setup of Experience Manager to manage their content. We are looking for strong and passionate engineers/managers to join our team as we scale the business by building next-gen products and adding customer value to our existing offerings. If you're passionate about innovative technology, then we would be excited to talk to you!

What you'll do
Mentor and guide a high-performing engineering team to deliver outstanding results
Lead the technical design, vision, and implementation strategy for next-gen multi-cloud services
Partner with global leaders to help craft product architecture, roadmap, and release plans
Drive strategic decisions ensuring successful project delivery and high code quality
Apply standard methodologies and coding patterns to develop maintainable and modular solutions
Optimize team efficiency through innovative engineering processes and teamwork models
Attract, hire, and retain top talent while encouraging a positive, collaborative culture
Lead discussions on emerging industry technologies and influence product direction

What you need to succeed
12+ years of experience in software development with a proven leadership track record, including at least 3 years as a manager leading a team of high-performing full stack engineers
Proficiency in Java/JSP for backend development and experience with frontend technologies like React, Angular, or jQuery
Experience with cloud platforms such as AWS or Azure
Proficiency in version control, CI/CD pipelines, and DevOps practices
Familiarity with Docker, Kubernetes, and Infrastructure as Code tools
Experience with WebSockets or event-driven architectures
Deep understanding of modern software architecture, including microservices and API-first development
Proven usage of AI/GenAI engineering productivity tools like GitHub Copilot and Cursor
Practical experience with Python would be helpful
Exposure to open-source contribution models for Apache, Linux Foundation projects, or other third-party frameworks would be an added advantage
Strong problem-solving, analytical, and decision-making skills
Excellent communication, collaboration, and management skills
Passion for high-quality software and improving engineering processes
BS/MS or equivalent experience in Computer Science or a related field

Adobe is proud to be an Equal Employment Opportunity employer. We do not discriminate based on gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law.
Adobe aims to make Adobe.com accessible to any and all users. If you have a disability or special need that requires accommodation to navigate our website or complete the application process, email accommodations@adobe.com or call (408) 536-3015.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Overview:
TekWissen is a global workforce management provider with operations throughout India and many other countries in the world. The client below is a global company with shared ideals and a deep sense of family. From our earliest days as a pioneer of modern transportation, we have sought to make the world a better place – one that benefits lives, communities and the planet.

Job Title: Software Engineer Senior
Location: Chennai
Work Type: Hybrid

Position Description:
As part of the client's DP&E Platform Observability team, you'll help build a top-tier monitoring platform focused on latency, traffic, errors, and saturation (a small instrumentation sketch follows this listing). You'll design, develop, and maintain a scalable, reliable platform, improving MTTR/MTTX, creating dashboards, and optimizing costs. Experience with large systems, monitoring tools (Prometheus, Grafana, etc.), and cloud platforms (AWS, Azure, GCP) is ideal. The focus is a centralized observability source for data-driven decisions and faster incident response.

Skills Required: Spring Boot, Angular, Cloud Computing
Skills Preferred: Google Cloud Platform (BigQuery, Dataflow, Dataproc, Data Fusion, Cloud SQL), Terraform, Tekton, Airflow, Postgres, PySpark, Python, APIs

Experience Required:
5+ years of overall experience with proficiency in Java, Angular, or another JavaScript technology, with experience designing and deploying cloud-based data pipelines and microservices using GCP tools like BigQuery, Dataflow, and Dataproc.
Ability to leverage best-in-class data platform technologies (Apache Beam, Kafka, ...) to deliver platform features, and design and orchestrate platform services to deliver data platform capabilities.
Service-Oriented Architecture and Microservices: strong understanding of SOA, microservices, and their application within a cloud data platform context. Develop robust, scalable services using Java Spring Boot, Python, Angular, and GCP technologies.
Full-Stack Development: knowledge of front-end and back-end technologies, enabling collaboration on data access and visualization layers (e.g., React, Node.js). Design and develop RESTful APIs for seamless integration across platform services. Implement robust unit and functional tests to maintain high standards of test coverage and quality.
Database Management: experience with relational (e.g., PostgreSQL, MySQL) and NoSQL databases, as well as columnar databases like BigQuery.
Data Governance and Security: understanding of data governance frameworks and implementing RBAC, encryption, and data masking in cloud environments.
CI/CD and Automation: familiarity with CI/CD pipelines, Infrastructure as Code (IaC) tools like Terraform, and automation frameworks. Manage code changes with GitHub and troubleshoot and resolve application defects efficiently. Ensure adherence to SDLC best practices, independently managing feature design, coding, testing, and production releases.
Problem-Solving: strong analytical skills with the ability to troubleshoot complex data platform and microservices issues.

Experience Preferred: GCP Data Engineer, GCP Professional Cloud
Education Required: Bachelor's Degree

TekWissen® Group is an equal opportunity employer supporting workforce diversity.
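To make the four golden signals concrete (latency, traffic, errors, saturation), a minimal Python sketch using the prometheus_client library; the metric names and the simulated request handler are hypothetical, not part of the posting.

```python
import random
import time

from prometheus_client import Counter, Gauge, Histogram, start_http_server

# Hypothetical metrics, one per golden signal.
REQUESTS = Counter("app_requests_total", "Traffic: total requests", ["route"])
ERRORS = Counter("app_errors_total", "Errors: failed requests", ["route"])
LATENCY = Histogram("app_request_seconds", "Latency: request duration", ["route"])
IN_FLIGHT = Gauge("app_in_flight_requests", "Saturation: concurrent requests")

def handle(route: str) -> None:
    REQUESTS.labels(route).inc()
    IN_FLIGHT.inc()
    try:
        with LATENCY.labels(route).time():
            time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work
            if random.random() < 0.05:
                raise RuntimeError("simulated failure")
    except RuntimeError:
        ERRORS.labels(route).inc()
    finally:
        IN_FLIGHT.dec()

if __name__ == "__main__":
    start_http_server(8000)  # Prometheus scrapes http://localhost:8000/metrics
    while True:
        handle("/checkout")
```

Grafana dashboards and MTTR-oriented alerts are then built from exactly these series.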

Posted 1 week ago

Apply

0 years

0 Lacs

Ahmedabad

On-site

Lead and mentor a team of Linux system administrators, assigning tasks and monitoring performance.
Design, deploy, maintain, and optimize Linux-based infrastructure (RedHat, CentOS, Oracle Linux, Ubuntu).
Manage critical services such as Apache, Nginx, MySQL/MariaDB, etc.
Configure and maintain monitoring tools (e.g., Nagios, Zabbix, Prometheus, Grafana).
Implement and enforce security practices: patching, hardening, firewalls (iptables/nftables), SELinux.
Oversee backup and disaster recovery processes.
Plan and execute migrations, upgrades, and performance tuning.
Collaborate with cross-functional teams (Network, DevOps, Development) to support infrastructure needs.
Define and document policies, procedures, and best practices.
Respond to incidents and lead root cause analysis for system outages or degradations.
Maintain uptime and SLAs for production environments.
Experience with virtualization (KVM, VMware, Proxmox) and cloud platforms (AWS, GCP, Azure, or private cloud).
Solid understanding of TCP/IP, DNS, DHCP, VPN, and other network services.
Hands-on experience working on firewalls like Sophos and FortiGate.
Strong problem-solving and incident management skills.

Job Types: Full-time, Permanent
Application Question(s): What is your Notice Period?
Work Location: In person
Speak with the employer: +91 7490078248

Posted 1 week ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Position Summary...
Demonstrates up-to-date expertise and applies this to the development, execution, and improvement of action plans by providing expert advice and guidance to others in the application of information and best practices; supporting and aligning efforts to meet customer and business needs; and building commitment for perspectives and rationales.
Provides and supports the implementation of business solutions by building relationships and partnerships with key stakeholders; identifying business needs; determining and carrying out necessary processes and practices; monitoring progress and results; recognizing and capitalizing on improvement opportunities; and adapting to competing demands, organizational changes, and new responsibilities.
Models compliance with company policies and procedures and supports company mission, values, and standards of ethics and integrity by incorporating these into the development and implementation of business plans; using the Open Door Policy; and demonstrating and assisting others with how to apply these in executing business processes and practices.

What you'll do...

About Team:
The Marketplace Engineering team is at the forefront of building core platforms and services that enable Walmart to deliver vast selection at competitive prices, with a best-in-class post-order experience, by enabling third-party sellers to list, sell, and manage their products on walmart.com. We do this by managing the entire seller lifecycle, monitoring customer experience, and delivering high-value insights to our sellers to help them plan their assortment, pricing, and inventory. The team also actively collaborates with partner platform teams to ensure we continue to deliver the best experience to our sellers and our customers. This role will be focused on the Marketplace.

What you will do:
As a Software Engineer III for Walmart, you'll have the opportunity to:
Develop intuitive software that meets and exceeds the needs of the customer and the company, and collaborate with team members to develop best practices and requirements for the software.
Professionally maintain all code and create regular updates to address the customer's and company's concerns.
Analyze and test programs/products before formal launch to ensure flawless performance.
Troubleshoot coding problems quickly and efficiently, growing your skills in a high-pace, high-impact environment.
Develop programs that monitor the sharing of private information, since software security is of prime importance.
Seek ways to improve the software and its effectiveness.
Adhere to company policies, procedures, mission, values, and standards of ethics and integrity.

What you will bring:
B.E./B.Tech/MS/MCA in Computer Science or related technical field.
Minimum 2+ years of object-oriented programming experience in Java.
Excellent computer systems fundamentals, DS/algorithms, and problem-solving skills.
Hands-on experience in building web-based Java EE services/applications and Kafka, Apache Camel, RESTful web services, Spring, Hibernate, and caching, with Splunk for monitoring.
Excellent organisation, communication and interpersonal skills.
Large-scale distributed services experience, including scalability and fault tolerance.
Exposure to cloud infrastructure, such as OpenStack, Azure, GCP, or AWS.
Exposure to build, CI/CD and deployment pipelines and related technologies like Kubernetes, Docker, Jenkins, etc.
A continuous drive to explore, improve, enhance, automate and optimize systems and tools.
Experience in systems design and distributed systems.
Exposure to SQL/NoSQL data stores like Cassandra, Elastic, Mongo, etc.

About Walmart Global Tech
Imagine working in an environment where one line of code can make life easier for hundreds of millions of people. That's what we do at Walmart Global Tech. We're a team of software engineers, data scientists, cybersecurity experts and service professionals within the world's leading retailer who make an epic impact and are at the forefront of the next retail disruption. People are why we innovate, and people power our innovations. We are people-led and tech-empowered. We train our team in the skillsets of the future and bring in experts like you to help us grow. We have roles for those chasing their first opportunity as well as those looking for the opportunity that will define their career. Here, you can kickstart a great career in tech, gain new skills and experience for virtually every industry, or leverage your expertise to innovate at scale, impact millions and reimagine the future of retail.

Flexible, hybrid work
We use a hybrid way of working, with primary in-office presence coupled with an optimal mix of virtual presence. We use our campuses to collaborate and be together in person, as business needs require and for development and networking opportunities. This approach helps us make quicker decisions, remove location barriers across our global team, and be more flexible in our personal lives.

Benefits
Beyond our great compensation package, you can receive incentive awards for your performance. Other great perks include a host of best-in-class benefits: maternity and parental leave, PTO, health benefits, and much more.

Belonging
We aim to create a culture where every associate feels valued for who they are, rooted in respect for the individual. Our goal is to foster a sense of belonging, to create opportunities for all our associates, customers and suppliers, and to be a Walmart for everyone. At Walmart, our vision is "everyone included." By fostering a workplace culture where everyone is, and feels, included, everyone wins. Our associates and customers reflect the makeup of all 19 countries where we operate. By making Walmart a welcoming place where all people feel like they belong, we're able to engage associates, strengthen our business, improve our ability to serve customers, and support the communities where we operate.

Equal opportunity employer
Walmart, Inc., is an equal opportunity employer, by choice. We believe we are best equipped to help our associates, customers and the communities we serve live better when we really know them. That means understanding, respecting and valuing unique styles, experiences, identities, ideas and opinions, while being inclusive of all people.

Minimum Qualifications...
Outlined below are the required minimum qualifications for this position. If none are listed, there are no minimum qualifications.
Option 1: Bachelor's degree in computer science, information technology, engineering, information systems, cybersecurity, or related area and 2 years' experience in software engineering or related area at a technology, retail, or data-driven company.
Option 2: 4 years' experience in software engineering or related area at a technology, retail, or data-driven company.

Preferred Qualifications...
Outlined below are the optional preferred qualifications for this position. If none are listed, there are no preferred qualifications.
Certification in Security+, Network+, GISF, GSEC, CISSP, or CCSP; Master's degree in Computer Science, Information Technology, Engineering, Information Systems, Cybersecurity, or related area.

Primary Location...
BLOCK-1, PRESTIGE TECH PACIFIC PARK, SY NO. 38/1, OUTER RING ROAD, KADUBEESANAHALLI, India
R-2237196

Posted 1 week ago

Apply

9.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description:

About Us
At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. Responsible Growth is how we run our company and how we deliver for our clients, teammates, communities, and shareholders every day. One of the keys to driving Responsible Growth is being a great place to work for our teammates around the world. We're devoted to being a diverse and inclusive workplace for everyone. We hire individuals with a broad range of backgrounds and experiences and invest heavily in our teammates and their families by offering competitive benefits to support their physical, emotional, and financial well-being. Bank of America believes both in the importance of working together and offering flexibility to our employees. We use a multi-faceted approach for flexibility, depending on the various roles in our organization. Working at Bank of America will give you a great career with opportunities to learn, grow and make an impact, along with the power to make a difference. Join us!

Global Business Services
Global Business Services delivers Technology and Operations capabilities to Lines of Business and Staff Support Functions of Bank of America through a centrally managed, globally integrated delivery model and globally resilient operations. Global Business Services is recognized for flawless execution, sound risk management, operational resiliency, operational excellence and innovation. In India, we are present in five locations and operate as BA Continuum India Private Limited (BACI), a non-banking subsidiary of Bank of America Corporation and the operating company for India operations of Global Business Services.

Process Overview*
The Data Analytics Strategy platform and decision tool team is responsible for the data strategy across CSWT and for developing the platforms that support it. The Data Science platform, Graph Data Platform, and Enterprise Events Hub are key platforms of the Data Platform initiative.

Job Description*
We're looking for a highly skilled Container Platform Engineer to architect, implement, and manage our cloud-agnostic Data Science and Analytical Platform. Leveraging OpenShift (or other Kubernetes distributions) as the core container orchestration layer, you'll build a scalable and secure infrastructure vital for ML workloads and shared services. This role is key to establishing a robust hybrid architecture, paving the way for seamless future migration to AWS, Azure, or GCP. This individual will work closely with data scientists, MLOps engineers, and platform teams to enable efficient model development, versioning, deployment, and monitoring within a multi-tenant environment.

Responsibilities*
Develop risk solutions to meet enterprise-wide regulatory requirements.
Monitor and manage large systems/platforms efficiently.
Contribute to story refinement and definition of requirements.
Participate in estimating work necessary to realize a story/requirement through the delivery lifecycle.
Mentor team members, advocate best practices, and promote a culture of continuous improvement and innovation in engineering processes.
Develop efficient utilities, automation frameworks, and data science platforms that can be utilized across multiple Data Science teams.
Propose and build a variety of efficient data pipelines to support ML model building and deployment.
Propose and build automated deployment pipelines to enable a self-help continuous deployment process for the Data Science teams.
Analyze, understand, execute and resolve issues in user scripts/models/code.
Perform release and upgrade activities as required.
Be well versed in open-source technology and aware of emerging third-party technology and tools in the AI/ML space.
Firefight, propose fixes, and guide the team through day-to-day issues in production.
Train partner Data Science teams on frameworks and the platform.
Be flexible with time and shifts to support project requirements; this does not include any night shift.
This position doesn't include any L1 or L2 (first line of support) responsibility.

Requirements*

Education*
Graduation/Post Graduation: BE/B.Tech/MCA/MTech
Certifications, if any: Azure, AWS, GCP, Databricks

Experience Range*
9+ years

Foundational Skills*
Platform Design & Deployment: design and deploy a comprehensive data science tech stack on OpenShift (or other Kubernetes distributions), including support for Jupyter notebooks, model training pipelines, inference services, and internal APIs.
Cloud-Agnostic Architecture: proven ability to build a cloud-agnostic container platform capable of seamless migration from on-prem OpenShift to cloud-native Kubernetes on AWS, Azure, or GCP.
Container Platform Management: expertise in configuring and managing multi-tenant namespaces, RBAC, network policies, and resource quotas within Kubernetes/OpenShift environments.
API Gateway & Security: hands-on experience with API gateway technologies like Apache APISIX (or similar tools) for managing and securing API traffic, including JWT/OAuth2-based authentication.
MLOps Toolchain Support: experience deploying and maintaining critical MLOps toolchains such as MLflow, Kubeflow, model registries, and feature stores.
CI/CD & GitOps: strong integration experience with GitOps and CI/CD tools (e.g., ArgoCD, Jenkins, GitHub Actions) for automating ML model and infrastructure deployment workflows.
Microservices Deployment: ability to deploy and maintain containerized microservices using Python frameworks (FastAPI, Flask) or Node.js to serve ML APIs (a minimal sketch follows this listing).
Observability: ensure comprehensive observability across platform components using industry-standard tools like Prometheus, Grafana, and EFK/ELK stacks.
Infrastructure as Code (IaC): proficiency in automating platform provisioning and configuration using Infrastructure as Code tools (Terraform, Ansible, or Helm).
Policy & Governance: expertise with Open Policy Agent (OPA) or similar policy-as-code frameworks for implementing and enforcing robust governance policies.

Desired Skills*
Lead the design, development, and implementation of scalable, high-performance applications using Python/Java/Scala.
Apply expertise in Machine Learning (ML) to build predictive models, enhance decision-making capabilities, and drive business insights.
Collaborate with cross-functional teams to design, implement, and optimize cloud-based architectures on AWS and Azure.
Work with large-scale distributed technologies like Apache Kafka, Apache Spark, and Apache Storm to ensure seamless data processing and messaging at scale.
Provide expertise in Java multi-threading, concurrency, and other advanced Java concepts to ensure the development of high-performance, thread-safe, and optimized applications.
Architect and build data lakes and data pipelines for large-scale data ingestion, processing, and analytics.
Ensure integration of complex systems and applications across various platforms while adhering to best practices in coding, testing, and deployment.
Collaborate closely with stakeholders to understand business requirements and translate them into technical specifications.
Manage technical risk and work on performance tuning, scalability, and optimization of systems.
Provide leadership to junior team members, offering guidance and mentorship to help develop their technical skills.
Effective communication, strong stakeholder engagement skills, and proven ability in leading and mentoring a team of software engineers in a dynamic environment.
Security Architecture: understanding of zero-trust security architecture and secure API design patterns.
Model Serving Frameworks: knowledge of specialized model serving frameworks like Triton Inference Server.
Vector Databases: familiarity with vector databases (e.g., Redis, Qdrant) and embedding stores.
Data Lineage & Metadata: exposure to data lineage and metadata management using tools like DataHub or OpenMetadata.

Work Timings*
11:30 AM to 8:30 PM IST

Job Location*
Chennai
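A minimal sketch of the microservices bullet above: a FastAPI service exposing a model behind a JSON scoring API. The route, request schema, and stand-in model are hypothetical; a real service would load a registered model (for example, from MLflow) instead.

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="model-serving")

class ScoreRequest(BaseModel):
    # Hypothetical feature vector; a real schema mirrors the trained model.
    features: list[float]

class ScoreResponse(BaseModel):
    score: float

def predict(features: list[float]) -> float:
    # Stand-in for a real model pulled from a registry such as MLflow.
    if not features:
        raise ValueError("empty feature vector")
    return sum(features) / len(features)

@app.post("/v1/score", response_model=ScoreResponse)
def score(req: ScoreRequest) -> ScoreResponse:
    try:
        return ScoreResponse(score=predict(req.features))
    except ValueError as exc:
        raise HTTPException(status_code=422, detail=str(exc))
```

Run locally with `uvicorn main:app`; in a platform like the one described, the same app would ship as a container image behind the API gateway with JWT/OAuth2 enforced at the gateway layer.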

Posted 1 week ago

Apply

0 years

0 Lacs

Kochi, Kerala, India

On-site

We’re looking for a PHP Developer with a strong backend foundation and familiarity with UI and DevOps workflows. In this hybrid role, you'll build scalable web applications and contribute across the full development lifecycle, from requirements to deployment.

You will be:
Developing and maintaining PHP 8 applications using clean, object-oriented code
Designing and implementing business logic, APIs, and database interactions
Contributing to sprint planning, estimations, and code reviews
Collaborating with UI/UX and DevOps teams to ensure smooth delivery
Owning the end-to-end development of custom web projects

You have in-depth experience in:
PHP frameworks like Laravel, Symfony, etc.
RDBMS systems such as MySQL and PostgreSQL
HTML, CSS, and JavaScript for basic frontend collaboration
Version control using Git and containerization via Docker

You add value with exposure to:
Cloud platforms: AWS, Azure, Google Cloud
CI/CD tools: Bitbucket Pipelines, AWS CodePipeline, Jenkins
Testing tools: PHPUnit, PEST
Search technologies: ElasticSearch, Algolia, Apache Solr
Frontend frameworks: Angular, React, Vue
Basic scripting (Bash or Python) for task automation

Why choose LiteBreeze:
Complex customized team projects and the opportunity to lead them!
Work on projects from North European clients
Excellent, clear career growth opportunities
Opportunity to implement new ideas and technologies
Free technical certifications like AWS
Opportunity to learn other backend technologies like Go and Node.js
Great Place to Work certified, three years in a row

Join us to work on cutting-edge, customized web projects for North European clients with clear growth paths and opportunities to expand your technical skills.

Posted 1 week ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Requirements

Description and Requirements

Position Summary
The resource is responsible for assisting MetLife Docker container support of application development teams. In this position the resource will support MetLife applications in an operational role, onboarding applications and troubleshooting infrastructure and application container issues, and will automate manual build processes using CI/CD pipelines.

Job Responsibilities
Development and maintenance in operational condition of OpenShift and Kubernetes orchestration container platforms
Experience in workload migration from Docker to the OpenShift platform
Manage the container platform ecosystem (installation, upgrade, patching, monitoring)
Check and apply critical patches in OpenShift/Kubernetes
Troubleshoot issues in OpenShift clusters (see the sketch after this listing)
Experience in OpenShift implementation, administration and support
Working experience in OpenShift and Docker/K8s
Knowledge of CI/CD methodology and tooling (Jenkins, Harness)
Experience with system configuration tools including Ansible and Chef
Cluster maintenance and administration experience on OpenShift and Kubernetes
Strong knowledge of and experience with RHEL Linux
Manage OpenShift management components and tenants
Participate as part of a technical team responsible for the overall support and management of the OpenShift Container Platform
Learn new technologies based on demand
Willingness to work in rotational shifts
Good communication skills, with the ability to communicate clearly and effectively

Knowledge, Skills and Abilities

Education
Bachelor's degree in computer science, Information Systems, or a related field

Experience
7+ years of total experience, with at least 4+ years in development and maintenance in operational condition of OpenShift and Kubernetes orchestration container platforms
Experience in installation, upgrade, patching and monitoring of the container platform ecosystem
Experience in workload migration from Docker to the OpenShift platform
Good knowledge of CI/CD methodology and tooling (Jenkins, Harness)
Linux administration
Software-defined networking (fundamentals)
Container runtimes (Podman/Docker), Kubernetes (OpenShift)/Swarm orchestration, GoLang framework and microservices architecture
Knowledge and usage of observability tools (e.g., Elastic, Grafana, Prometheus, OTEL collectors, Splunk)
Apache administration
Automation platforms: specifically Ansible (roles/collections)
SAFe DevOps Scaled Agile methodology
Scripting: Python, Bash
Serialization languages: YAML, JSON
Knowledge and usage of CI/CD tools (e.g., AzDO, ArgoCD)
Reliability management and troubleshooting
Collaboration and communication skills
Continuous Integration/Continuous Delivery (CI/CD)
Experience in creating change tickets and working on tasks in ServiceNow
Java management (JMX)/NodeJS management

Other Requirements (licenses, certifications, specialized training – if required)

About MetLife
Recognized on Fortune magazine's list of the 2025 "World's Most Admired Companies" and Fortune World's 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world's leading financial services companies, providing insurance, annuities, employee benefits and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East. Our purpose is simple: to help our colleagues, customers, communities, and the world at large create a more confident future.
United by purpose and guided by empathy, we're inspired to transform the next century in financial services. At MetLife, it's #AllTogetherPossible. Join us!
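A small sketch of the cluster troubleshooting named above, using the official kubernetes Python client (OpenShift exposes the same API); the namespace here is hypothetical.

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig; code running in-cluster would
# use config.load_incluster_config() instead.
config.load_kube_config()
v1 = client.CoreV1Api()

# Hypothetical tenant namespace; OpenShift projects map to namespaces.
namespace = "payments-prod"

for pod in v1.list_namespaced_pod(namespace).items:
    phase = pod.status.phase
    if phase != "Running":
        print(f"{pod.metadata.name}: {phase}")
        # Container-level detail often pinpoints the cause (e.g. CrashLoopBackOff).
        for cs in pod.status.container_statuses or []:
            if cs.state.waiting:
                print(f"  {cs.name}: {cs.state.waiting.reason}")
```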

Posted 1 week ago

Apply

0.0 - 1.0 years

0 - 1 Lacs

Noida

Work from Office

**ENGLISH COMMUNICATION MANDATORY**

Technical Skills Required:
6 months to 1 year of hands-on experience as a Linux Admin
Experience with Linux servers (RHEL) in virtualized environments
Scripting knowledge (Bash, Shell, etc.)
Installing, configuring and maintaining services (BIND, Apache, Oracle, NGINX, etc.)
Familiarity with load balancing, firewalls and infrastructure tools (iptables, HAProxy, IPVS)
Knowledge of virtualization (vSphere), networking (servers/switches), and frameworks like Apache Tomcat, Oracle Java, PHP
Experience with MySQL and infrastructure performance tuning
Monitoring, automation, and configuration management expertise

Process & Soft Skills:
Willing to work in shifts (6-day work week)
Strong communication skills (English & Hindi)
Good documentation and reporting skills
High attention to detail
Team player: shares knowledge, supports collaboration
Professional conduct when dealing with clients or at client sites

Posted 1 week ago

Apply

5.0 - 7.0 years

0 Lacs

Vadodara, Gujarat, India

On-site

We’re reinventing the market research industry. Let’s reinvent it together.
At Numerator, we believe tomorrow’s success starts with today’s market intelligence. We empower the world’s leading brands and retailers with unmatched insights into consumer behavior and the influencers that drive it.

We are seeking a highly skilled Senior Data Engineer with extensive experience in designing, building, and optimizing high-volume data pipelines. The ideal candidate will have strong expertise in Python, Databricks on Azure Cloud services, DevOps, and CI/CD tools, along with a solid understanding of AI/ML techniques and big data processing frameworks like Apache Spark and PySpark (a short sketch follows this listing).

Responsibilities
Adhere to coding and Numerator technology standards
Build suitable automation test suites within Azure DevOps
Maintain and update automation test suites as required
Carry out manual testing, load testing and exploratory testing as required
Work closely with Business Analysts and Senior Developers to consistently achieve sprint goals
Assist in estimation of sprint-by-sprint stories and tasks
Proactively take a responsible approach to product delivery

What You'll Bring to Numerator

Requirements
5-7 years of experience in data engineering roles
Good Python skills
Experience working with Microsoft Azure Cloud
Experience in Agile methodologies (Scrum/Kanban)
Experience with Apache Spark, PySpark and Databricks
Experience working with DevOps pipelines, preferably Azure DevOps

Preferred Qualifications
Bachelor's or master's degree in computer science, Information Technology, Data Science, or a related field
Experience working in a support-focused role
Certification in a relevant data engineering discipline or related field
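A minimal PySpark sketch of the kind of pipeline work described above: aggregating raw records into a curated table, Databricks-style. The paths, columns, and table layout are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("purchase-rollup").getOrCreate()

# Hypothetical input: one row per scanned receipt item.
items = spark.read.parquet("/mnt/raw/receipt_items")

daily = (
    items
    .withColumn("day", F.to_date("scanned_at"))
    .groupBy("day", "retailer")
    .agg(
        F.countDistinct("receipt_id").alias("baskets"),
        F.sum("amount").alias("revenue"),
    )
)

# Delta is the default table format on Databricks; partitioning by day keeps
# incremental reprocessing cheap.
daily.write.format("delta").mode("overwrite").partitionBy("day").save(
    "/mnt/curated/daily_sales"
)
```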

Posted 1 week ago

Apply

5.0 years

0 Lacs

Kochi, Kerala, India

On-site

This role is for one of Weekday's clients.
Location: Chennai, Pune, Kochi
Job Type: Full-time

Requirements
We are looking for a skilled and versatile Java FSD AWS Developer to join our client's Agile/SAFe development teams. In this role, you will participate in the design, development, integration, and deployment of enterprise-grade applications built on both modern cloud-native (AWS) and legacy mainframe architectures. You will ensure high-quality, testable, secure, and compliant code while collaborating in a fast-paced Agile setup.

Key Responsibilities:

Agile Participation & Code Quality
Active involvement in Scrum and SAFe team events, including planning, daily stand-ups, reviews, and retrospectives
Create and validate testable features, ensuring coverage of both functional and non-functional requirements
Deliver high-quality code through practices like Pair Programming and Test-Driven Development (TDD)
Maintain operability, deployability, and integration readiness of application increments
Ensure full compliance with internal frameworks such as PITT and established security protocols (SAST, DAST)

Development & Integration
Develop software solutions using a diverse tech stack: TypeScript, Java, SQL, Python, COBOL, shell scripting; Spring Boot, Angular, Node.js, Hibernate
Work across multiple environments and technologies including Linux, Apache, Tomcat, Elasticsearch, IBM DB2
Build and maintain web applications, backend services, and APIs using modern and legacy technologies

AWS & Cloud Infrastructure
Hands-on development and deployment with AWS services: EKS, ECR, IAM, SQS, SES, S3, CloudWatch
Develop Infrastructure as Code using Terraform
Ensure system reliability, monitoring, and traceability using tools like Splunk, UXMon, and AWS CloudWatch

Systems & Batch Integration
Work with Kafka, particularly Streamzilla Kafka from PAG, for high-throughput messaging
Design and consume both REST and SOAP APIs for integration with third-party and internal systems
Manage and automate batch job scheduling via IBM Tivoli Workload Scheduler (TWS/OPC) and HostJobs

Required Skills & Experience:
5+ years of experience in full stack development, DevOps, and mainframe integration
Strong programming experience in:
Languages: TypeScript, Java, Python, COBOL, shell scripting
Frameworks & Tools: Angular, Spring Boot, Hibernate, Node.js
Databases: SQL, IBM DB2, Elasticsearch
Proficient in AWS cloud services including container orchestration, IAM, S3, CloudWatch, SES, SQS, and Terraform
Strong understanding of API development and integration (REST & SOAP)
Experience in secure software development using SAST/DAST, TDD, and compliance frameworks (e.g., PITT)
Familiarity with Kafka messaging systems, particularly Streamzilla Kafka
Monitoring and observability experience using tools like Splunk, UXMon, or equivalents

Preferred Qualifications:
Experience with PCSS Toolbox or similar enterprise tooling
Prior exposure to highly regulated industries (e.g., automotive, banking, insurance)
Bachelor's or Master's degree in Computer Science, Information Technology, or related fields
Certifications in AWS or DevOps tools are a plus

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Position Overview:
We are seeking a talented Data Engineer with expertise in Apache Spark, Python/Java and distributed systems. The ideal candidate will be skilled in creating and managing data pipelines using AWS.

Key Responsibilities:
Design, develop, and implement data pipelines for ingesting, transforming, and loading data at scale.
Utilise Apache Spark for data processing and analysis.
Utilise AWS services (S3, Redshift, EMR, Glue) to build and manage efficient data pipelines.
Optimise data pipelines for performance and scalability, considering factors like partitioning, bucketing, and caching (see the sketch after this listing).
Write efficient and maintainable Python code.
Implement and manage distributed systems for data processing.
Collaborate with cross-functional teams to understand data requirements and deliver optimal solutions.
Ensure data quality and integrity throughout the data lifecycle.

Qualifications:
Proven experience with Apache Spark and Python/Java.
Strong knowledge of distributed systems.
Proficiency in creating data pipelines with AWS.
Excellent problem-solving and analytical skills.
Ability to work independently and as part of a team.
Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent experience).
Proven experience in designing and developing data pipelines using Apache Spark and Python.
Experience with distributed systems concepts (Hadoop, YARN) is a plus.
In-depth knowledge of AWS cloud services for data engineering (S3, Redshift, EMR, Glue).
Familiarity with data warehousing concepts (data modeling, ETL) is preferred.
Strong programming skills in Python (Pandas, NumPy; Scikit-learn is a plus).
Experience with data pipeline orchestration tools (Airflow, Luigi) is a plus.
Strong communication and collaboration skills.

Preferred Qualifications:
Experience with additional AWS services (e.g., AWS Glue, AWS Lambda, Amazon Redshift).
Familiarity with data warehousing and ETL processes.
Knowledge of data governance and best practices.
A good understanding of OOP concepts.
Hands-on experience with SQL database design.
Experience with Python, SQL, and data visualization/exploration tools.
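To ground the optimization bullet, a small PySpark sketch showing the partitioning and caching levers the posting mentions on an S3-backed pipeline; the bucket names and columns are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-etl").getOrCreate()

# Hypothetical S3 locations; on EMR or Glue these resolve via the instance role.
events = spark.read.json("s3://example-raw/events/")

cleaned = (
    events
    .filter(F.col("event_type").isNotNull())
    .withColumn("event_date", F.to_date("event_ts"))
)

# Cache because two downstream writes reuse the same intermediate result.
cleaned.cache()

# Partitioning the output by date lets Athena/Redshift Spectrum prune scans.
cleaned.write.mode("append").partitionBy("event_date").parquet(
    "s3://example-curated/events/"
)

# A second, deduplicated view built from the same cached data.
cleaned.dropDuplicates(["event_id"]).write.mode("overwrite").parquet(
    "s3://example-curated/events_dedup/"
)
```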

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

This role is for one of Weekday's clients.
Location: Chennai, Pune, Kochi
Job Type: Full-time

Requirements
We are looking for a skilled and versatile Java FSD AWS Developer to join our client's Agile/SAFe development teams. In this role, you will participate in the design, development, integration, and deployment of enterprise-grade applications built on both modern cloud-native (AWS) and legacy mainframe architectures. You will ensure high-quality, testable, secure, and compliant code while collaborating in a fast-paced Agile setup.

Key Responsibilities:

Agile Participation & Code Quality
Active involvement in Scrum and SAFe team events, including planning, daily stand-ups, reviews, and retrospectives
Create and validate testable features, ensuring coverage of both functional and non-functional requirements
Deliver high-quality code through practices like Pair Programming and Test-Driven Development (TDD)
Maintain operability, deployability, and integration readiness of application increments
Ensure full compliance with internal frameworks such as PITT and established security protocols (SAST, DAST)

Development & Integration
Develop software solutions using a diverse tech stack: TypeScript, Java, SQL, Python, COBOL, shell scripting; Spring Boot, Angular, Node.js, Hibernate
Work across multiple environments and technologies including Linux, Apache, Tomcat, Elasticsearch, IBM DB2
Build and maintain web applications, backend services, and APIs using modern and legacy technologies

AWS & Cloud Infrastructure
Hands-on development and deployment with AWS services: EKS, ECR, IAM, SQS, SES, S3, CloudWatch
Develop Infrastructure as Code using Terraform
Ensure system reliability, monitoring, and traceability using tools like Splunk, UXMon, and AWS CloudWatch

Systems & Batch Integration
Work with Kafka, particularly Streamzilla Kafka from PAG, for high-throughput messaging
Design and consume both REST and SOAP APIs for integration with third-party and internal systems
Manage and automate batch job scheduling via IBM Tivoli Workload Scheduler (TWS/OPC) and HostJobs

Required Skills & Experience:
5+ years of experience in full stack development, DevOps, and mainframe integration
Strong programming experience in:
Languages: TypeScript, Java, Python, COBOL, shell scripting
Frameworks & Tools: Angular, Spring Boot, Hibernate, Node.js
Databases: SQL, IBM DB2, Elasticsearch
Proficient in AWS cloud services including container orchestration, IAM, S3, CloudWatch, SES, SQS, and Terraform
Strong understanding of API development and integration (REST & SOAP)
Experience in secure software development using SAST/DAST, TDD, and compliance frameworks (e.g., PITT)
Familiarity with Kafka messaging systems, particularly Streamzilla Kafka
Monitoring and observability experience using tools like Splunk, UXMon, or equivalents

Preferred Qualifications:
Experience with PCSS Toolbox or similar enterprise tooling
Prior exposure to highly regulated industries (e.g., automotive, banking, insurance)
Bachelor's or Master's degree in Computer Science, Information Technology, or related fields
Certifications in AWS or DevOps tools are a plus

Posted 1 week ago

Apply