
3773 Scala Jobs - Page 7

Note: JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

5.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

Experience: 5+ years | Notice Period: Immediate to 15 days | Interview: 3 rounds (virtual)
Mandatory Skills: Apache Spark, Hive, Hadoop, Scala, Databricks

Job Description

The Role:
• Designing and building optimized data pipelines using cutting-edge technologies in a cloud environment to drive analytical insights.
• Constructing infrastructure for efficient ETL processes from various sources and storage systems.
• Leading the implementation of algorithms and prototypes to transform raw data into useful information.
• Architecting, designing, and maintaining database pipeline architectures, ensuring readiness for AI/ML transformations.
• Creating innovative data validation methods and data analysis tools.
• Ensuring compliance with data governance and security policies.
• Interpreting data trends and patterns to establish operational alerts.
• Developing analytical tools, programs, and reporting mechanisms.
• Conducting complex data analysis and presenting results effectively.
• Preparing data for prescriptive and predictive modeling.
• Continuously exploring opportunities to enhance data quality and reliability.
• Applying strong programming and problem-solving skills to develop scalable solutions.

Requirements:
• Experience with Big Data technologies (Hadoop, Spark, NiFi, Impala).
• 5+ years of hands-on experience designing, building, deploying, testing, maintaining, monitoring, and owning scalable, resilient, distributed data pipelines.
• High proficiency in Scala/Java and Spark for large-scale data processing.
• Expertise with big data technologies, including Spark, Data Lake, and Hive.
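For illustration only (not part of the posting): a minimal Scala sketch of the kind of Spark batch ETL job these requirements describe. The input/output paths, schema, and column names are assumptions.

// Minimal Spark ETL sketch in Scala; paths and columns are hypothetical.
import org.apache.spark.sql.{SparkSession, functions => F}

object PipelineSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("etl-sketch")
      .enableHiveSupport()              // Hive integration, as the posting lists Hive
      .getOrCreate()

    // Ingest raw events (assumed CSV landing zone)
    val raw = spark.read.option("header", "true").csv("/data/raw/events")

    // Basic validation: drop rows missing a key, then de-duplicate
    val cleaned = raw
      .filter(F.col("event_id").isNotNull)
      .dropDuplicates("event_id")

    // Write partitioned output ready for downstream analytics
    cleaned.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("/data/curated/events")

    spark.stop()
  }
}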

Posted 1 day ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Location: Hyderabad
Budget: 3.5x
Notice: Immediate joiners

Requirements:
• BS degree in computer science, computer engineering, or equivalent
• 5-9 years of experience delivering enterprise software solutions
• Familiarity with Spark, Scala, Python, and AWS Cloud technologies
• 2+ years of experience across multiple Hadoop/Spark technologies such as Hadoop, MapReduce, HDFS, HBase, Hive, Flume, Sqoop, Kafka, and Scala
• A flair for data, schemas, and data models, and for bringing efficiency to the big data life cycle
• Experience with Agile development methodologies
• Experience with data ingestion and transformation
• Understanding of secure application development methodologies
• Experience with Airflow and Python preferred
• Understanding of automated QA needs related to Big Data technology
• Strong object-oriented design and analysis skills
• Excellent written and verbal communication skills

Responsibilities:
• Utilize your software engineering skills, including Spark, Python, and Scala, to analyze disparate, complex systems and collaboratively design new products and services
• Integrate new data sources and tools
• Implement scalable and reliable distributed data replication strategies
• Collaborate with other teams to design, develop, and deploy data tools that support both operations and product use cases
• Perform analysis of large data sets using components from the Hadoop ecosystem
• Own product features from development and testing through to production deployment
• Evaluate big data technologies and prototype solutions to improve our data processing architecture
• Automate different pipelines
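For illustration only: a small Scala sketch of querying a Hive table through Spark, one of the Hadoop-ecosystem skills listed above. The database and table names are assumptions.

// Spark-on-Hive sketch; sales.transactions is a hypothetical table.
import org.apache.spark.sql.SparkSession

object HiveQuerySketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("hive-sketch")
      .enableHiveSupport()
      .getOrCreate()

    // Aggregate a Hive table and persist the result back to Hive
    val daily = spark.sql(
      """SELECT txn_date, COUNT(*) AS txn_count
        |FROM sales.transactions
        |GROUP BY txn_date""".stripMargin)

    daily.write.mode("overwrite").saveAsTable("sales.daily_txn_counts")
    spark.stop()
  }
}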

Posted 1 day ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Client: Our client is a global IT services company headquartered in Southborough, Massachusetts, USA. Founded in 1996, with revenue of $1.8B and 35,000+ associates worldwide, it specializes in digital engineering and IT services, helping clients modernize their technology infrastructure, adopt cloud and AI solutions, and accelerate innovation. It partners with major firms in banking, healthcare, telecom, and media, and is known for combining deep industry expertise with agile development practices, enabling scalable and cost-effective digital transformation. The company operates in over 50 locations across more than 25 countries, has delivery centers in Asia, Europe, and North America, and is backed by Baring Private Equity Asia.

Job Title: Data Engineer
Key Skills: Python, ETL, Snowflake, Apache Airflow
Job Locations: Pan India
Experience: 6-7 years
Education Qualification: Any graduation
Work Mode: Hybrid
Employment Type: Contract
Notice Period: Immediate

Job Description:
• 6 to 10 years of experience in data engineering roles with a focus on building scalable data solutions.
• Proficiency in Python for ETL, data manipulation, and scripting.
• Hands-on experience with Snowflake or equivalent cloud-based data warehouses.
• Strong knowledge of orchestration tools such as Apache Airflow or similar.
• Expertise in implementing and managing messaging queues like Kafka, AWS SQS, or similar.
• Demonstrated ability to build and optimize data pipelines at scale, processing terabytes of data.
• Experience in data modeling, data warehousing, and database design.
• Proficiency in working with cloud platforms like AWS, Azure, or GCP.
• Strong understanding of CI/CD pipelines for data engineering workflows.
• Experience working in an Agile development environment, collaborating with cross-functional teams.

Preferred Skills:
• Familiarity with other programming languages like Scala or Java for data engineering tasks.
• Knowledge of containerization and orchestration technologies (Docker, Kubernetes).
• Experience with stream processing frameworks like Apache Flink.
• Experience with Apache Iceberg for data lake optimization and management.
• Exposure to machine learning workflows and integration with data pipelines.

Soft Skills:
• Strong problem-solving skills with a passion for solving complex data challenges.
• Excellent communication and collaboration skills to work with cross-functional teams.
• Ability to thrive in a fast-paced, innovative environment.
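For illustration only: a minimal Kafka producer in Scala using the standard Java client (kafka-clients), the kind of messaging-queue integration this role lists. The broker address and topic name are assumptions.

// Kafka producer sketch; broker and topic are hypothetical.
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

object KafkaProducerSketch {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put("bootstrap.servers", "localhost:9092")
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

    val producer = new KafkaProducer[String, String](props)
    try {
      // Enqueue one record on a hypothetical topic
      producer.send(new ProducerRecord("ingest-events", "key-1", """{"id": 1}"""))
      producer.flush()
    } finally producer.close()
  }
}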

Posted 1 day ago

Apply

4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Role: Java Developer - Software Engineer
Experience: 4-9 years
Location: Chennai (hybrid)
Interview: Face-to-face
Mandatory: Java, Spring Boot, microservices, React JS, AWS Cloud, DevOps; Node.js an added advantage

Job Description:
• Overall 4+ years of experience in Java development projects
• 3+ years of development experience with React
• 2+ years of experience in AWS Cloud and DevOps
• Microservices development using Spring Boot
• Technical stack: Core Java, Java, J2EE, Spring, MongoDB, GKE, Terraform, GitHub, GCP Developer, Kubernetes, Scala, Kafka
• Technical tools: Confluence/Jira/Bitbucket or Git, CI/CD (Maven, Git, Jenkins), Eclipse or IntelliJ IDEA
• Experience in event-driven architectures (CQRS and SAGA patterns)
• Experience with design patterns
• Build tools (Gulp, Webpack), Jenkins, Docker, automation, Bash, Redis, Elasticsearch, Kibana
• Technical stack (UI): JavaScript, React JS, CSS/SCSS, HTML5, Git

Posted 1 day ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

• 5 years of experience as a Data Engineer or in a similar role.
• Bachelor's degree in Computer Science, Data Engineering, Information Technology, or a related field.
• Strong knowledge of data engineering tools and technologies (e.g. SQL, ETL, data warehousing).
• Experience with data pipeline frameworks and data processing platforms (e.g. Apache Kafka, Apache Spark).
• Proficiency in programming languages such as Python, Java, or Scala.
• Experience with cloud platforms (e.g. AWS, Google Cloud Platform, Azure).
• Knowledge of data modeling, database design, and data governance.
• MongoDB is a must.

Posted 1 day ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Introduction
At IBM, work is more than a job - it's a calling: to build, to design, to code, to consult, to think along with clients and sell, to make markets, to invent, to collaborate. Not just to do something better, but to attempt things you've never thought possible. Are you ready to lead in this new era of technology and solve some of the world's most challenging problems? If so, let's talk.

Your Role and Responsibilities
As a software developer, you will:
• Design, develop, implement, automate, deploy, and operate enterprise-quality cloud-native software using a microservice architecture through agile development practices.
• Perform design and implementation reviews for other developers on multiple projects.
• Regularly develop automated testing and maintenance of microservices.
• Handle the operations of services deployed on the cloud.
• Prepare, write, or review technical documentation, such as content for product documentation and training materials.
• Prepare and present technology architecture/design and demos at playback meetings to show progress and solicit team direction.
• Assist customers and other IBMers to effectively use our products and learn from their experiences.
• Identify and recommend improvements to continuously evolve and improve.

Preferred Education: Bachelor's degree

Required Technical and Professional Expertise
• 2+ years of strong development experience in cloud-native reactive applications in Java
• Expertise in OOP and design patterns
• Strong development experience with REST API-based microservices
• Working knowledge of DB/SQL
• Working knowledge of container technologies: Kubernetes, Docker
• Excellent verbal and written communication skills with the ability to present complex technical information
• Self-starter: organized, willing to learn and solve things on your own
• Ability to work effectively as part of a worldwide, agile development team
• Proven track record of owning development projects from design through implementation and delivery

Preferred Technical and Professional Experience
• Development experience in Python, Scala, Spark/Spark MLlib, and UNIX shell scripting
• Agile application development and Scrum methodologies
• Experience with public cloud services (AWS, Azure, IBM Cloud)
• Good debugging and troubleshooting skills; able to participate in quality and automation of the product as needed, as quality is integral to the product
• Experience with DevOps practices
• Team mindset: willingness to collaborate and iterate
• Growth mindset: willingness to learn new technologies and processes
• Preferred: Bachelor's or Master's degree in Computer Science/Information Technology, BCA, MCA

Posted 1 day ago

Apply

50.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Client: Our client is a French multinational information technology (IT) services and consulting company, headquartered in Paris, France. Founded in 1967, it has been a leader in business transformation for over 50 years, leveraging technology to address a wide range of business needs, from strategy and design to managing operations. The company is committed to unleashing human energy through technology for an inclusive and sustainable future, helping organizations accelerate their transition to a digital and sustainable world. It provides a variety of services, including consulting, technology, professional, and outsourcing services.

Job Details:
Location: Hyderabad
Mode of Work: Hybrid
Notice Period: Immediate joiners
Experience: 6-8 years
Type of Hire: Contract to hire

Job Description:
• Understanding of Spark core concepts: RDDs, DataFrames, Datasets, Spark SQL, and Spark Streaming.
• Experience with Spark optimization techniques.
• Deep knowledge of Delta Lake features such as time travel, schema evolution, and data partitioning.
• Ability to design and implement data pipelines using Spark, with Delta Lake as the data storage layer.
• Proficiency in Python/Scala/Java for Spark development and integration with ETL processes.
• Knowledge of data ingestion techniques from various sources (flat files, CSV, APIs, databases).
• Understanding of data quality best practices and data validation techniques.

Other Skills:
• Understanding of data warehouse concepts and data modelling techniques.
• Expertise in Git for code management.
• Familiarity with CI/CD pipelines and containerization technologies.
• Nice to have: experience using data integration tools like DataStage, Prophecy, Informatica, or Ab Initio.
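For illustration only: a compact Scala sketch of the Delta Lake features named above (schema evolution via mergeSchema, time travel via versionAsOf), assuming the delta-spark library is on the classpath; the table path is hypothetical.

// Delta Lake sketch: write, evolve the schema, then time-travel read.
import org.apache.spark.sql.SparkSession

object DeltaSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("delta-sketch")
      .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
      .config("spark.sql.catalog.spark_catalog",
              "org.apache.spark.sql.delta.catalog.DeltaCatalog")
      .getOrCreate()
    import spark.implicits._

    val path = "/tmp/delta/events"   // hypothetical table location

    // Initial write creates version 0 of the table
    Seq((1, "a"), (2, "b")).toDF("id", "label")
      .write.format("delta").mode("overwrite").save(path)

    // Schema evolution: append a frame with an extra column via mergeSchema
    Seq((3, "c", "extra")).toDF("id", "label", "note")
      .write.format("delta").mode("append")
      .option("mergeSchema", "true").save(path)

    // Time travel: read the table as it existed at version 0
    val v0 = spark.read.format("delta").option("versionAsOf", 0).load(path)
    v0.show()

    spark.stop()
  }
}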

Posted 1 day ago

Apply

7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

TEKsystems is seeking a Senior AWS + Data Engineer to join our dynamic team. The ideal candidate should have data engineering expertise with Hadoop, Scala/Python, and AWS services. This role involves designing, developing, and maintaining scalable and reliable software solutions.

Job Title: Data Engineer – Spark/Scala (Batch Processing)
Location: Manyata (hybrid)
Experience: 7+ years
Type: Full-time

Mandatory Skills:
• 7-10 years' experience in design, architecture, or development in analytics and data warehousing.
• Experience building end-to-end solutions on a big data platform with Spark or Scala programming.
• 5 years of solid experience in ETL pipeline building with the Spark or Scala programming framework, with knowledge of developing UNIX shell scripts and Oracle SQL/PL-SQL.
• Experience with big data platforms for ETL development on the AWS cloud.
• Proficiency in AWS cloud services, specifically EC2, S3, Lambda, Athena, Kinesis, Redshift, Glue, EMR, DynamoDB, IAM, Secrets Manager, Step Functions, SQS, SNS, and CloudWatch.
• Excellent skills in Python-based framework development are mandatory.
• Experience with Oracle SQL database programming, SQL performance tuning, and relational model analysis.
• Extensive experience with Teradata data warehouses and Cloudera Hadoop.
• Proficiency across enterprise Analytics/BI/DW/ETL technologies such as Teradata Control Framework, Tableau, OBIEE, SAS, Apache Spark, and Hive.
• Appreciation of analytics and BI architecture and broad experience across all technology disciplines.
• Experience working within a Data Delivery Life Cycle framework and Agile methodology.
• Extensive experience in large enterprise environments handling large volumes of datasets with high SLAs.
• Good knowledge of developing UNIX scripts, Oracle SQL/PL-SQL, and Autosys JIL scripts.
• Well versed in AI-powered engineering tools like Cline and GitHub Copilot.

Please send resumes to nvaseemuddin@teksystems.com or kebhat@teksystems.com.
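For illustration only: a minimal Scala sketch of a Spark batch job reading from and writing to S3 through the s3a connector (hadoop-aws assumed on the classpath). Bucket names, paths, and columns are assumptions.

// Spark-on-S3 batch sketch; buckets and columns are hypothetical.
import org.apache.spark.sql.{SparkSession, functions => F}

object S3BatchSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("s3-batch-sketch").getOrCreate()

    val orders = spark.read.parquet("s3a://example-raw-bucket/orders/")

    // Simple batch aggregation before loading into the curated layer
    val summary = orders
      .groupBy("order_date")
      .agg(F.sum("amount").as("total_amount"),
           F.count(F.lit(1)).as("order_count"))

    summary.write.mode("overwrite")
      .parquet("s3a://example-curated-bucket/orders_daily/")

    spark.stop()
  }
}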

Posted 1 day ago

Apply

7.0 years

0 Lacs

Gurugram, Haryana

On-site

Engineer III, Database Engineering
Gurgaon, India; Hyderabad, India | Information Technology | Job ID: 316332

About The Role:
Grade Level (for internal use): 10

Role: As a Senior Database Engineer, you will work on multiple datasets that enable S&P Capital IQ Pro to serve up value-added ratings, research, and related information to institutional clients.

The Team: Our team is responsible for gathering data from multiple sources spread across the globe using different mechanisms (ETL/GG/SQL Rep/Informatica/Data Pipeline) and converting it to a common format which can be used by client-facing UI tools and other data-providing applications. This application is the backbone of many S&P applications and is critical to our clients' needs. You will get to work on a wide range of technologies and tools like Oracle/SQL/.NET/Informatica/Kafka/Sonic. You will have the opportunity every day to work with people from a wide variety of backgrounds and will be able to develop a close team dynamic with coworkers from around the globe. We craft strategic implementations by using the broader capacity of the data and product. Do you want to be part of a team that executes cross-business solutions within S&P Global?

Impact: Our team is responsible for delivering essential and business-critical data with applied intelligence to power the market of the future, enabling our customers to make decisions with conviction. Contribute significantly to the growth of the firm by:
• Developing innovative functionality in existing and new products
• Supporting and maintaining high-revenue productionized products
• Achieving the above intelligently and economically using best practices

Career: This is the place to hone your existing database skills while having the chance to become exposed to fresh technologies. As an experienced member of the team, you will have the opportunity to mentor and coach developers who have recently graduated, and collaborate with developers, business analysts, and product managers who are experts in their domain.

Your skills: You should be able to demonstrate outstanding knowledge and hands-on experience in the areas below:
• Complete SDLC: architecture, design, development, and support of tech solutions
• Play a key role in the development team to build high-quality, high-performance, scalable code
• Engineer components and common services based on standard corporate development models, languages, and tools
• Produce technical design documents and conduct technical walkthroughs
• Collaborate effectively with technical and non-technical stakeholders
• Be part of a culture that continuously improves the technical design and code base
• Document and demonstrate solutions using technical design docs, diagrams, and stubbed code

Our Hiring Manager says: "I'm looking for a person who gets excited about technology and is motivated by seeing how our individual contributions and teamwork on world-class web products affect the workflow of thousands of clients, resulting in revenue for the company."

Qualifications Required:
• Bachelor's degree in Computer Science, Information Systems, or Engineering
• 7+ years of experience with transactional databases like SQL Server, Oracle, and PostgreSQL, and NoSQL databases like Amazon DynamoDB and MongoDB
• Strong database development skills on SQL Server and Oracle
• Strong knowledge of database architecture, data modeling, and data warehousing
• Knowledge of object-oriented design and design patterns
• Familiarity with various design and architectural patterns
• Strong development experience with Microsoft SQL Server
• Experience in cloud-native development and AWS is a big plus
• Experience with Kafka/Sonic broker messaging systems

Nice to have:
• Experience in developing data pipelines using Java or C# is a significant advantage
• Strong knowledge of ETL tools such as Informatica and SSIS; exposure to Informatica is an advantage
• Familiarity with Agile and Scrum models
• Working knowledge of VSTS
• Working knowledge of the AWS cloud is an added advantage
• Understanding of fundamental design principles for building a scalable system
• Understanding of financial markets and asset classes like equity, commodity, fixed income, options, and indices/benchmarks is desirable
• Additionally, experience with Scala, Python, and Spark applications is a plus

About S&P Global Market Intelligence:
At S&P Global Market Intelligence, a division of S&P Global, we understand the importance of accurate, deep, and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction. For more information, visit www.spglobal.com/marketintelligence.

What's In It For You?

Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments, and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all: from finding new ways to measure sustainability, to analyzing energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people; that's why we provide everything you and your career need to thrive at S&P Global. Our benefits include:
• Health & Wellness: health care coverage designed for the mind and body
• Flexible Downtime: generous time off helps keep you energized for your time on
• Continuous Learning: access a wealth of resources to grow your career and learn valuable new skills
• Invest in Your Future: secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs
• Family Friendly Perks: it's not just about you; S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families
• Beyond the Basics: from retail discounts to referral incentive awards, small perks can make a big difference

For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, "pre-employment training," or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity.

Equal Opportunity Employer: S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

20 - Professional (EEO-2 Job Categories - United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group)
Job ID: 316332
Posted On: 2025-06-16
Location: Gurgaon, Haryana, India

Posted 1 day ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana

On-site

Location: Gurugram, Haryana, India
Job ID: GGN00001744 | Information Technology | Full-Time | Posted: 06/16/2025

Achieving our goals starts with supporting yours. Grow your career, access top-tier health and wellness benefits, build lasting connections with your team and our customers, and travel the world using our extensive route network. Come join us to create what's next. Let's define tomorrow, together.

Description
United's Digital Technology team designs, develops, and maintains massively scaling technology solutions brought to life with innovative architectures, data analytics, and digital solutions.

Our Values: At United Airlines, we believe that inclusion propels innovation and is the foundation of all that we do. Our shared purpose, "Connecting people. Uniting the world.", drives us to be the best airline for our employees, customers, and everyone we serve, and we can only do that with a truly diverse and inclusive workforce. Our team spans the globe and is made up of diverse individuals all working together with cutting-edge technology to build the best airline in the history of aviation. With multiple employee-run "Business Resource Group" communities and world-class benefits like health insurance, parental leave, and space-available travel, United is truly a one-of-a-kind place to work that will make you feel welcome and accepted. Come join our team and help us make a positive impact on the world.

Job overview and responsibilities
United Airlines is seeking talented people to join the Data Engineering team. The Data Engineering organization is responsible for driving data-driven insights and innovation to support the data needs of commercial and operational projects with a digital focus. You will work as a Senior Engineer - Machine Learning and collaborate with data scientists and data engineers to:
• Build high-performance, cloud-native machine learning infrastructure and services to enable rapid innovation across United
• Build complex data ingestion and transformation pipelines for batch and real-time data
• Support large-scale model training and serving pipelines in a distributed and scalable environment

This position is offered on local terms and conditions within United's wholly owned subsidiary United Airlines Business Services Pvt. Ltd. Expatriate assignments and sponsorship for employment visas, even on a time-limited visa status, will not be awarded.

United Airlines is an equal opportunity employer. United Airlines recruits, employs, trains, compensates, and promotes regardless of race, religion, color, national origin, gender identity, sexual orientation, physical ability, age, veteran status, and other protected status as required by applicable law.

Qualifications

Required:
• BS/BA in Computer Science, Data Science, Engineering, Mathematics, or a related discipline
• Strong software engineering experience with Python and at least one additional language such as Go, Java, or C/C++
• Familiarity with ML methodologies and frameworks (e.g., PyTorch, TensorFlow), preferably including building and deploying production ML pipelines
• Experience developing cloud-native solutions with Docker and Kubernetes
• Cloud-native DevOps and CI/CD experience using tools such as Jenkins or AWS CodePipeline; preferably experience with GitOps using tools such as ArgoCD, Flux, or Jenkins X
• Experience building real-time and event-driven stream processing pipelines with technologies such as Kafka, Flink, and Spark
• Experience setting up and optimizing data stores (RDBMS/NoSQL) for production use in an ML context
• Strong desire to stay aligned with the latest developments in cloud-native and ML ops/engineering and to experiment with and learn new technologies

Experience:
• 3+ years of software engineering experience with languages such as Python, Go, Java, Scala, Kotlin, or C/C++
• 2+ years of experience working in cloud environments (AWS preferred)
• 2+ years of experience with Big Data technologies such as Spark and Flink
• 2+ years of experience with cloud-native DevOps and CI/CD
• At least one year of experience with Docker and Kubernetes in a production environment
• Must be legally authorized to work in India for any employer without sponsorship
• Must be fluent in English and Hindi (written and spoken)
• Successful completion of the interview is required to meet job qualifications
• Reliable, punctual attendance is an essential function of the position

Preferred:
• Master's degree in Computer Science or a related STEM field
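For illustration only: a minimal Scala sketch of a Spark Structured Streaming pipeline consuming from Kafka, the kind of real-time ingestion described above. The broker, topic, and output paths are assumptions, and the spark-sql-kafka connector is assumed on the classpath.

// Structured Streaming sketch: Kafka in, parquet out; all locations hypothetical.
import org.apache.spark.sql.SparkSession

object StreamSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("stream-sketch").getOrCreate()

    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092")
      .option("subscribe", "ml-feature-events")
      .load()
      .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")

    // Land the raw stream as parquet for downstream training pipelines
    val query = events.writeStream
      .format("parquet")
      .option("path", "/data/stream/events")
      .option("checkpointLocation", "/data/stream/_checkpoints/events")
      .start()

    query.awaitTermination()
  }
}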

Posted 1 day ago

Apply

6.0 years

0 Lacs

Gurugram, Haryana

On-site

Location: Gurugram, Haryana, India
Job ID: GGN00001926 | Information Technology | Full-Time | Posted: 06/16/2025

Achieving our goals starts with supporting yours. Grow your career, access top-tier health and wellness benefits, build lasting connections with your team and our customers, and travel the world using our extensive route network. Come join us to create what's next. Let's define tomorrow, together.

Description
United's Digital Technology team designs, develops, and maintains massively scaling technology solutions brought to life with innovative architectures, data analytics, and digital solutions. Find your future at United! We're reinventing what our industry looks like, and what an airline can be: from the planes we fly to the people who fly them. When you join us, you're joining a global team of 100,000+ connected by a shared passion with a wide spectrum of experience and skills to lead the way forward. Achieving our ambitions starts with supporting yours. Evolve your career and find your next opportunity. Get the care you need with industry-leading health plans and best-in-class programs to support your emotional, physical, and financial wellness. Expand your horizons with travel across the world's biggest route network. Connect outside your team through employee-led Business Resource Groups. Create what's next with us. Let's define tomorrow together.

Job overview and responsibilities
United Airlines is seeking talented people to join the Data and Machine Learning Engineering team. The organization is responsible for leading data-driven insights and innovation to support the machine learning needs of commercial and operational projects with a digital focus. This role will frequently collaborate with data scientists and data engineers, design and implement key components of the Machine Learning Platform and business use cases, and establish processes and best practices.

This position is offered on local terms and conditions. Expatriate assignments and sponsorship for employment visas, even on a time-limited visa status, will not be awarded. United Airlines is an equal opportunity employer. United Airlines recruits, employs, trains, compensates, and promotes regardless of race, religion, color, national origin, gender identity, sexual orientation, physical ability, age, veteran status, and other protected status as required by applicable law.

Responsibilities:
• Build high-performance, cloud-native machine learning infrastructure and services to enable rapid innovation across United
• Set up containers and a serverless platform with cloud infrastructure
• Design and develop tools and apps to enable ML automation using the AWS ecosystem
• Build data pipelines to enable ML models for batch and real-time data
• Hands-on development with Spark and Flink for both real-time and batch applications
• Support large-scale model training and serving pipelines in a distributed and scalable environment
• Stay aligned with the latest developments in cloud-native and ML ops/engineering and experiment with and learn new technologies: NumPy, data science packages like scikit-learn, microservices architecture
• Optimize and fine-tune generative AI/LLM models to improve performance and accuracy, and deploy them
• Evaluate the performance of LLM models; implement LLMOps processes to manage the end-to-end lifecycle of large language models

Qualifications

Required:
• Bachelor's degree in Computer Science, Data Science, Generative AI, Engineering, Mathematics, or a related discipline
• 6+ years of software engineering experience with languages such as Python, Go, Java, Scala, Kotlin, or C/C++
• 5+ years of experience in machine learning, deep learning, and natural language processing
• 3+ years of experience working in cloud environments (AWS preferred): Kubernetes, Docker, ECS, and EKS
• 2+ years of experience with Big Data technologies such as Spark and Flink, and SQL programming
• 3+ years of experience with cloud-native DevOps and CI/CD
• 1+ years of experience with generative AI/LLMs
• Strong software engineering experience with Python and at least one additional language such as Go, Java, or C/C++
• Familiarity with data science methodologies and frameworks (e.g., PyTorch, TensorFlow), preferably including building and deploying production ML pipelines
• Experience across the ML model life cycle; experience with common algorithms preferred

Preferred:
• Master's degree in Computer Science or a related STEM field

Posted 1 day ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Location: Hyderabad
Contract Duration: 6 months
Experience Required: 8+ years (overall), 5+ years (relevant)

🔧 Primary Skills: Python, Spark (PySpark), SQL, Delta Lake

📌 Key Responsibilities & Skills:
• Strong understanding of Spark core: RDDs, DataFrames, Datasets, Spark SQL, Spark Streaming
• Proficiency with Delta Lake features: time travel, schema evolution, data partitioning
• Experience designing and building data pipelines using Spark and Delta Lake
• Solid experience in Python/Scala/Java for Spark development
• Knowledge of data ingestion from files, APIs, and databases
• Familiarity with data validation and quality best practices
• Working knowledge of data warehouse concepts and data modeling
• Hands-on with Git for code versioning
• Exposure to CI/CD pipelines and containerization tools
• Nice to have: experience with ETL tools like DataStage, Prophecy, Informatica, or Ab Initio

Posted 1 day ago

Apply
