
6093 Scala Jobs - Page 33

Set up a job alert
JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

3.0 years

6 - 8 Lacs

Thiruvananthapuram Taluk, India

On-site

Position: Data Engineer
Experience: 3+ years
Location: Trivandrum (Hybrid)
Salary: Up to 8 LPA

Job Summary
We are seeking a highly motivated and skilled Data Engineer with 3+ years of experience to join our growing data team. In this role, you will be instrumental in designing, building, and maintaining robust, scalable, and efficient data pipelines and infrastructure. You will work closely with data scientists, analysts, and other engineering teams to ensure data availability, quality, and accessibility for various analytical and machine learning initiatives.

Key Responsibilities
Design and Development:
○ Design, develop, and optimize scalable ETL/ELT pipelines to ingest, transform, and load data from diverse sources into data warehouses/lakes (see the sketch after this listing).
○ Implement data models and schemas that support analytical and reporting requirements.
○ Build and maintain robust data APIs for data consumption by various applications and services.
Data Infrastructure:
○ Contribute to the architecture and evolution of our data platform, leveraging cloud services (AWS, Azure, GCP) or on-premise solutions.
○ Ensure data security, privacy, and compliance with relevant regulations.
○ Monitor data pipelines for performance, reliability, and data quality, implementing alerting and anomaly detection.
Collaboration & Optimization:
○ Collaborate with data scientists, business analysts, and product managers to understand data requirements and translate them into technical solutions.
○ Optimize existing data processes for efficiency, cost-effectiveness, and performance.
○ Participate in code reviews, contribute to documentation, and uphold best practices in data engineering.
Troubleshooting & Support:
○ Diagnose and resolve data-related issues, ensuring minimal disruption to data consumers.
○ Provide support and expertise to teams consuming data from the data platform.

Required Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related quantitative field.
- 3+ years of hands-on experience as a Data Engineer or in a similar role.
- Strong proficiency in at least one programming language commonly used for data engineering (e.g., Python, Java, Scala).
- Extensive experience with SQL and relational databases (e.g., PostgreSQL, MySQL, SQL Server).
- Proven experience with ETL/ELT tools and concepts.
- Experience with data warehousing concepts and technologies (e.g., Snowflake, Redshift, BigQuery, Azure Synapse, Databricks).
- Familiarity with cloud platforms (AWS, Azure, or GCP) and their data services (e.g., S3, EC2, Lambda, Glue, Data Factory, Blob Storage, BigQuery, Dataflow).
- Understanding of data modeling techniques (e.g., dimensional modeling, Kimball, Inmon).
- Experience with version control systems (e.g., Git).
- Excellent problem-solving, analytical, and communication skills.

Preferred Qualifications
- Master's degree in a relevant field.
- Experience with Apache Spark (PySpark, Scala Spark) or other big data processing frameworks.
- Familiarity with NoSQL databases (e.g., MongoDB, Cassandra).
- Experience with data streaming technologies (e.g., Kafka, Kinesis).
- Knowledge of containerization technologies (e.g., Docker, Kubernetes).
- Experience with workflow orchestration tools (e.g., Apache Airflow, Azure Data Factory, AWS Step Functions).
- Understanding of DevOps principles as applied to data pipelines.
- Prior experience in Telecom is a plus.
Skills: data streaming technologies (Kafka, Kinesis), Azure, data modeling, Apache Spark, workflow orchestration tools (Apache Airflow, Azure Data Factory, AWS Step Functions), pipelines, Apache, data engineering, Kubernetes, cloud, programming languages (Python, Java, Scala), Docker, data APIs, data warehousing, AWS, version control systems (Git), Python, cloud services (AWS, Azure, GCP), SQL, NoSQL databases (MongoDB, Cassandra), ETL/ELT pipelines
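For orientation, here is a minimal sketch of the kind of Scala Spark ETL pipeline this posting describes: extract raw CSV from a landing zone, apply typing and cleansing transforms, and load partitioned Parquet into a lake layer. The paths, column names, and schema are hypothetical illustrations, not details from the employer.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object OrdersEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("orders-etl")
      .getOrCreate()

    // Extract: read raw CSV from a landing zone (hypothetical path)
    val raw = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("s3a://landing-zone/orders/")

    // Transform: drop bad rows, enforce types, derive a partition column
    val orders = raw
      .filter(col("order_id").isNotNull)
      .withColumn("amount", col("amount").cast("decimal(12,2)"))
      .withColumn("order_date", to_date(col("order_ts")))

    // Load: write partitioned Parquet into the lake/warehouse layer
    orders.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3a://warehouse/orders/")

    spark.stop()
  }
}
```

Partitioning by a date column is a common design choice here: downstream queries that filter on `order_date` can prune partitions instead of scanning the full table.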

Posted 2 weeks ago

Apply

0 years

2 - 7 Lacs

Hyderābād

On-site

Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Gurugram, Haryana, India; Bengaluru, Karnataka, India; Hyderabad, Telangana, India.

Minimum qualifications:
- Bachelor's degree in Computer Science, Mathematics, a related technical field, or equivalent practical experience.
- Experience building Machine Learning or Data Science solutions.
- Experience writing software in Python, Scala, R, or similar.
- Experience with data structures, algorithms, and software design.
- Ability to travel up to 30% of the time.

Preferred qualifications:
- Experience working with recommendation engines, data pipelines, or distributed machine learning, as well as data analytics, data visualization techniques and software, and deep learning frameworks.
- Experience in software development, professional services, solution engineering, technical consulting, and architecting and rolling out new technology and solution initiatives.
- Experience with core Data Science techniques.
- Knowledge of data warehousing concepts, including data warehouse technical architectures, infrastructure components, ETL/ELT, and reporting/analytic tools and environments.
- Knowledge of cloud computing, including virtualization, hosted services, multi-tenant cloud infrastructures, storage systems, and content delivery networks.
- Excellent customer-facing communication and listening skills.

About the job
The Google Cloud Platform team helps customers transform and build what's next for their business — all with technology built in the cloud. Our products are developed for security, reliability, and scalability, running the full stack from infrastructure to applications to devices and hardware. Our teams are dedicated to helping our customers — developers, small and large businesses, educational institutions and government agencies — see the benefits of our technology come to life. As part of an entrepreneurial team in this rapidly growing business, you will play a key role in understanding the needs of our customers and help shape how businesses of all sizes use technology to connect with customers, employees and partners.

As a Cloud Engineer, you will play a key role in ensuring that customers have the best experience moving to the Google Cloud machine learning (ML) suite of products. You will design and implement machine learning solutions for customer use cases, leveraging core Google products. You will work with customers to identify opportunities to transform their business with machine learning, and will travel to customer sites to deploy solutions and deliver workshops designed to educate and empower customers to realize the full potential of Google Cloud. You will have access to Google's technology to monitor application performance, debug and troubleshoot product code, and address customer and partner needs. In this role, you will lead the timely execution of adopting Google Cloud Platform solutions to the customer's requirements.

Google Cloud accelerates every organization's ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google's cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.

Responsibilities
- Deliver effective big data and machine learning solutions and solve technical customer issues.
- Act as a technical advisor to Google's customers.
- Identify new product features and feature gaps, provide guidance on existing product issues, and collaborate with Product Managers and Engineers to influence the roadmap of Google Cloud Platform.
- Deliver best practice recommendations, tutorials, blog articles, and technical presentations, adapting to different levels of key business and technical stakeholders.

Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.
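The role centers on building ML solutions for customer use cases. As one hedged illustration in the Scala used elsewhere on this page (not Google's method or a Google Cloud API), here is a minimal Spark MLlib training pipeline. The training data, paths, and columns (a numeric `churned` label and two features) are hypothetical.

```scala
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.feature.{StringIndexer, VectorAssembler}
import org.apache.spark.sql.SparkSession

object ChurnModelSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("churn-model").getOrCreate()

    // Hypothetical training data: a double-typed "churned" label,
    // a categorical "plan_type", and a numeric "monthly_usage"
    val training = spark.read.parquet("s3a://ml-demo/churn/training/")

    // Encode the categorical feature, assemble a feature vector, fit a model
    val indexer = new StringIndexer().setInputCol("plan_type").setOutputCol("plan_idx")
    val assembler = new VectorAssembler()
      .setInputCols(Array("plan_idx", "monthly_usage"))
      .setOutputCol("features")
    val lr = new LogisticRegression().setLabelCol("churned").setFeaturesCol("features")

    val model = new Pipeline().setStages(Array(indexer, assembler, lr)).fit(training)
    model.transform(training).select("churned", "prediction").show(5)

    spark.stop()
  }
}
```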

Posted 2 weeks ago

Apply

8.0 - 11.0 years

35 - 37 Lacs

Kolkata, Ahmedabad, Bengaluru

Work from Office

Dear Candidate,

We are hiring a Data Analytics Engineer to design data models, build ETL pipelines, and deliver analytical solutions to support data-driven decisions.

Key Responsibilities:
- Develop and maintain data pipelines for analytics and reporting.
- Design data warehouses or data lakes to support BI tools.
- Implement data quality, validation, and governance processes.
- Collaborate with business teams to translate requirements into datasets.
- Optimize query performance for large-scale analytics.

Required Skills & Qualifications:
- Strong SQL and experience with data warehouse platforms (Snowflake, Redshift, BigQuery).
- Proficiency in Python or Scala for data processing.
- Knowledge of ETL tools (Airflow, Talend, dbt).
- Experience with BI tools (Tableau, Power BI, Looker) is a plus.
- Understanding of data modeling (star/snowflake schema, normalization); see the star-schema sketch after this listing.

Soft Skills:
- Strong troubleshooting and problem-solving skills.
- Ability to work independently and in a team.
- Excellent communication and documentation skills.

Note: If interested, please share your updated resume and your preferred time for a discussion. If shortlisted, our HR team will contact you.

Kandi Srinivasa Reddy
Delivery Manager
Integra Technologies
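To illustrate the star-schema modeling this posting asks about, here is a minimal Scala Spark SQL sketch joining a fact table to two dimensions. The table and column names are hypothetical and assume the tables are already registered in the metastore.

```scala
import org.apache.spark.sql.SparkSession

object StarSchemaQuery {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("star-schema-demo").getOrCreate()

    // A classic star-schema rollup: fact rows keyed to date and product dimensions
    spark.sql("""
      SELECT d.calendar_month,
             p.category,
             SUM(f.sales_amount) AS total_sales
      FROM   fact_sales f
      JOIN   dim_date    d ON f.date_key    = d.date_key
      JOIN   dim_product p ON f.product_key = p.product_key
      GROUP  BY d.calendar_month, p.category
      ORDER  BY d.calendar_month
    """).show()

    spark.stop()
  }
}
```

The design point of the star schema is visible here: the wide fact table carries only keys and measures, while descriptive attributes live in small dimensions, so analytic rollups are simple key joins.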

Posted 2 weeks ago

Apply

15.0 years

0 Lacs

Gurgaon

On-site

Project Role: Software Development Lead
Project Role Description: Develop and configure software systems either end-to-end or for a specific stage of the product lifecycle. Apply knowledge of technologies, applications, methodologies, processes and tools to support a client, project or entity.
Must have skills: Scala
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As a Software Development Lead, you will develop and configure software systems, either end-to-end or for specific stages of the product lifecycle. Your typical day will involve collaborating with various teams to ensure the successful implementation of software solutions, applying your knowledge of technologies and methodologies to support projects and clients effectively. You will engage in problem-solving and decision-making processes, ensuring that the software systems meet the required standards and specifications while fostering a collaborative environment among team members.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Facilitate knowledge sharing and mentoring within the team to enhance overall performance.
- Monitor project progress and ensure alignment with project goals and timelines.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in Scala.
- Strong understanding of software development methodologies and best practices.
- Experience with version control systems such as Git.
- Familiarity with cloud platforms and services.
- Ability to write clean, maintainable, and efficient code.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Scala.
- This position is based at our Gurugram office.
- A 15 years full time education is required.

Posted 2 weeks ago

Apply

8.0 years

30 - 38 Lacs

Gurgaon

Remote

Role: AWS Data Engineer
Location: Gurugram
Mode: Hybrid
Type: Permanent

Job Description:
We are seeking a talented and motivated Data Engineer with the requisite years of hands-on experience to join our growing data team. The ideal candidate will have experience working with large datasets, building data pipelines, and utilizing AWS public cloud services to support the design, development, and maintenance of scalable data architectures. This is an excellent opportunity for individuals who are passionate about data engineering and cloud technologies and want to make an impact in a dynamic and innovative environment.

Key Responsibilities:
- Data Pipeline Development: Design, develop, and optimize end-to-end data pipelines for extracting, transforming, and loading (ETL) large volumes of data from diverse sources into data warehouses or lakes.
- Cloud Infrastructure Management: Implement and manage data processing and storage solutions in AWS (Amazon Web Services) using services like S3, Redshift, Lambda, Glue, Kinesis, and others (a small Lambda handler sketch follows this listing).
- Data Modeling: Collaborate with data scientists, analysts, and business stakeholders to define data requirements and design optimal data models for reporting and analysis.
- Performance Tuning & Optimization: Identify bottlenecks and optimize query performance, pipeline processes, and cloud resources to ensure cost-effective and scalable data workflows.
- Automation & Scripting: Develop automated data workflows and scripts to improve operational efficiency using Python, SQL, or other scripting languages.
- Collaboration & Documentation: Work closely with data analysts, data scientists, and other engineering teams to ensure data availability, integrity, and quality. Document processes, architectures, and solutions clearly.
- Data Quality & Governance: Ensure the accuracy, consistency, and completeness of data. Implement and maintain data governance policies to ensure compliance and security standards are met.
- Troubleshooting & Support: Provide ongoing support for data pipelines and troubleshoot issues related to data integration, performance, and system reliability.

Qualifications

Essential Skills:
- Experience: 8+ years of professional experience as a Data Engineer, with a strong background in building and optimizing data pipelines and working with large-scale datasets.
- AWS Experience: Hands-on experience with AWS cloud services, particularly S3, Lambda, Glue, Redshift, RDS, and EC2.
- ETL Processes: Strong understanding of ETL concepts, tools, and frameworks. Experience with data integration, cleansing, and transformation.
- Programming Languages: Proficiency in Python, SQL, and other scripting languages (e.g., Bash, Scala, Java).
- Data Warehousing: Experience with relational and non-relational databases, including data warehousing solutions like AWS Redshift, Snowflake, or similar platforms.
- Data Modeling: Experience in designing data models, schema design, and data architecture for analytical systems.
- Version Control & CI/CD: Familiarity with version control tools (e.g., Git) and CI/CD pipelines.
- Problem-Solving: Strong troubleshooting skills, with an ability to optimize performance and resolve technical issues across the data pipeline.

Desirable Skills:
- Big Data Technologies: Experience with Hadoop, Spark, or other big data technologies.
- Containerization & Orchestration: Knowledge of Docker, Kubernetes, or similar containerization/orchestration technologies.
- Data Security: Experience implementing security best practices in the cloud and managing data privacy requirements.
- Data Streaming: Familiarity with data streaming technologies such as AWS Kinesis or Apache Kafka.
- Business Intelligence Tools: Experience with BI tools (Tableau, QuickSight) for visualization and reporting.
- Agile Methodology: Familiarity with Agile development practices and tools (Jira, Trello, etc.).

Job Type: Permanent
Pay: ₹3,000,000.00 - ₹3,800,000.00 per year
Benefits: Work from home
Schedule: Day shift, Monday to Friday
Experience:
- AWS Glue Catalog: 3 years (Required)
- Data Engineering: 5 years (Required)
- AWS CDK, CloudFormation, Lambda, Step Functions: 3 years (Required)
- AWS Elastic MapReduce (EMR): 3 years (Required)
Work Location: In person
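As a small, hedged illustration of the AWS Lambda work mentioned above, here is a minimal Scala handler sketch. It assumes the aws-lambda-java-core library on the classpath; the event shape, bucket/key fields, and validation rule are hypothetical, not part of the posting.

```scala
import com.amazonaws.services.lambda.runtime.{Context, RequestHandler}
import scala.jdk.CollectionConverters._

// Hypothetical handler that screens file-arrival events before a pipeline runs
class FileArrivalHandler extends RequestHandler[java.util.Map[String, String], String] {
  override def handleRequest(event: java.util.Map[String, String], context: Context): String = {
    val params = event.asScala
    val bucket = params.getOrElse("bucket", "")
    val key    = params.getOrElse("key", "")

    // Log what arrived, then accept only the file types the pipeline expects
    context.getLogger.log(s"Received object s3://$bucket/$key")
    if (key.endsWith(".parquet")) "ACCEPTED" else "SKIPPED"
  }
}
```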

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Greater Chennai Area

On-site

Your work days are brighter here.

At Workday, it all began with a conversation over breakfast. When our founders met at a sunny California diner, they came up with an idea to revolutionize the enterprise software market. And when we began to rise, one thing that really set us apart was our culture, a culture driven by our value of putting our people first. And ever since, the happiness, development, and contribution of every Workmate is central to who we are. Our Workmates believe a healthy, employee-centric, collaborative culture is the essential mix of ingredients for success in business. That's why we look after our people, communities and the planet while still being profitable. Feel encouraged to shine, however that manifests: you don't need to hide who you are. You can feel the energy and the passion; it's what makes us unique. Inspired to make a brighter work day for all and transform with us to the next stage of our growth journey? Bring your brightest version of you and have a brighter work day here.

About The Team
The Data Platform and Observability team is based in Pleasanton, CA; Boston, MA; Atlanta, GA; Dublin, Ireland; and Chennai, India. Our focus is on the development of large-scale distributed data systems to support critical Workday products and provide real-time insights across Workday's platforms, infrastructure and applications. The team provides platforms that process hundreds of terabytes of data and enable core Workday products and use cases like core HCM, Fins, AI/ML SKUs, internal data products and Observability. If you enjoy writing efficient software or tuning and scaling large distributed systems, you will enjoy working with us. Do you want to tackle exciting challenges at massive scale across private and public clouds for our 10,000+ global customers? Do you want to work with world-class engineers and facilitate the development of the next generation of distributed systems platforms? If so, we should chat.

About The Role
The Messaging, Streaming and Caching team is a full-service Distributed Systems Engineering team. We architect and provide async messaging, streaming, and NoSQL platforms and solutions that power Workday products and SKUs ranging from core HCM, Fins, Integrations, and AI/ML. We develop client libraries and SDKs that make it easy for teams to build Workday products. We develop automation to deploy and run hundreds of clusters, and we also operate and tune our clusters as well. As a team member you will play a key role in improving our services and encouraging their adoption within Workday's infrastructure, both in our private cloud and public cloud. As a member of this team you will design and build new capabilities from inception to deployment to exploit the full power of the core middleware infrastructure and services, and work hand in hand with our application and service teams!

Primary Responsibilities
- Design, build, and enhance critical distributed services, including Kafka, Redis, RabbitMQ, etc. (a minimal Kafka producer sketch follows this listing).
- Design, develop, build, deploy and maintain core distributed services using a combination of open-source and proprietary stacks across diverse infrastructure environments (Kubernetes, OpenStack, Bare Metal, etc.).
- Design and develop core software modules for streaming, messaging and caching.
- Construct observability modules, alerts and automation for dashboard lifecycle management for the distributed services.
- Build, deploy and operate infrastructure components in production environments.
- Champion all aspects of streaming, messaging and caching with a focus on resiliency and operational excellence.
- Evaluate and implement new open-source and cloud-native tools and technologies as needed.
- Participate in the on-call rotation to support the distributed systems platforms.
- Manage and optimize Workday distributed services in AWS, GCP and private cloud environments.

About You
You are a senior software engineer with a distributed systems background and significant experience in distributed systems products like Kafka, Redis, RabbitMQ or Zookeeper. You have independently led product features and deployed on large-scale NoSQL clusters.

Basic Qualifications
- 4-12 years of software engineering experience using one or more of the following: Java/Scala, Golang.
- 4+ years of distributed systems experience.
- 3+ years of development and DevOps experience in designing and operating large-scale deployments of distributed NoSQL and messaging systems.
- 1+ year of leading a NoSQL-technology product right from conception to deployment and maintenance.

Preferred Qualifications
- A consistent track record of technical project leadership and success involving collaborators and interested partners across the enterprise.
- Expertise in developing distributed system software and deployments that perform well and degrade gracefully under excessive load.
- Hands-on experience with one or more distributed systems technologies like Kafka/RabbitMQ, Redis, Cassandra.
- Experience learning complex open-source service internals via code inspection.
- Extensive experience with modern software development tools including CI/CD and methodologies like Agile.
- Expertise with configuration management using Chef and service deployment on Kubernetes via Helm and ArgoCD.
- Experience with Linux system internals and tuning.
- Experience with distributed system performance analysis and optimization.
- Strong written and oral communication skills and the ability to explain esoteric technical details clearly to engineers without a similar background.

Pursuant to applicable Fair Chance law, Workday will consider for employment qualified applicants with arrest and conviction records. Workday is an Equal Opportunity Employer including individuals with disabilities and protected veterans. Are you being referred to one of our roles? If so, ask your connection at Workday about our Employee Referral process!
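For context on the messaging work described above, here is a minimal Scala sketch using the open-source Apache Kafka client (not Workday's internal stack). The broker address, topic name, and payload are hypothetical.

```scala
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

object EventProducerSketch {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put("bootstrap.servers", "localhost:9092") // hypothetical broker
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    props.put("acks", "all") // wait for full in-sync-replica acknowledgement for durability

    val producer = new KafkaProducer[String, String](props)
    try {
      // Keying by user keeps all of a user's events in one partition, preserving order
      val record = new ProducerRecord[String, String]("workmate-events", "user-42", """{"action":"login"}""")
      val metadata = producer.send(record).get() // synchronous send, fine for a sketch
      println(s"wrote to ${metadata.topic()}-${metadata.partition()} @ offset ${metadata.offset()}")
    } finally {
      producer.close()
    }
  }
}
```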

Posted 2 weeks ago

Apply

0 years

3 - 8 Lacs

Gurgaon

On-site

Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Gurugram, Haryana, India; Bengaluru, Karnataka, India; Hyderabad, Telangana, India.

Minimum qualifications:
- Bachelor's degree in Computer Science, Mathematics, a related technical field, or equivalent practical experience.
- Experience building Machine Learning or Data Science solutions.
- Experience writing software in Python, Scala, R, or similar.
- Experience with data structures, algorithms, and software design.
- Ability to travel up to 30% of the time.

Preferred qualifications:
- Experience working with recommendation engines, data pipelines, or distributed machine learning, as well as data analytics, data visualization techniques and software, and deep learning frameworks.
- Experience in software development, professional services, solution engineering, technical consulting, and architecting and rolling out new technology and solution initiatives.
- Experience with core Data Science techniques.
- Knowledge of data warehousing concepts, including data warehouse technical architectures, infrastructure components, ETL/ELT, and reporting/analytic tools and environments.
- Knowledge of cloud computing, including virtualization, hosted services, multi-tenant cloud infrastructures, storage systems, and content delivery networks.
- Excellent customer-facing communication and listening skills.

About the job
The Google Cloud Platform team helps customers transform and build what's next for their business — all with technology built in the cloud. Our products are developed for security, reliability, and scalability, running the full stack from infrastructure to applications to devices and hardware. Our teams are dedicated to helping our customers — developers, small and large businesses, educational institutions and government agencies — see the benefits of our technology come to life. As part of an entrepreneurial team in this rapidly growing business, you will play a key role in understanding the needs of our customers and help shape how businesses of all sizes use technology to connect with customers, employees and partners.

As a Cloud Engineer, you will play a key role in ensuring that customers have the best experience moving to the Google Cloud machine learning (ML) suite of products. You will design and implement machine learning solutions for customer use cases, leveraging core Google products. You will work with customers to identify opportunities to transform their business with machine learning, and will travel to customer sites to deploy solutions and deliver workshops designed to educate and empower customers to realize the full potential of Google Cloud. You will have access to Google's technology to monitor application performance, debug and troubleshoot product code, and address customer and partner needs. In this role, you will lead the timely execution of adopting Google Cloud Platform solutions to the customer's requirements.

Google Cloud accelerates every organization's ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google's cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.

Responsibilities
- Deliver effective big data and machine learning solutions and solve technical customer issues.
- Act as a technical advisor to Google's customers.
- Identify new product features and feature gaps, provide guidance on existing product issues, and collaborate with Product Managers and Engineers to influence the roadmap of Google Cloud Platform.
- Deliver best practice recommendations, tutorials, blog articles, and technical presentations, adapting to different levels of key business and technical stakeholders.

Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.

Posted 2 weeks ago

Apply

7.0 - 9.0 years

8 - 14 Lacs

Surat

Work from Office

Job Summary:
We are looking for a seasoned Tech Anchor with deep expertise in Big Data technologies and Python to lead technical design, development, and mentoring across data-driven projects. This role demands a strong grasp of scalable data architecture, problem-solving capabilities, and hands-on experience with distributed systems and modern data frameworks.

Key Responsibilities:
- Provide technical leadership across Big Data and Python-based projects
- Architect, design, and implement scalable data pipelines and processing systems (see the pipeline sketch after this listing)
- Guide teams on best practices in data modeling, ETL/ELT development, and performance optimization
- Collaborate with data scientists, analysts, and stakeholders to ensure effective data solutions
- Conduct code reviews and mentor junior engineers to improve code quality and skills
- Evaluate and implement new tools and frameworks to enhance data capabilities
- Troubleshoot complex data-related issues and support production deployments
- Ensure compliance with data security and governance standards
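As a hedged illustration of the pipeline work this role anchors, here is a minimal Scala Spark sketch that adds a simple data-quality gate before an aggregation, touching on both the pipeline and governance responsibilities. Paths, columns, and the 1% threshold are hypothetical.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object QualityGuardedPipeline {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("quality-guarded-pipeline").getOrCreate()

    val events = spark.read.parquet("s3a://datalake/raw/events/") // hypothetical input

    // Data-quality gate: fail fast if too many rows miss a mandatory key
    val total   = events.count()
    val missing = events.filter(col("user_id").isNull).count()
    require(total == 0 || missing.toDouble / total < 0.01,
      s"Data-quality gate failed: $missing of $total rows lack user_id")

    // Downstream aggregation only runs on data that passed the gate
    events.groupBy("event_type")
      .agg(count("*").as("events"), countDistinct("user_id").as("users"))
      .write.mode("overwrite").parquet("s3a://datalake/curated/event_stats/")

    spark.stop()
  }
}
```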

Posted 2 weeks ago

Apply

7.0 - 9.0 years

8 - 14 Lacs

Varanasi

Work from Office

Job Summary:
We are looking for a seasoned Tech Anchor with deep expertise in Big Data technologies and Python to lead technical design, development, and mentoring across data-driven projects. This role demands a strong grasp of scalable data architecture, problem-solving capabilities, and hands-on experience with distributed systems and modern data frameworks.

Key Responsibilities:
- Provide technical leadership across Big Data and Python-based projects
- Architect, design, and implement scalable data pipelines and processing systems
- Guide teams on best practices in data modeling, ETL/ELT development, and performance optimization
- Collaborate with data scientists, analysts, and stakeholders to ensure effective data solutions
- Conduct code reviews and mentor junior engineers to improve code quality and skills
- Evaluate and implement new tools and frameworks to enhance data capabilities
- Troubleshoot complex data-related issues and support production deployments
- Ensure compliance with data security and governance standards

Posted 2 weeks ago

Apply

7.0 - 9.0 years

8 - 14 Lacs

Visakhapatnam

Work from Office

Job Summary:
We are looking for a seasoned Tech Anchor with deep expertise in Big Data technologies and Python to lead technical design, development, and mentoring across data-driven projects. This role demands a strong grasp of scalable data architecture, problem-solving capabilities, and hands-on experience with distributed systems and modern data frameworks.

Key Responsibilities:
- Provide technical leadership across Big Data and Python-based projects
- Architect, design, and implement scalable data pipelines and processing systems
- Guide teams on best practices in data modeling, ETL/ELT development, and performance optimization
- Collaborate with data scientists, analysts, and stakeholders to ensure effective data solutions
- Conduct code reviews and mentor junior engineers to improve code quality and skills
- Evaluate and implement new tools and frameworks to enhance data capabilities
- Troubleshoot complex data-related issues and support production deployments
- Ensure compliance with data security and governance standards

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description

YOU’LL BUILD TECH THAT EMPOWERS GLOBAL BUSINESSES

Our Connect Technology teams are working on our new Connect platform, a unified, global, open data ecosystem powered by Microsoft Azure. Our clients around the world rely on Connect data and insights to innovate and grow. As a Senior Data Engineer, you’ll be part of a team of smart, highly skilled technologists who are passionate about learning and supporting cutting-edge technologies such as Spark, Scala, PySpark, Databricks, Airflow, SQL, Docker, Kubernetes, and other data engineering tools. These technologies are deployed using DevOps pipelines leveraging Azure, Kubernetes, Jenkins and Bitbucket/GitHub.

WHAT YOU’LL DO:
- Develop, test, troubleshoot, debug, and make application enhancements leveraging Spark, PySpark, Scala, Pandas, Databricks, Airflow, and SQL as the core development technologies
- Deploy application components using CI/CD pipelines
- Build utilities for monitoring and automating repetitive functions
- Collaborate with Agile cross-functional teams - internal and external clients including Operations, Infrastructure, Tech Ops
- Collaborate with the Data Science team to productionize ML models
- Participate in a rotational support schedule to provide responses to customer queries and deploy bug fixes in a timely and accurate manner

Qualifications
- 3-6 years of applicable software engineering experience
- Strong fundamentals with experience in Big Data technologies: Spark, PySpark, Scala, Pandas, Databricks, Airflow, SQL
- Must have experience in cloud technologies, preferably Microsoft Azure
- Must have experience in performance optimization of Spark workloads (see the broadcast-join sketch after this listing)
- Good to have experience with DevOps technologies such as GitHub, Kubernetes, Jenkins, Docker
- Good to have knowledge of relational databases, preferably PostgreSQL
- Excellent English communication skills, with the ability to effectively interface across cross-functional technology teams and the business
- Minimum B.S. degree in Computer Science, Computer Engineering or related field

Additional Information

Our Benefits
- Flexible working environment
- Volunteer time off
- LinkedIn Learning
- Employee Assistance Program (EAP)

About NIQ
NIQ is the world’s leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world’s population. For more information, visit NIQ.com

Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook

Our commitment to Diversity, Equity, and Inclusion
NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
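One common Spark workload optimization this posting asks about is replacing a shuffle join with a broadcast join when one side is small. A minimal Scala sketch follows; the ABFS paths, table shapes, and join key are hypothetical.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.broadcast

object BroadcastJoinTuning {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("broadcast-join-demo").getOrCreate()

    // Large fact table and a small dimension table (hypothetical Azure paths)
    val transactions = spark.read.parquet("abfss://lake@account.dfs.core.windows.net/transactions/")
    val stores       = spark.read.parquet("abfss://lake@account.dfs.core.windows.net/stores/")

    // Broadcasting the small side ships it to every executor,
    // so the large fact table is never shuffled across the cluster
    val enriched = transactions.join(broadcast(stores), Seq("store_id"))

    enriched.groupBy("region").count().show()
    spark.stop()
  }
}
```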

Posted 2 weeks ago

Apply

2.0 - 5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Summary
From the newest ideas in cluster computing to the latest web framework, NetApp software products embrace innovation to deliver compelling solutions to our business. Be part of a team where your ideas can make a difference and where you’ll be part of a collaborative, open-minded culture.

NetApp is looking for an Engineer to join our BlueXP software and application development team. BlueXP is our unified console and API namespace that offers a seamless experience across all our storage and data solutions. It is a unified control plane that provides global visibility and operational simplicity of storage and data services across on-premises and cloud environments. This is a great opportunity to work on a high-powered team delivering an industry-changing product within an extremely high-growth sector of the tech industry.

Job Requirements
- Programming skills in NodeJS, Java/Scala/Go with an understanding of OOP, as well as scripting languages (Python/shell script).
- Experience with REST APIs (a minimal client sketch follows this listing).
- Experience working on the Linux platform.
- Understanding of concepts related to data structures and operating system fundamentals.
- Strong aptitude for learning new technologies.
- Strong verbal and written communication skills, including presentation skills to engage any audience.
- Creative and analytical approach to problem solving.
- Programming skills with multi-threading, complex algorithms and problem solving.
- Familiarity with Docker, Kubernetes and cloud technologies.

Essential Functions:
- A major part of your responsibility will be to use up-to-date technologies to complete projects as part of the development cycle, including coding, design, development, debugging and testing.
- Participate in technical discussions within the team or other groups for evaluating and executing design and development plans for the product.

Education
We are seeking candidates pursuing a master’s degree in Computer Science, Computer Engineering, Electrical/Electronic Engineering, Information Systems or an equivalent degree, with 2-5 years of experience preferred.

At NetApp, we embrace a hybrid working environment designed to strengthen connection, collaboration, and culture for all employees. This means that most roles will have some level of in-office and/or in-person expectations, which will be shared during the recruitment process.

Equal Opportunity Employer
NetApp is firmly committed to Equal Employment Opportunity (EEO) and to compliance with all laws that prohibit employment discrimination based on age, race, color, gender, sexual orientation, gender identity, national origin, religion, disability or genetic information, pregnancy, and any protected classification.

Why NetApp?
We are all about helping customers turn challenges into business opportunity. It starts with bringing new thinking to age-old problems, like how to use data most effectively to run better - but also to innovate. We tailor our approach to the customer's unique needs with a combination of fresh thinking and proven approaches. We enable a healthy work-life balance. Our volunteer time off program is best in class, offering employees 40 hours of paid time off each year to volunteer with their favourite organizations. We provide comprehensive benefits, including health care, life and accident plans, emotional support resources for you and your family, legal services, and financial savings programs to help you plan for your future. We support professional and personal growth through educational assistance and provide access to various discounts and perks to enhance your overall quality of life. If you want to help us build knowledge and solve big problems, let's talk.
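For the REST API experience mentioned above, here is a minimal Scala sketch of a JSON GET using only the Java 11+ standard-library HTTP client. The endpoint URL is a hypothetical placeholder, not a BlueXP API.

```scala
import java.net.URI
import java.net.http.{HttpClient, HttpRequest, HttpResponse}

object RestClientSketch {
  def main(args: Array[String]): Unit = {
    val client = HttpClient.newHttpClient()

    // Hypothetical endpoint; a real service would also require auth headers
    val request = HttpRequest.newBuilder()
      .uri(URI.create("https://api.example.com/v1/volumes"))
      .header("Accept", "application/json")
      .GET()
      .build()

    val response = client.send(request, HttpResponse.BodyHandlers.ofString())
    println(s"status=${response.statusCode()} body=${response.body().take(200)}")
  }
}
```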

Posted 2 weeks ago

Apply

8.0 - 10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Position Summary
MetLife established a Global Capability Center (MGCC) in India to scale and mature Data & Analytics and technology capabilities in a cost-effective manner and make MetLife future ready. The center is integral to Global Technology and Operations, with a focus to protect and build MetLife IP, promote reusability and drive experimentation and innovation. The Data & Analytics team in India mirrors the global D&A team with an objective to drive business value through trusted data, scaled capabilities, and actionable insights.

Role Value Proposition
MetLife Global Capability Center (MGCC) is looking for a Senior Cloud Data Engineer who has the responsibility of building ETL/ELT, data warehousing and reusable components using Azure, Databricks and Spark. He/she will collaborate with the business systems analyst, technical leads, project managers and business/operations teams in building data enablement solutions across different LOBs and use cases.

Job Responsibilities
- Collect, store, process and analyze large datasets to build and implement extract, transform, load (ETL) processes
- Develop metadata- and configuration-based reusable frameworks to reduce the development effort (a small config-driven sketch follows this listing)
- Develop quality code with integral performance optimizations in place right at the development stage
- Collaborate with the global team in driving the delivery of projects and recommend development and performance improvements
- Extensive experience of various database types and knowledge to leverage the right one for the need
- Strong understanding of data tools and ability to leverage them to understand the data and generate insights
- Hands-on experience in building/designing at-scale data lakes, data warehouses and data stores for analytics consumption, on-prem and cloud (real-time as well as batch use cases)
- Ability to interact with business analysts and functional analysts in getting the requirements and implementing the ETL solutions

Education, Technical Skills & Other Critical Requirements

Education: Bachelor’s degree in computer science, Engineering, or related discipline
Experience: 8 to 10 years of working experience on Azure Cloud using Databricks or Synapse

Technical Skills
- Experience in transforming data using Python, Spark or Scala
- Technical depth in Cloud Architecture Framework, Lakehouse Architecture and OneLake solutions
- Experience in implementing data ingestion and curation processes using Azure with tools such as Azure Data Factory, Databricks Workflows, Azure Synapse, Cosmos DB, Spark (Scala/Python) and Databricks
- Experience in cloud-optimized code on Azure using Databricks, Synapse dedicated SQL pools and serverless pools, and Cosmos/SQL API loading and consumption optimizations
- Scripting experience, primarily on shell/bash/PowerShell, would be desirable
- Experience in writing SQL and performing data analysis for data anomaly detection and data quality assurance
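As a hedged sketch of the "metadata- and configuration-based reusable framework" idea above: one generic Scala loader driven by a config record, so adding a feed is a metadata change rather than new code. The case class, paths, and the Delta output format (which assumes the Delta Lake library available, e.g. on Databricks) are illustrative assumptions.

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}

// Hypothetical metadata describing one ingestion source
final case class SourceConfig(name: String, format: String, path: String, target: String)

object ConfigDrivenIngestion {
  // One generic reader/writer handles every configured source
  def ingest(spark: SparkSession, cfg: SourceConfig): Unit = {
    val df: DataFrame = spark.read.format(cfg.format).option("header", "true").load(cfg.path)
    df.write.mode("append").format("delta").save(cfg.target)
  }

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("config-driven-ingestion").getOrCreate()

    // In a real framework this table would come from a metadata store, not code
    val sources = Seq(
      SourceConfig("claims",   "csv",     "abfss://raw@acct.dfs.core.windows.net/claims/",   "abfss://curated@acct.dfs.core.windows.net/claims/"),
      SourceConfig("policies", "parquet", "abfss://raw@acct.dfs.core.windows.net/policies/", "abfss://curated@acct.dfs.core.windows.net/policies/")
    )
    sources.foreach(ingest(spark, _))
    spark.stop()
  }
}
```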

Posted 2 weeks ago

Apply

15.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Reference # 320433BR
Job Type: Full Time

Your role
Are you an analytic thinker? Do you enjoy creating valuable insights with data? Do you want to play a key role in transforming our firm into an agile organization? At UBS, we re-imagine the way we work, the way we connect with each other – our colleagues, clients and partners – and the way we deliver value. Being agile will make us more responsive, more adaptable, and ultimately more innovative.

We’re looking for a Data Engineer to:
- transform data into valuable insights that inform business decisions, making use of our internal data platforms and applying appropriate analytical techniques
- engineer reliable data pipelines for sourcing, processing, distributing, and storing data in different ways, using data platform infrastructure effectively
- develop, train, and apply machine-learning models to make better predictions, automate manual processes, and solve challenging business problems
- ensure the quality, security, reliability, and compliance of our solutions by applying our digital principles and implementing both functional and non-functional requirements
- build observability into our solutions, monitor production health, help to resolve incidents, and remediate the root cause of risks and issues
- understand, represent, and advocate for client needs

Your team
The WMA Data Foundational Platforms & Services Crew is the fuel for the WMA CDIO, providing the foundational, disruptive, and modern platforms and technologies. Our mission is rooted in the value proposition of a shared, foundational platform across UBS to get the most business value.

Your expertise
- bachelor’s degree in Computer Science, Engineering, or a related field
- 15+ years of experience, with strong proficiency in Azure cloud services related to data and analytics (Azure SQL, Data Lake, Data Factory, Databricks, etc.)
- experience with SQL and data modeling, as well as familiarity with NoSQL databases
- proficiency in programming languages such as Python or Scala
- knowledge of data warehousing and data lake concepts and technologies

About Us
UBS is the world’s largest and the only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors. We have a presence in all major financial centers in more than 50 countries.

How We Hire
We may request you to complete one or more assessments during the application process. Learn more

Join us
At UBS, we know that it's our people, with their diverse skills, experiences and backgrounds, who drive our ongoing success. We’re dedicated to our craft and passionate about putting our people first, with new challenges, a supportive team, opportunities to grow and flexible working options when possible. Our inclusive culture brings out the best in our employees, wherever they are on their career journey. We also recognize that great work is never done alone. That’s why collaboration is at the heart of everything we do. Because together, we’re more than ourselves. We’re committed to disability inclusion and if you need reasonable accommodation/adjustments throughout our recruitment process, you can always contact us.

Disclaimer / Policy Statements
UBS is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce.

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Kochi, Kerala, India

On-site

Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role And Responsibilities
- Graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field.
- 7+ years of total experience in Data Engineering projects and 4+ years of relevant experience with Azure technology services and Python.
- Azure: Azure Data Factory, ADLS (Azure Data Lake Store), Azure Databricks.
- Mandatory programming languages: PySpark, PL/SQL, Spark SQL.
- Database: SQL DB.
- Experience with Azure: ADLS, Databricks, Stream Analytics, SQL DW, Cosmos DB, Analysis Services, Azure Functions, Serverless Architecture, ARM Templates.
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
- Experience with object-oriented/object function scripting languages: Python, SQL, Scala, Spark-SQL, etc.
- Data warehousing experience with strong domain knowledge.

Preferred Education: Master's Degree

Required Technical And Professional Expertise
- Intuitive individual with an ability to manage change and proven time management.
- Proven interpersonal skills while contributing to team effort by accomplishing related results as needed.
- Up-to-date technical knowledge by attending educational workshops and reviewing publications.

Preferred Technical And Professional Experience
- Experience with Azure: ADLS, Databricks, Stream Analytics, SQL DW, Cosmos DB, Analysis Services, Azure Functions, Serverless Architecture, ARM Templates.
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
- Experience with object-oriented/object function scripting languages: Python, SQL, Scala, Spark-SQL, etc.

Posted 2 weeks ago

Apply

2.0 years

0 Lacs

Pune, Maharashtra, India

Remote

ZS is a place where passion changes lives. As a management consulting and technology firm focused on improving life and how we live it, our most valuable asset is our people. Here you’ll work side-by-side with a powerful collective of thinkers and experts shaping life-changing solutions for patients, caregivers and consumers, worldwide. ZSers drive impact by bringing a client-first mentality to each and every engagement. We partner collaboratively with our clients to develop custom solutions and technology products that create value and deliver company results across critical areas of their business. Bring your curiosity for learning, bold ideas, courage and passion to drive life-changing impact to ZS.

Our most valuable asset is our people. At ZS we honor the visible and invisible elements of our identities, personal experiences and belief systems—the ones that comprise us as individuals, shape who we are and make us unique. We believe your personal interests, identities, and desire to learn are part of your success here. Learn more about our diversity, equity, and inclusion efforts and the networks ZS supports to assist our ZSers in cultivating community spaces, obtaining the resources they need to thrive, and sharing the messages they are passionate about.

What You’ll Do
- Collaborate with client-facing teams to understand solution context and contribute to technical requirement gathering and analysis
- Design and implement technical features leveraging best practices for the technology stack being used
- Work with technical architects on the team to validate design and implementation approach
- Write production-ready code that is easily testable, understood by other developers and accounts for edge cases and errors
- Ensure the highest quality of deliverables by following architecture/design guidelines, coding best practices, and periodic design/code reviews
- Write unit tests as well as higher-level tests to handle expected edge cases and errors gracefully, as well as happy paths
- Use bug tracking, code review, version control and other tools to organize and deliver work
- Participate in scrum calls and agile ceremonies, and effectively communicate work progress, issues and dependencies
- Consistently contribute to researching and evaluating the latest technologies through rapid learning, conducting proof-of-concepts and creating prototype solutions

What You’ll Bring
- 2+ years of relevant hands-on experience
- A CS foundation is a must
- Strong command of a distributed computing framework like Spark (preferred) or others
- Strong analytical and problem-solving skills
- Ability to quickly learn and become hands-on with new technology and be innovative in creating solutions
- Strength in at least one programming language - Python or Java, Scala, etc. - and programming basics (data structures)
- Hands-on experience in building modules for data management solutions such as data pipelines, orchestration, and ingestion patterns (batch, real time)
- Experience in designing and implementing solutions on distributed computing and cloud services platforms (including but not limited to AWS, Azure, GCP)
- Good understanding of RDBMS; some experience with ETL is preferred

Additional Skills:
- Understanding of DevOps, CI/CD, and data security; experience designing on the AWS cloud platform
- AWS Solutions Architect certification with understanding of the broader AWS stack
- Knowledge of data modeling and data warehouse concepts
- Willingness to travel to other global offices as needed to work with clients or other internal project teams

Perks & Benefits:
ZS offers a comprehensive total rewards package including health and well-being, financial planning, annual leave, personal growth and professional development. Our robust skills development programs, multiple career progression options and internal mobility paths and collaborative culture empower you to thrive as an individual and global team member. We are committed to giving our employees a flexible and connected way of working. A flexible and connected ZS allows us to combine work from home and on-site presence at clients/ZS offices for the majority of our week. The magic of ZS culture and innovation thrives in both planned and spontaneous face-to-face connections.

Travel:
Travel is a requirement at ZS for client-facing ZSers; business needs of your project and client are the priority. While some projects may be local, all client-facing ZSers should be prepared to travel as needed. Travel provides opportunities to strengthen client relationships, gain diverse experiences, and enhance professional growth by working in different environments and cultures.

Considering applying?
At ZS, we're building a diverse and inclusive company where people bring their passions to inspire life-changing impact and deliver better outcomes for all. We are most interested in finding the best candidate for the job and recognize the value that candidates with all backgrounds, including non-traditional ones, bring. If you are interested in joining us, we encourage you to apply even if you don't meet 100% of the requirements listed above. ZS is an equal opportunity employer and is committed to providing equal employment and advancement opportunities without regard to any class protected by applicable law.

To Complete Your Application:
Candidates must possess or be able to obtain work authorization for their intended country of employment. An online application, including a full set of transcripts (official or unofficial), is required to be considered.

NO AGENCY CALLS, PLEASE.

Find Out More At: www.zs.com

Posted 2 weeks ago

Apply

7.0 - 9.0 years

8 - 14 Lacs

Ludhiana

Work from Office

Job Summary:
We are looking for a seasoned Tech Anchor with deep expertise in Big Data technologies and Python to lead technical design, development, and mentoring across data-driven projects. This role demands a strong grasp of scalable data architecture, problem-solving capabilities, and hands-on experience with distributed systems and modern data frameworks.

Key Responsibilities:
- Provide technical leadership across Big Data and Python-based projects
- Architect, design, and implement scalable data pipelines and processing systems
- Guide teams on best practices in data modeling, ETL/ELT development, and performance optimization
- Collaborate with data scientists, analysts, and stakeholders to ensure effective data solutions
- Conduct code reviews and mentor junior engineers to improve code quality and skills
- Evaluate and implement new tools and frameworks to enhance data capabilities
- Troubleshoot complex data-related issues and support production deployments
- Ensure compliance with data security and governance standards

Posted 2 weeks ago

Apply

7.0 - 9.0 years

8 - 14 Lacs

Lucknow

Work from Office

Job Summary:
We are looking for a seasoned Tech Anchor with deep expertise in Big Data technologies and Python to lead technical design, development, and mentoring across data-driven projects. This role demands a strong grasp of scalable data architecture, problem-solving capabilities, and hands-on experience with distributed systems and modern data frameworks.

Key Responsibilities:
- Provide technical leadership across Big Data and Python-based projects
- Architect, design, and implement scalable data pipelines and processing systems
- Guide teams on best practices in data modeling, ETL/ELT development, and performance optimization
- Collaborate with data scientists, analysts, and stakeholders to ensure effective data solutions
- Conduct code reviews and mentor junior engineers to improve code quality and skills
- Evaluate and implement new tools and frameworks to enhance data capabilities
- Troubleshoot complex data-related issues and support production deployments
- Ensure compliance with data security and governance standards

Posted 2 weeks ago

Apply

12.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Project Role: Data Architect
Project Role Description: Define the data requirements and structure for the application. Model and design the application data structure, storage and integration.
Must have skills: Microsoft Azure Analytics Services
Good to have skills: NA
Minimum 12 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As a Data Architect, you will define the data requirements and structure for the application. Your typical day will involve modeling and designing the application data structure, storage, and integration, ensuring that the architecture aligns with business needs and technical specifications. You will collaborate with various teams to ensure that data flows seamlessly and efficiently throughout the organization, while also addressing any challenges that arise in the data management process. Your role will be pivotal in shaping the data landscape of the organization, enabling informed decision-making and strategic planning.

Roles & Responsibilities:
A. Function as the Lead Data Architect for a small, simple project/proposal, or as a team lead for a medium/large-sized project or proposal
B. Discuss specific Big Data architecture and related issues with the client architect/team (in area of expertise)
C. Analyze and assess the impact of the requirements on the data and its lifecycle
D. Lead Big Data architecture and design of medium-to-large cloud-based, Big Data and analytical solutions using the Lambda architecture (see the streaming sketch after this listing)
E. Breadth of experience in various client scenarios and situations
F. Experienced in Big Data architecture-based sales and delivery
G. Thought leadership and innovation
H. Lead creation of new data assets and offerings
I. Experience in handling OLTP and OLAP data workloads

Professional & Technical Skills:
A. Strong experience in Azure is preferred, with hands-on experience in two or more of these skills: Azure Synapse Analytics, Azure HDInsight, Azure Databricks with PySpark/Scala/SparkSQL, Azure Analysis Services
B. Experience in one or more real-time/streaming technologies, including Azure Stream Analytics, Azure Data Explorer, Azure Time Series Insights, etc.
C. Experience in handling medium to large Big Data implementations
D. Candidate must have around 5 years of extensive Big Data experience
E. Candidate must have 15 years of IT experience and around 5 years of extensive Big Data experience (design + build)

Additional Information:
A. Should be able to drive technology design meetings and propose technology design and architecture
B. Should have excellent client communication skills
C. Should have good analytical and problem-solving skills
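In a Lambda architecture, the speed layer processes events as they arrive while the batch layer recomputes complete views. Here is a minimal Scala sketch of a speed layer using Spark Structured Streaming; it assumes the spark-sql-kafka connector on the classpath, and the broker, topic, and window settings are hypothetical.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object SpeedLayerSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("speed-layer").getOrCreate()

    // Speed layer: consume events from a hypothetical Kafka topic
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "telemetry")
      .load()
      .select(col("timestamp"), col("value").cast("string").as("payload"))

    // Windowed counts feed the real-time serving view; the watermark
    // bounds how long late events are accepted
    val counts = events
      .withWatermark("timestamp", "10 minutes")
      .groupBy(window(col("timestamp"), "1 minute"))
      .count()

    counts.writeStream
      .outputMode("update")
      .format("console") // a real system would write to a serving store
      .start()
      .awaitTermination()
  }
}
```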

Posted 2 weeks ago

Apply

15.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Project Role: Data Architect
Project Role Description: Define the data requirements and structure for the application. Model and design the application data structure, storage, and integration.
Must have skills: Microsoft Azure Analytics Services
Good to have skills: NA
Minimum 15 year(s) of experience is required
Educational Qualification: 15 years full-time education

Summary:
As a Data Architect, you will define the data requirements and structure for the application. Your typical day will involve modeling and designing the application data structure, storage, and integration, ensuring that the architecture aligns with business needs and technical specifications. You will collaborate with various teams to ensure that data flows seamlessly and efficiently throughout the organization, while also addressing any challenges that arise in the data management process. Your role will be pivotal in shaping the data landscape of the organization, enabling informed decision-making and strategic planning.

Roles & Responsibilities:
A. Function as the Lead Data Architect for a small, simple project/proposal, or as a team lead for a medium/large project or proposal
B. Discuss specific Big Data architecture and related issues with the client architect/team (in area of expertise)
C. Analyze and assess the impact of the requirements on the data and its lifecycle
D. Lead Big Data architecture and design of medium-to-large cloud-based Big Data and analytical solutions using the Lambda architecture
E. Bring breadth of experience across varied client scenarios and situations
F. Experience in Big Data architecture-based sales and delivery
G. Provide thought leadership and innovation
H. Lead the creation of new data assets and offerings
I. Experience in handling OLTP and OLAP data workloads

Professional & Technical Skills:
A. Strong experience in Azure is preferred, with hands-on experience in two or more of: Azure Synapse Analytics, Azure HDInsight, Azure Databricks with PySpark/Scala/SparkSQL, Azure Analysis Services
B. Experience in one or more real-time/streaming technologies, including Azure Stream Analytics, Azure Data Explorer, Azure Time Series Insights, etc.
C. Experience in handling medium to large Big Data implementations
D. Candidate must have 15 years of IT experience, including around 5 years of extensive Big Data (design + build) experience

Additional Information:
A. Should be able to drive technology design meetings and propose technology design and architecture
B. Should have excellent client communication skills
C. Should have good analytical and problem-solving skills

Posted 2 weeks ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

At eBay, we're more than a global ecommerce leader: we're changing the way the world shops and sells. Our platform empowers millions of buyers and sellers in more than 190 markets around the world. We're committed to pushing boundaries and leaving our mark as we reinvent the future of ecommerce for enthusiasts. Our customers are our compass, authenticity thrives, bold ideas are welcome, and everyone can bring their unique selves to work, every day. We're in this together, sustaining the future of our customers, our company, and our planet. Join a team of passionate thinkers, innovators, and dreamers, and help us connect people and build communities to create economic opportunity for all.

Machine Learning Engineer (T25), Product Knowledge

Do you love Big Data? Deploying Machine Learning models? Challenging optimization problems? Knowledgeable, collaborative co-workers? Come work at eBay and help us redefine global, online commerce!

Who Are We?
The Product Knowledge team is at the epicenter of eBay's tech-driven, customer-centric overhaul. Our team is entrusted with creating and using eBay's Product Knowledge: a vast Big Data system built from listings, transactions, products, knowledge graphs, and more. Our team has a mix of highly proficient people from multiple fields such as Machine Learning, Data Science, Software Engineering, Operations, and Big Data Analytics. We have a strong culture of collaboration, and plenty of opportunity to learn, make an impact, and grow!

What Will You Do
We are looking for exceptional Engineers who take pride in creating simple solutions to apparently complex problems. Our engineering tasks typically involve at least one of the following:
Building a pipeline that processes up to billions of items, frequently employing ML models on these datasets
Creating services that provide Search or other Information Retrieval capabilities at low latency on datasets of hundreds of millions of items
Crafting sound API design and driving integration between our data layers and customer-facing applications and components
Designing and running A/B tests in production experiences in order to vet and measure the impact of any new or improved functionality

If you love a good challenge and are good at handling complexity, we'd love to hear from you!

eBay is an amazing company to work for. Being on the team, you can expect to benefit from:
A competitive salary, including stock grants and a yearly bonus
A healthy work culture that promotes business impact while highly valuing your personal well-being
Being part of a force for good in this world: eBay truly cares about its employees, its customers, and the world's population, and takes every opportunity to make this clearly apparent

Job Responsibilities
Design, deliver, and maintain significant features in data pipelines, ML processing, and/or service infrastructure
Optimize software performance to achieve the required throughput and/or latency
Work with your manager, peers, and Product Managers to scope projects and features
Come up with a sound technical strategy, taking into consideration the project goals, timelines, and expected impact
Take point on some cross-team efforts, taking ownership of a business problem and ensuring the different teams stay in sync and work towards a coherent technical solution
Take an active part in knowledge sharing across the organization, both teaching and learning from others

Minimum Qualifications
Passion and commitment for technical excellence
B.Sc. or M.Sc. in Computer Science or equivalent professional experience
2+ years of software design and development experience, tackling non-trivial problems in backend services and/or data pipelines
A solid foundation in Data Structures, Algorithms, Object-Oriented Programming, Software Design, and core Statistics
Experience in production-grade coding in Java and Python/Scala
Experience in close examination of data and computation of statistics
Experience in using and operating Big Data processing pipelines such as Hadoop and Spark
Good verbal and written communication and collaboration skills

Please see the Talent Privacy Notice for information regarding how eBay handles your personal data collected when you use the eBay Careers website or apply for a job with eBay. eBay is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, sex, sexual orientation, gender identity, veteran status, disability, or other legally protected status. If you have a need that requires accommodation, please contact us at talent@ebay.com. We will make every effort to respond to your request for accommodation as soon as possible. View our accessibility statement to learn more about eBay's commitment to ensuring digital accessibility for people with disabilities.
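As a hedged illustration of the first engineering task above (applying ML models across billions of items), a minimal PySpark batch-scoring sketch; the model path, dataset path, and column names are hypothetical.

    # Sketch of batch ML scoring over a large item dataset; paths/columns are
    # hypothetical, and the model is assumed to be a saved Spark ML pipeline.
    from pyspark.sql import SparkSession
    from pyspark.ml import PipelineModel

    spark = SparkSession.builder.appName("item-scoring").getOrCreate()

    items = spark.read.parquet("/data/items/")                 # very large input
    model = PipelineModel.load("/models/category_classifier")  # pre-trained pipeline

    scored = model.transform(items)                            # adds a 'prediction' column
    (scored.select("item_id", "prediction")
           .write.mode("overwrite")
           .parquet("/data/items_scored/"))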

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Position: Lead Data Engineer
Experience: Must have 6+ years of experience

About the Role:
We are looking for experienced Data Engineers with excellent problem-solving skills to develop machine-learning-powered Data Products designed to enhance customer experiences.

About Us:
Nurtured from the seed of a single great idea - to empower the traveler - MakeMyTrip went on to pioneer India's online travel industry. Founded in 2000 by Deep Kalra, MakeMyTrip has since transformed how India travels. One of our most memorable moments was ringing the bell at NASDAQ in 2010. Post-merger with the Ibibo group in 2017, we created a stronger identity and traction for our portfolio of brands, increasing the pace of product and technology innovation. Ranked amongst the LinkedIn Top 25 Companies 2018. GO-MMT is the corporate entity of three giants in the online travel industry: Goibibo, MakeMyTrip, and RedBus. The GO-MMT family celebrates the compounded strengths of its brands, and the group company is easily the most sought-after corporate in the online travel industry.

About the Team:
MakeMyTrip, as India's leading online travel company, generates petabytes of raw data that power business growth, analytics, and machine learning. The Data Platform Team is a horizontal function at MakeMyTrip supporting various LOBs (Flights, Hotels, Holidays, Ground) and works heavily on streaming datasets that power personalized experiences for every customer, from recommendations to in-location engagement.

Our team's key responsibilities are:
Design, construct, and maintain robust data systems and architectures
Develop and optimize data capture, storage, processing, serving, and querying platforms
Create data products for personalization, recommendation, customer segmentation, and intelligence
Enhance our measurement platform for A/B experimentation
Contribute to our Feature Store, an internal unified data analytics platform
Participate in the development of our next-generation Travel Planner using Generative AI and multi-agent frameworks
Implement and optimize data solutions for travel-specific use cases, such as analyzing cross-city travel patterns to extend trip recommendations, and identifying correlations between hotel bookings in different areas to suggest complementary destinations (see the streaming sketch after this list)

Required Skills and Experience:
Extensive experience working with large datasets, with hands-on technical skills to design and build robust data architecture
At least 6+ years of hands-on experience with the Spark/Big Data tech stack
Expertise in stream-processing engines such as Spark Structured Streaming or Apache Flink
Analytical processing on Big Data using Spark
At least 4+ years of experience in Scala; experience with Python is a plus
Hands-on administration, configuration management, monitoring, and performance tuning of Spark workloads, distributed platforms, and JVM-based systems
At least 2+ years of cloud deployment experience (AWS, Azure, or Google Cloud Platform)
2 or more production deployments of big data technologies (business data lakes, NoSQL databases, etc.)
Awareness and decision-making ability to choose among various big data, NoSQL, and analytics tools and technologies
Experience in architecting and implementing domain-centric big data solutions
Ability to frame architectural decisions and provide technology leadership and direction
Excellent problem-solving, hands-on engineering, and communication skills

At MakeMyTrip, we're committed to innovation and excellence in the travel industry. Join us in shaping the future of travel through data-driven solutions and advanced technologies. If you're passionate about leveraging data to create exceptional travel experiences, we want to hear from you!
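A minimal sketch of the stream-processing work described above, using Spark Structured Streaming over Kafka (the team works primarily in Scala; PySpark is used here for brevity, and it assumes the spark-sql-kafka connector is on the classpath). The broker address, topic, JSON fields, and checkpoint path are hypothetical.

    # Hedged sketch: windowed aggregates over a clickstream topic, the kind of
    # streaming dataset that feeds personalization. All names are placeholders.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("clickstream").getOrCreate()

    events = (spark.readStream
              .format("kafka")
              .option("kafka.bootstrap.servers", "broker:9092")
              .option("subscribe", "clickstream")
              .load()
              .selectExpr("CAST(value AS STRING) AS json"))

    parsed = events.select(
        F.get_json_object("json", "$.city").alias("city"),
        F.get_json_object("json", "$.ts").cast("timestamp").alias("ts"))

    counts = (parsed.withWatermark("ts", "10 minutes")     # bound state for late data
                    .groupBy(F.window("ts", "5 minutes"), "city")
                    .count())

    query = (counts.writeStream
                   .outputMode("update")
                   .format("console")                      # a real job would sink to a store
                   .option("checkpointLocation", "/tmp/chk/clickstream")
                   .start())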

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Thiruvananthapuram Taluk, India

On-site

Job Summary
We are seeking a highly motivated and skilled Data Engineer with 3+ years of experience to join our growing data team. In this role, you will be instrumental in designing, building, and maintaining robust, scalable, and efficient data pipelines and infrastructure. You will work closely with data scientists, analysts, and other engineering teams to ensure data availability, quality, and accessibility for various analytical and machine learning initiatives.

Key Responsibilities
Design and Development:
○ Design, develop, and optimize scalable ETL/ELT pipelines to ingest, transform, and load data from diverse sources into data warehouses/lakes.
○ Implement data models and schemas that support analytical and reporting requirements.
○ Build and maintain robust data APIs for data consumption by various applications and services.
Data Infrastructure:
○ Contribute to the architecture and evolution of our data platform, leveraging cloud services (AWS, Azure, GCP) or on-premise solutions.
○ Ensure data security, privacy, and compliance with relevant regulations.
○ Monitor data pipelines for performance, reliability, and data quality, implementing alerting and anomaly detection.
Collaboration & Optimization:
○ Collaborate with data scientists, business analysts, and product managers to understand data requirements and translate them into technical solutions.
○ Optimize existing data processes for efficiency, cost-effectiveness, and performance.
○ Participate in code reviews, contribute to documentation, and uphold best practices in data engineering.
Troubleshooting & Support:
○ Diagnose and resolve data-related issues, ensuring minimal disruption to data consumers.
○ Provide support and expertise to teams consuming data from the data platform.

Required Qualifications
Bachelor's degree in Computer Science, Engineering, or a related quantitative field.
3+ years of hands-on experience as a Data Engineer or in a similar role.
Strong proficiency in at least one programming language commonly used for data engineering (e.g., Python, Java, Scala).
Extensive experience with SQL and relational databases (e.g., PostgreSQL, MySQL, SQL Server).
Proven experience with ETL/ELT tools and concepts.
Experience with data warehousing concepts and technologies (e.g., Snowflake, Redshift, BigQuery, Azure Synapse, Databricks).
Familiarity with cloud platforms (AWS, Azure, or GCP) and their data services (e.g., S3, EC2, Lambda, Glue, Data Factory, Blob Storage, BigQuery, Dataflow).
Understanding of data modeling techniques (e.g., dimensional modeling, Kimball, Inmon).
Experience with version control systems (e.g., Git).
Excellent problem-solving, analytical, and communication skills.

Preferred Qualifications
Master's degree in a relevant field.
Experience with Apache Spark (PySpark, Scala Spark) or other big data processing frameworks.
Familiarity with NoSQL databases (e.g., MongoDB, Cassandra).
Experience with data streaming technologies (e.g., Kafka, Kinesis).
Knowledge of containerization technologies (e.g., Docker, Kubernetes).
Experience with workflow orchestration tools (e.g., Apache Airflow, Azure Data Factory, AWS Step Functions).
Understanding of DevOps principles as applied to data pipelines.
Prior experience in Telecom is a plus.

Skills: data modeling, ETL/ELT tools, version control, data science, GCP, PostgreSQL, MySQL, SQL Server, Azure, Scala, Apache Spark, AWS, NoSQL databases, workflow orchestration, containerization, SQL, Java, cloud services, data streaming, data warehousing, Python
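As an illustration of the workflow-orchestration experience listed under Preferred Qualifications, a minimal Apache Airflow DAG sketch (assuming Airflow 2.4+ for the schedule argument); the DAG id, task names, and callables are hypothetical stubs.

    # Hedged sketch of a daily extract -> transform -> load DAG; the three
    # callables are stubs standing in for real pipeline steps.
    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():    # pull records from a source system (stubbed)
        print("extracting raw records")

    def transform():  # clean and model the data (stubbed)
        print("applying transformations")

    def load():       # write curated tables to the warehouse (stubbed)
        print("loading curated tables")

    with DAG(dag_id="daily_etl",
             start_date=datetime(2024, 1, 1),
             schedule="@daily",      # 'schedule' requires Airflow 2.4+
             catchup=False) as dag:
        t1 = PythonOperator(task_id="extract", python_callable=extract)
        t2 = PythonOperator(task_id="transform", python_callable=transform)
        t3 = PythonOperator(task_id="load", python_callable=load)
        t1 >> t2 >> t3              # explicit task ordering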

Posted 2 weeks ago

Apply

3.0 years

5 - 8 Lacs

Thiruvananthapuram Taluk, India

On-site

Job Title: Data Engineer
📍 Location: Trivandrum (Hybrid)
💼 Experience: 3+ Years
💰 Salary: Up to ₹8 LPA

Job Summary
We are hiring a skilled Data Engineer with 3+ years of experience to design, build, and optimize data pipelines and infrastructure. You will work closely with data scientists, analysts, and engineers to ensure reliable, scalable, and secure data delivery across cloud and on-prem systems.

Key Responsibilities
Design and develop ETL/ELT pipelines and implement data models for analytics/reporting.
Build and maintain data APIs, ensuring data availability, security, and compliance.
Develop and optimize data infrastructure on AWS, Azure, or GCP.
Collaborate with stakeholders to gather requirements and deliver scalable data solutions.
Monitor pipelines for performance and data quality; implement alerts and issue resolution.
Participate in code reviews and documentation, and enforce engineering best practices.

Mandatory Skills & Qualifications
Bachelor's in Computer Science, Engineering, or a related field.
3+ years in Data Engineering or a similar role.
Strong knowledge of Python/Java/Scala, SQL, and relational databases (PostgreSQL, MySQL, etc.).
Experience with ETL/ELT and data warehousing (Snowflake, Redshift, BigQuery, Synapse).
Cloud experience on AWS, Azure, or GCP (e.g., S3, Glue, Data Factory, BigQuery).
Knowledge of data modeling (Kimball/Inmon).
Version control using Git.
Strong problem-solving, communication, and collaboration skills.

Preferred (Nice to Have)
Master's degree.
Experience with Apache Spark, Kafka/Kinesis, NoSQL (MongoDB/Cassandra).
Familiarity with Docker/Kubernetes, Airflow, and DevOps for data pipelines.
Experience in the Telecom domain is a plus.

Skills: GCP, Kinesis, AWS, Airflow, SQL, pipelines, version control, ELT, Scala, Docker, ETL, Apache Spark, DevOps, Azure, Python, data modeling, Java, design, data, infrastructure, Git, NoSQL, data warehousing, Kubernetes, cloud, Kafka

Posted 2 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

Karnataka

On-site

As a Principal Data Engineer (Associate Director) at Fidelity in Bangalore, you will be an integral part of the ISS Data Platform Team. This team plays a crucial role in building and maintaining the platform that supports ISS business operations. You will have the opportunity to lead a team of senior and junior developers, providing mentorship and guidance, while taking ownership of delivering a subsection of the wider data platform. Your role will involve designing, developing, and maintaining scalable data pipelines and architectures to facilitate data ingestion, integration, and analytics.

Collaboration will be a key aspect of your responsibilities as you work closely with enterprise architects, business analysts, and stakeholders to understand data requirements, validate designs, and communicate progress. Your innovative mindset will drive technical advancements within the department, focusing on enhancing code reusability, quality, and developer productivity. By challenging the status quo and incorporating the latest data engineering practices and techniques, you will contribute to the continuous improvement of the data platform.

Your expertise in leveraging cloud-based data platforms, particularly Snowflake and Databricks, will be essential in creating an enterprise lake house. Advanced proficiency in the AWS ecosystem, including core AWS data services such as Lambda, EMR, and S3, will be highly valuable. Experience in designing event-based or streaming data architectures using Kafka, along with strong skills in Python and SQL, will be crucial for success in this role.

Furthermore, your role will involve implementing data access controls to ensure data security and performance optimization in compliance with regulatory requirements. Proficiency in CI/CD pipelines for deploying infrastructure and pipelines, experience with RDBMS and NoSQL offerings, and familiarity with orchestration tools like Airflow will be beneficial. Your soft skills, including problem-solving, strategic communication, and project management, will be key in leading problem-solving efforts, engaging with stakeholders, and overseeing project lifecycles.

By joining our team at Fidelity, you will receive a comprehensive benefits package along with support for your wellbeing and professional development. We are committed to creating a flexible work environment that prioritizes work-life balance and motivates you to contribute effectively to our team. To explore more about our work culture and opportunities for growth, visit careers.fidelityinternational.com.
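As a hedged sketch of the event-based ingestion this role describes (Kafka into an S3-backed lake house), assuming the kafka-python and boto3 client libraries; the topic, consumer group, bucket, and batching threshold are all hypothetical.

    # Illustrative Kafka -> S3 micro-batching step; every name here is a
    # placeholder, not Fidelity's actual topology.
    import json
    import boto3
    from kafka import KafkaConsumer

    consumer = KafkaConsumer("trades",
                             bootstrap_servers="broker:9092",
                             group_id="lakehouse-ingest",
                             value_deserializer=lambda v: json.loads(v))
    s3 = boto3.client("s3")

    batch = []
    for msg in consumer:
        batch.append(msg.value)
        if len(batch) >= 500:                  # micro-batch to keep S3 objects sizeable
            key = f"raw/trades/offset={msg.offset}.json"
            s3.put_object(Bucket="example-data-lake", Key=key,
                          Body=json.dumps(batch).encode("utf-8"))
            batch = []

In practice this step would land data for downstream loading into Snowflake or Databricks, with the batch boundary keyed to offsets so replays stay idempotent.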

Posted 2 weeks ago

Apply