
6304 Scala Jobs - Page 49

Set up a job alert
JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

5.0 - 8.0 years

3 - 7 Lacs

Bengaluru

Work from Office

Role Purpose: The purpose of this role is to design, test, and maintain software programs for operating systems or applications that need to be deployed at the client end, and to ensure they meet 100% quality assurance parameters.

1. Be instrumental in understanding the requirements and design of the product/software:
- Develop software solutions by studying information needs, systems flow, data usage, and work processes
- Investigate problem areas and follow the software development life cycle
- Facilitate root cause analysis of system issues and problem statements
- Identify ideas to improve system performance and availability
- Analyze client requirements and convert them into feasible designs
- Collaborate with functional teams or systems analysts who carry out the detailed investigation into software requirements
- Confer with project managers to obtain information on software capabilities

2. Perform coding and ensure optimal software/module development:
- Determine operational feasibility by evaluating the analysis, problem definition, requirements, and proposed software
- Develop and automate processes for software validation by setting up and designing test cases/scenarios/usage cases and executing them
- Modify software to fix errors, adapt it to new hardware, improve its performance, or upgrade interfaces
- Analyze information to recommend and plan the installation of new systems or modifications to existing systems
- Ensure that code is error-free, with no bugs or test failures
- Prepare reports on programming project specifications, activities, and status
- Ensure all issues are raised as per the norms defined for the project/program/account, with clear descriptions and replication patterns
- Compile timely, comprehensive, and accurate documentation and reports as requested
- Coordinate with the team on daily project status and progress, and document it
- Provide feedback on usability and serviceability, trace results to quality risks, and report them to the concerned stakeholders

3. Status reporting and customer focus on an ongoing basis with respect to the project and its execution:
- Capture all requirements and clarifications from the client for better-quality work
- Take feedback regularly to ensure smooth and on-time delivery
- Participate in continuing education and training to stay current on best practices, learn new programming languages, and better assist other team members
- Consult with engineering staff to evaluate software-hardware interfaces and develop specifications and performance requirements
- Document and demonstrate solutions through documentation, flowcharts, layouts, diagrams, charts, code comments, and clear code
- Document all necessary details and reports formally, so the software is properly understood from client proposal to implementation
- Ensure good-quality interaction with the customer (e-mail content, fault report tracking, voice calls, business etiquette, etc.)
- Respond to customer requests in a timely manner, with no instances of internal or external complaints

Mandatory Skills: Scala programming.
Experience: 5-8 Years.

Posted 3 weeks ago

Apply

4.0 - 6.0 years

6 - 10 Lacs

Chennai

Work from Office

As a Senior Cloud Data Platform (AWS) Specialist at Incedo, you will be responsible for designing, deploying, and maintaining cloud-based data platforms on AWS. You will work with data engineers, data scientists, and business analysts to understand business requirements and design scalable, reliable, and cost-effective solutions that meet those requirements.

Roles & Responsibilities:
- Designing, developing, and deploying cloud-based data platforms using Amazon Web Services (AWS)
- Integrating and processing large amounts of structured and unstructured data from various sources
- Implementing and optimizing ETL processes and data pipelines
- Developing and maintaining security and access controls
- Collaborating with other teams to ensure the consistency and integrity of data
- Troubleshooting and resolving data platform issues

Technical Skills Requirements:
- In-depth knowledge of AWS services and tools such as AWS Glue, AWS Redshift, and AWS Lambda
- Experience in building scalable and reliable data pipelines using AWS services, Apache Spark, and related big data technologies
- Familiarity with cloud-based infrastructure and deployment, specifically on AWS
- Strong knowledge of programming languages such as Python, Java, and SQL
- Excellent communication skills, with the ability to convey complex technical information to non-technical stakeholders clearly and concisely
- Understanding of, and alignment with, the company's long-term vision
- Ability to provide leadership, guidance, and support to team members, ensuring the successful completion of tasks and promoting a positive, collaborative, and productive work environment, taking responsibility for the whole team

Qualifications:
- 4-6 years of work experience in a relevant field
- B.Tech/B.E/M.Tech or MCA degree from a reputed university; a computer science background is preferred
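To give a flavour of the ETL pipeline work this role describes, here is a minimal Scala/Spark sketch of an S3-to-S3 batch job; the bucket paths and column names are hypothetical, not taken from the posting:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object OrdersEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("orders-etl")
      .getOrCreate()

    // Read raw JSON events landed in S3 (path and fields are illustrative).
    val raw = spark.read.json("s3a://example-bucket/raw/orders/")

    // Basic cleansing: drop malformed rows and normalise a timestamp column.
    val cleaned = raw
      .filter(col("order_id").isNotNull)
      .withColumn("order_ts", to_timestamp(col("order_ts")))
      .withColumn("order_date", to_date(col("order_ts")))

    // Write curated data back to S3, partitioned by date so downstream
    // engines (e.g. Athena) can prune partitions when querying.
    cleaned.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3a://example-bucket/curated/orders/")

    spark.stop()
  }
}
```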

Posted 3 weeks ago

Apply

5.0 - 8.0 years

3 - 7 Lacs

Hyderabad

Work from Office

Long Description:
- Experience and expertise in at least one of the following languages: Java, Scala, Python
- Experience and expertise in Spark architecture
- Experience in the range of 6-10+ years
- Good problem-solving and analytical skills
- Ability to comprehend business requirements and translate them into technical requirements
- Good communication and collaboration skills, both within the team and across vendors
- Familiarity with the development life cycle, including CI/CD pipelines
- Proven experience in, and interest in, supporting existing strategic applications
- Familiarity with working in an agile methodology

Mandatory Skills: Scala programming.
Experience: 5-8 Years.
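As one concrete example of the Spark problem-solving such roles involve (not something the posting specifies), a hedged Scala sketch of a broadcast join, which avoids shuffling the large side of a join; the table paths and join key below are assumptions:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.broadcast

object JoinTuning {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("join-tuning").getOrCreate()

    val transactions = spark.read.parquet("/data/transactions") // large fact table
    val merchants    = spark.read.parquet("/data/merchants")    // small dimension table

    // Broadcasting the small side ships it to every executor, so the large
    // table is never shuffled -- often the biggest single win in join tuning.
    val enriched = transactions.join(broadcast(merchants), Seq("merchant_id"))

    enriched.write.mode("overwrite").parquet("/data/enriched_transactions")
    spark.stop()
  }
}
```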

Posted 3 weeks ago

Apply

5.0 - 8.0 years

6 - 10 Lacs

Pune

Hybrid

Position: Cloud Data Engineer
Mandatory Skills: Cloud-PaaS-GCP-Google Cloud Platform.
Experience Required: 5-8 years. Additional Experience: 8-13 years.
Work Location: Wipro, PAN India.
Work Arrangement: Hybrid model with 3 days per week in a Wipro office.

Job Description:
- Strong expertise in SQL.
- Proficient in Python.
- Excellent knowledge of any cloud technology (AWS, Azure, GCP, etc.) required, with a preference for GCP.
- Familiarity with PySpark is preferred.

Posted 3 weeks ago

Apply

5.0 - 10.0 years

7 - 11 Lacs

Bengaluru

Work from Office

Senior Software Engineer - Data

We're seeking a Senior Software Engineer or a Lead Software Engineer to join one of our Data Layer teams. As the name implies, the Data Layer is at the core of all things data at Zeta. Our responsibilities include:
- Developing and maintaining the Zeta Identity Graph platform, which collects billions of behavioural, demographic, location, and transactional signals to power people-based marketing
- Ingesting vast amounts of identity and event data from our customers and partners
- Facilitating data transfers across systems
- Ensuring the integrity and health of our datasets
- And much more

As a member of this team, the data engineer will be responsible for designing and expanding our existing data infrastructure, enabling easy access to data, supporting complex data analyses, and automating optimization workflows for business and marketing operations.

Essential Responsibilities: As a Senior Software Engineer or a Lead Software Engineer, your responsibilities will include:
- Building, refining, tuning, and maintaining our real-time and batch data infrastructure
- Using technologies such as Spark, Airflow, Snowflake, Hive, Scylla, Django, FastAPI, etc. daily
- Maintaining data quality and accuracy across production data systems
- Working with Data Engineers to optimize data models and workflows
- Working with Data Analysts to develop ETL processes for analysis and reporting
- Working with Product Managers to design and build data products
- Working with our DevOps team to scale and optimize our data infrastructure
- Participating in architecture discussions, influencing the road map, and taking ownership of and responsibility for new projects
- Participating in the on-call rotation in their respective time zone (being available by phone or email in case something goes wrong)

Desired Characteristics:
- Minimum 5-10 years of software engineering experience
- Proven long-term experience with, and enthusiasm for, distributed data processing at scale, and eagerness to learn new things
- Expertise in designing and architecting distributed, low-latency, scalable solutions in either cloud or on-premises environments
- Exposure to the whole software development lifecycle, from inception to production and monitoring
- Fluency in Python, or solid experience in Scala or Java
- Proficiency with relational databases and advanced SQL
- Expert usage of services like Spark and Hive
- Experience with web frameworks such as Flask and Django
- Experience with schedulers such as Apache Airflow, Apache Luigi, Chronos, etc.
- Experience with Kafka or other stream message processing solutions
- Experience using cloud services (AWS) at scale
- Experience with agile software development processes
- Excellent interpersonal and communication skills

Nice to have:
- Experience with large-scale / multi-tenant distributed systems
- Experience with columnar / NoSQL databases: Vertica, Snowflake, HBase, Scylla, Couchbase
- Experience with real-time streaming frameworks: Flink, Storm
- Experience with open table formats such as Iceberg, Hudi, or Delta Lake
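Real-time ingestion of the kind this team describes is often built on Spark Structured Streaming over Kafka. A hedged sketch in Scala; the broker address, topic, and output paths are illustrative assumptions:

```scala
import org.apache.spark.sql.SparkSession

object EventIngest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("event-ingest").getOrCreate()

    // Subscribe to a (hypothetical) Kafka topic of behavioural events.
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker-1:9092")
      .option("subscribe", "identity-events")
      .load()
      .selectExpr("CAST(key AS STRING) AS event_key",
                  "CAST(value AS STRING) AS payload")

    // Land micro-batches as parquet; the checkpoint directory lets the
    // stream recover its position cleanly after restarts.
    val query = events.writeStream
      .format("parquet")
      .option("path", "/data/events/raw")
      .option("checkpointLocation", "/data/events/_checkpoints")
      .start()

    query.awaitTermination()
  }
}
```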

Posted 3 weeks ago

Apply

12.0 - 18.0 years

40 - 75 Lacs

Bengaluru

Hybrid

- Backend applications using Java/J2EE, RESTful web services, HTTP, and JSON
- 5 yrs. in a techno-managerial role
- Expertise in Python & Java, with a deep understanding of their ecosystems and frameworks
- Expertise with Node.js / JavaScript / Scala

Posted 3 weeks ago

Apply

5.0 - 10.0 years

30 - 32 Lacs

Pune

Hybrid

Let me tell you about the role

We are looking for an Information Security Engineering Specialist with strong knowledge of security fundamentals who is eager to apply it in complex environments. In this role, you will assist in implementing security controls, executing vulnerability assessments, and supporting automation initiatives. The position will have an emphasis on one or more of the following areas: cloud security, infrastructure security, and/or data security. You will have an opportunity to learn and grow under the mentorship of senior engineers, while also contributing to critical security tasks that keep our organization safe.

What you will deliver
- Define security policies that can be used to improve our cloud, infrastructure, or data security posture.
- Integrate our vulnerability assessment tooling into our environments to provide continuous scans, uncovering vulnerabilities, misconfigurations, or potential security gaps.
- Work with engineering teams to support the remediation and validation of vulnerability mitigations and fixes.
- Integrate security validations into continuous integration/continuous delivery (CI/CD) pipelines and develop scripts to automate security tasks.
- Maintain clear, detailed documentation of security procedures and policies, including how to embed and measure security in our cloud, infrastructure, or data environments.

What you will need to be successful (experience and qualifications)
- Seasoned security professional with 3+ years delivering security engineering services and/or building security solutions within a complex organization.
- Practical experience designing, planning, productizing, maintaining, and documenting reliable and scalable data, infrastructure, cloud, and/or platform solutions in complex environments.
- Firm foundation in information and cyber security principles and standard processes.
- Professional and technical security certifications such as CISSP, CISM, GEVA, CEH, OSCP, or equivalent are a plus.
- Development experience in one or more object-oriented programming languages (e.g., Python, Scala, Java, C#) and/or cloud environments (including AWS, Azure, Alibaba, etc.).
- Exposure to or experience with full-stack development.
- Experience with security tooling (vulnerability scanners, CNAPP, endpoint, and/or DLP) and with automation and scripting for security tasks (e.g., CI/CD integration).
- Familiarity with security frameworks such as NIST CSF, NIST 800-53, ISO 27001, etc.
- Foundational knowledge of security standards, industry laws, and regulations such as the Payment Card Industry Data Security Standard (PCI-DSS), General Data Protection Regulation (GDPR), California Consumer Privacy Act (CCPA), and Sarbanes-Oxley (SOX).
- A continuous learning and improvement approach.

This position is a hybrid of office/remote working.

Posted 3 weeks ago

Apply

7.0 - 12.0 years

9 - 12 Lacs

Bengaluru

Work from Office

Responsibilities:
- Design, develop, test, and maintain Scala applications using Spark.
- Collaborate with cross-functional teams on project delivery.
- Optimize application performance through data analysis.
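One routine optimization in Scala/Spark applications of this kind is declaring the input schema instead of letting Spark infer it. A minimal sketch, with hypothetical column names and an illustrative input path (none of which come from the posting):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types._

object TypedRead {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("typed-read").getOrCreate()

    // Declaring the schema up front skips Spark's inference pass over the
    // data, which can noticeably cut start-up time on large JSON/CSV inputs.
    val schema = StructType(Seq(
      StructField("id",     LongType,   nullable = false),
      StructField("label",  StringType, nullable = true),
      StructField("amount", DoubleType, nullable = true)
    ))

    val df = spark.read.schema(schema).json("/data/input/")
    df.groupBy("label").count().show()

    spark.stop()
  }
}
```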

Posted 3 weeks ago

Apply

2.0 - 7.0 years

15 - 30 Lacs

Bengaluru

Hybrid

Role & responsibilities
- Design, deliver, and maintain significant features in data pipelines, ML processing, and/or service infrastructure
- Optimize software performance to achieve the required throughput and/or latency
- Work with your manager, peers, and Product Managers to scope projects and features
- Come up with a sound technical strategy, taking into consideration the project goals, timelines, and expected impact
- Take point on some cross-team efforts, taking ownership of a business problem and ensuring the different teams are in sync and working towards a coherent technical solution
- Take an active part in knowledge sharing across the organization, both teaching and learning from others

Requirements
- 2+ years of software design and development experience, tackling non-trivial problems in backend services and/or data pipelines
- A solid foundation in data structures, algorithms, object-oriented programming, software design, and core statistics
- Experience in production-grade coding in Java, and in Python/Scala
- Experience in the close examination of data and the computation of statistics
- Experience in using and operating Big Data processing pipelines, such as Hadoop and Spark
- Good verbal and written communication and collaboration skills
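Given the posting's pairing of pipeline work with the computation of statistics, here is a small illustrative Scala/Spark sketch; the column name and path are assumptions:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object LatencyStats {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("latency-stats").getOrCreate()

    val metrics = spark.read.parquet("/data/pipeline_metrics")

    // Mean and standard deviation of a (hypothetical) latency column.
    metrics.select(
      mean(col("latency_ms")).as("mean_ms"),
      stddev(col("latency_ms")).as("stddev_ms")
    ).show()

    // Approximate p50/p95/p99 quantiles; the last argument is the allowed
    // relative error, trading accuracy for speed on large datasets.
    val q = metrics.stat.approxQuantile("latency_ms", Array(0.5, 0.95, 0.99), 0.01)
    println(s"p50=${q(0)} p95=${q(1)} p99=${q(2)}")

    spark.stop()
  }
}
```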

Posted 3 weeks ago

Apply

10.0 years

0 Lacs

Gurugram, Haryana, India

On-site

About The Role
Grade Level (for internal use): 10
Role: Sr. React Fullstack Developer

The Team: C&RS (Credit & Risk Solutions) is part of the Market Intelligence group within S&P Global. Financial Risk Analytics (FRA) delivers information-centric capital markets and risk solutions for trading desks and their risk business partners, supporting risk regulatory compliance. The UI products cover counterparty credit risk, xVA, and market risk for both buy-side and sell-side firms. We are currently investing in our technology and data platform to develop a number of new revenue-generating products, leveraging open-source, big data, and cloud technologies. This role is for a software developer within the FRA software engineering team, building React (TypeScript) UI applications and services and working with databases and the cloud.

Responsibilities
- Design and implement UI applications and services.
- Participate in system architecture and design decisions.
- Continuously improve development and testing best practices.
- Interpret and analyse business use-cases and translate feature requests into technical designs and development tasks.
- Take ownership of development tasks and participate in regular design and code review meetings.
- Stay delivery-focused and keen to participate in the successful implementation and evolution of technology products in close coordination with product managers and colleagues.

Basic Qualifications
- Bachelor's degree in Computer Science, Applied Mathematics, Engineering, or a related discipline, or equivalent experience
- 10+ years of strong software development experience
- React, TypeScript/JS (ES6)
- Node.js (Express)
- Experience with SQL relational databases such as PostgreSQL
- Demonstrable experience using RESTful APIs in a production setting
- Test frameworks (e.g. Jest, Jasmine, Playwright)
- Understanding of CI/CD pipelines
- Linux/Unix, Git
- Agile and XP (Scrum, Kanban, TDD)

Desirable
- Highcharts, DevExtreme, TanStack React components, Bootstrap, HTML5
- Understanding and implementation of security and data protection
- GitLab, containerization platforms
- AWS: CLI, CloudFront, Cognito, S3
- Python, Java/Scala

What's In It For You
- You can effectively manage timelines and enjoy working within a team
- You can follow relevant technology trends, actively evaluate new technologies, and use this information to improve the product
- You get a lot of satisfaction from on-time delivery
- Happy clients are important to you
- You take pride in your work

Competencies
- You love to solve complex problems, whether that's making the user experience as responsive as possible or understanding complex client requirements
- You can confidently present your own ideas and solutions, as well as guide technical discussions
- Your welcoming attitude encourages people to approach you when they have a problem you can help them solve

About S&P Global Market Intelligence
At S&P Global Market Intelligence, a division of S&P Global, we understand the importance of accurate, deep, and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction. For more information, visit www.spglobal.com/marketintelligence.

What's In It For You?
Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day.
We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our Benefits Include Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring And Opportunity At S&P Global At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here. 
Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf 20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning) Job ID: 284397 Posted On: 2025-07-18 Location: Gurgaon, India

Posted 3 weeks ago

Apply

5.0 years

15 - 18 Lacs

Goregaon, Maharashtra, India

Remote

Business Intelligence Developer - Mumbai (Goregaon East) 27165
Work Mode: Hybrid (4 days office, 1 day WFH)
Shift Timings: 12:30 PM - 9:30 PM
Location: Goregaon East, Nesco (max 1 hour commute preferred)
Interview: 2 rounds, in-person

Responsibilities
- Design and develop ETL pipelines integrating diverse data sources into BI environments.
- Develop dashboards and reports using Microsoft Power BI, SSRS, and other BI tools.
- Ensure data quality, maintain the data catalog/dictionary, and support data marts/lakes.
- Collaborate with business partners to understand needs and translate them into BI solutions.
- Lead the development and maintenance of complex BI dashboards and reports.
- Provide user training and support adoption of BI tools.
- Proactively identify opportunities for business growth, risk mitigation, and efficiency.
- Support Microsoft BI platform technologies and innovate solutions for scalability and reuse.

Must-Have Skills & Experience
- 5-7+ years working with the Microsoft BI platform: SQL Server DB, SSIS, SSRS, SSAS, Power BI, Azure Cloud services.
- Strong experience building and maintaining large-scale data integration and ETL processes.
- Proficiency in data warehouse architecture, data modeling, and dashboard/report development.
- Expertise in optimizing data integration routines and database design.
- Excellent communication and documentation skills.
- Ability to work independently in a fast-paced environment.

Nice-to-Haves
- Experience with other BI tools like QlikView, Tableau, MicroStrategy, or open-source reporting.
- Cloud-based data platforms (Azure, AWS, Snowflake).
- DevOps experience and CI/CD deployment knowledge.
- Experience with data lakes and Power BI Report Server administration.
- Knowledge of analytics tools like R, Python, Scala, SAS.

Skills: Power BI, business intelligence, communication, dashboards, Azure, data, dashboard/report development, SSAS, documentation, SSRS, data warehouse architecture, ETL, cloud, data integration, SSIS, SQL Server DB, data modeling, Azure Cloud services, design, Microsoft

Posted 3 weeks ago

Apply

10.0 - 15.0 years

35 - 40 Lacs

Bengaluru

Work from Office

About AQR Capital Management
AQR is a global investment management firm built at the intersection of financial theory and practical application. We strive to deliver superior, long-term results for our clients by seeking to filter out market noise to identify and isolate what matters most, and by developing ideas that stand up to rigorous testing. Underpinning this philosophy is an unrelenting commitment to excellence in the technology powering our insights and analysis. This unique combination has made us leaders in alternative and traditional strategies, with more than $125 bn of assets under management.

Job Description:

The Team
Our Bengaluru office is a key component of our global engineering strategy. Our software engineers work in research, portfolio implementation, trading, and enterprise engineering teams. The Quantitative Research Development (QRD) team partners closely with business teams to build the quant models, infrastructure, applications, and tools that power our quantitative research and quantitative investment process. The Portfolio Implementation team is part of the QRD team.

Your Role
As a Tech Lead in the Portfolio Implementation Engineering team you will design and develop:
- A global asset risk estimation system incorporating large amounts of data
- A high-performance historical simulation engine
- Portfolio construction systems for our quantitative investment strategies
- Portfolio optimization systems that incorporate real-world constraints on research strategies
- Solutions that implement business processes to rebalance portfolios based on quantitative models and interface with trading systems to generate orders

You will partner with both local and global teams of engineers and researchers for successful product delivery. You will be expected to lead initiatives in both technology transformation and business-driven projects, make significant individual contributions, and guide and mentor junior team members.

What You'll Bring
- Bachelors/Masters/PhD in Computer Science, Engineering, or a related discipline
- 10+ years of software development experience
- Expertise in the Java programming language
- Outstanding coding, debugging, and analytical skills
- Experience in design and architecture, including object-oriented design, distributed systems, cloud-native applications, and microservices
- Ability to lead technology initiatives through the development lifecycle
- Ability to manage multiple workstreams with task allocation, execution, and monitoring
- Ability to manage teams and guide team members
- Experience working with cloud technologies and containers would be a plus
- Knowledge of other programming languages (Python, C++, Go, Scala) would be a plus
- Knowledge and experience of finance is desirable
- Excellent communication skills, both verbal and written
- Willingness to learn and work on new technologies and domain concepts

Who You Are
- Mature, thoughtful, and a natural fit for a collaborative, team-oriented culture
- Hard-working and eager to learn in a fast-paced, innovative environment
- Committed to intellectual integrity, transparency, and openness
- Motivated by the transformational effects of technology-at-scale

Posted 3 weeks ago

Apply

6.0 - 11.0 years

8 - 14 Lacs

Bengaluru

Work from Office

GCP cloud architecture. Knowledge of the model deployment lifecycle, including creating training and serving pipelines. Familiarity with at least one workflow orchestrator: Kubeflow, Airflow, MLflow, Argo, etc. Strong in Python, with adequate SQL skills.
Must-have skills: Python, SQL, ML Engineering (model deployment/MLOps), ML pipelines (Kubeflow, Airflow, MLflow, Argo, etc.)
Preferred skills: PyTorch, TensorFlow, experience with a hyperscaler/cloud service, deep learning frameworks.

Posted 3 weeks ago

Apply

7.0 - 12.0 years

12 - 16 Lacs

Bengaluru

Work from Office

We are looking for lead or principal software engineers to join our Data Cloud team. The Data Cloud team is responsible for the Zeta Identity Graph platform, which captures billions of behavioural, demographic, environmental, and transactional signals for people-based marketing. As part of this team, the data engineer will be designing and growing our existing data infrastructure to democratize data access, enable complex data analyses, and automate optimization workflows for business and marketing operations.

Job Description:

Essential Responsibilities: As a Lead or Principal Data Engineer, your responsibilities will include:
- Building, refining, tuning, and maintaining our real-time and batch data infrastructure
- Using technologies such as HDFS, Spark, Snowflake, Hive, HBase, Scylla, Django, FastAPI, etc. daily
- Maintaining data quality and accuracy across production data systems (see the dedup sketch below)
- Working with Data Engineers to optimize data models and workflows
- Working with Data Analysts to develop ETL processes for analysis and reporting
- Working with Product Managers to design and build data products
- Working with our DevOps team to scale and optimize our data infrastructure
- Participating in architecture discussions, influencing the road map, and taking ownership of and responsibility for new projects
- Participating in the 24/7 on-call rotation (being available by phone or email in case something goes wrong)

Desired Characteristics:
- Minimum 7 years of software engineering experience
- Proven long-term experience with, and enthusiasm for, distributed data processing at scale, and eagerness to learn new things
- Expertise in designing and architecting distributed, low-latency, scalable solutions in either cloud or on-premises environments
- Exposure to the whole software development lifecycle, from inception to production and monitoring
- Fluency in Python, or solid experience in Scala or Java
- Proficiency with relational databases and advanced SQL
- Expert usage of services like Spark, HDFS, Hive, HBase
- Experience with schedulers such as Apache Airflow, Apache Luigi, Chronos, etc.
- Experience using cloud services (AWS) at scale
- Experience with agile software development processes
- Excellent interpersonal and communication skills

Nice to have:
- Experience with large-scale / multi-tenant distributed systems
- Experience with columnar / NoSQL databases: Vertica, Snowflake, HBase, Scylla, Couchbase
- Experience with real-time streaming frameworks: Flink, Storm
- Experience with web frameworks such as Flask and Django
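A common building block for maintaining data quality across identity datasets is deduplicating to the latest record per key with a window function. A hedged Scala/Spark sketch; the table and column names are hypothetical, not from the posting:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

object LatestProfile {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("latest-profile").getOrCreate()

    val events = spark.read.table("identity.profile_events")

    // Rank each user's events by recency and keep only the newest one,
    // a typical dedup step before publishing a curated table.
    val w = Window.partitionBy("user_id").orderBy(col("event_ts").desc)
    val latest = events
      .withColumn("rn", row_number().over(w))
      .filter(col("rn") === 1)
      .drop("rn")

    latest.write.mode("overwrite").saveAsTable("identity.profile_latest")
    spark.stop()
  }
}
```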

Posted 3 weeks ago

Apply

5.0 - 10.0 years

25 - 30 Lacs

Gurugram

Hybrid

Job Title: Lead Data Engineer
Location: Gurgaon
Department: Data Engineering / Technology
Experience Required: 5-10 years (with 1-2 years in a lead role preferred)

About the Role: We are looking for a highly skilled and motivated Lead Data Engineer to join our growing data team. The ideal candidate will have hands-on experience in designing, building, and optimizing scalable data pipelines and architectures. You will work closely with data scientists, analysts, and product teams to enable data-driven decisions across the organization.

Key Responsibilities:
- Design, develop, and maintain large-scale distributed data processing systems using Spark (on EMR) and Scala
- Build and manage real-time data pipelines with Apache Kafka (see the sketch below)
- Leverage SQL, Athena, and other AWS data tools for efficient data querying and transformation
- Orchestrate workflows using Apache Airflow
- Deploy and manage infrastructure using AWS components (e.g., EKS, EMR, S3, etc.)
- Collaborate with stakeholders to build interactive dashboards and reports using Superset or other data visualization tools
- Ensure high-quality data availability, integrity, and governance across systems
- Provide technical leadership, code reviews, and mentoring to junior engineers

Required Skills:
- Strong experience with Apache Spark and Scala
- Hands-on expertise in Apache Kafka for streaming data solutions
- Strong command of SQL; experience with Athena is a plus
- Experience working with AWS services (especially EMR, EKS, S3, etc.)
- Experience with Airflow for job scheduling and orchestration
- Familiarity with Superset or similar data visualization tools (e.g., Tableau, Power BI)
- Understanding of data warehouse technologies like Hive and Presto (deep expertise not required if strong in SQL)

Preferred Qualifications:
- Prior experience in a leadership or mentoring role
- Exposure to best practices in data engineering, data governance, and security
- Strong problem-solving skills and ability to work in a fast-paced environment

What We Offer:
- Opportunity to work on cutting-edge data technologies
- Collaborative and inclusive work culture
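As an illustration of the Kafka streaming work listed above, a minimal Scala producer using the standard kafka-clients API; the broker address, topic name, and payload are assumptions, not details from the posting:

```scala
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

object ClickProducer {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put("bootstrap.servers", "broker-1:9092")
    props.put("key.serializer",
      "org.apache.kafka.common.serialization.StringSerializer")
    props.put("value.serializer",
      "org.apache.kafka.common.serialization.StringSerializer")
    // acks=all waits for the full in-sync replica set, favouring durability.
    props.put("acks", "all")

    val producer = new KafkaProducer[String, String](props)
    try {
      val record = new ProducerRecord[String, String](
        "clickstream", "user-42", """{"page":"/home"}""")
      producer.send(record).get() // block for the broker ack in this simple sketch
      println("event published")
    } finally {
      producer.close()
    }
  }
}
```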

Posted 3 weeks ago

Apply

8.0 - 13.0 years

10 - 14 Lacs

Bengaluru

Work from Office

About the Role:
Grade Level (for internal use): 11
S&P Global Mobility
The Role: Lead Data Engineer (AWS Cloud, Python)

We are seeking a Senior Data Engineer with deep expertise in AWS cloud development to join our fast-paced data engineering organization. This role is critical to both the development of new data products and the modernization of existing platforms. The ideal candidate is a seasoned data engineer with hands-on experience designing, building, and optimizing large-scale data pipelines and architectures in both on-premises (e.g., Oracle) and cloud environments (especially AWS). This individual will also serve as a cloud development expert, mentoring and guiding other data engineers as they enhance their cloud skillsets.

Responsibilities

Data Engineering & Architecture
- Design, build, and maintain scalable data pipelines and data products.
- Develop and optimize ELT/ETL processes using a variety of data tools and technologies.
- Support and evolve data models that drive operational and analytical workloads.
- Modernize legacy Oracle-based systems and migrate workloads to cloud-native platforms.

Cloud Development & DevOps (AWS-Focused)
- Build, deploy, and manage cloud-native data solutions using AWS services (e.g., S3, Lambda, Glue, EMR, Redshift, Athena, Step Functions).
- Implement CI/CD pipelines and IaC (e.g., Terraform or CloudFormation), and monitor cloud infrastructure for performance and cost optimization.
- Ensure data platform security, scalability, and resilience in the AWS cloud.

Technical Leadership & Mentoring
- Act as a subject matter expert on cloud-based data development and DevOps best practices.
- Mentor data engineers on AWS architecture, infrastructure as code, and cloud-first design patterns.
- Participate in code and architecture reviews, enforcing best practices and high-quality standards.

Cross-functional Collaboration
- Work closely with product managers, data analysts, software engineers, and other stakeholders to understand business needs and deliver end-to-end solutions.
- Support and evolve the roadmap for data platform modernization and new product delivery.

What We're Looking For:

Required Qualifications
- 8+ years of experience in data engineering or an equivalent technical role.
- 5+ years of hands-on experience with AWS cloud development and DevOps.
- Strong expertise in SQL, data modeling, and ETL/ELT pipelines.
- Deep experience with Oracle (PL/SQL, performance tuning, data extraction).
- Proficiency in Python and/or Scala for data processing tasks.
- Strong knowledge of cloud infrastructure (networking, security, cost optimization).
- Experience with infrastructure as code (Terraform).
- Familiarity with CI/CD pipelines and DevOps tooling (e.g., Jenkins, GitHub Actions).

Preferred (Nice to Have)
- Experience with Google Cloud Platform (GCP), Snowflake.
- Knowledge of containerization and orchestration tools.
- Experience with modern orchestration tools (e.g., Airflow, dbt).
- Exposure to data cataloging, governance, and quality tools.

Statement: S&P Global delivers essential intelligence that powers decision making. We provide the world's leading organizations with the right data, connected technologies, and expertise they need to move ahead. As part of our team, you'll help solve complex challenges that equip businesses, governments, and individuals with the knowledge to adapt to a changing economic landscape.

S&P Global Mobility uses invaluable insights captured from automotive data to help our clients understand today's market, reach more customers, and shape the future of automotive mobility.

About S&P Global Mobility
At S&P Global Mobility, we provide invaluable insights derived from unmatched automotive data, enabling our customers to anticipate change and make decisions with conviction. Our expertise helps them to optimize their businesses, reach the right consumers, and shape the future of mobility. We open the door to automotive innovation, revealing the buying patterns of today and helping customers plan for the emerging technologies of tomorrow. For more information, visit www.spglobal.com/mobility.

What's In It For You?
Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments, and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress.

Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you, and your career, need to thrive at S&P Global.
- Health & Wellness: Health care coverage designed for the mind and body.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country visit https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, "pre-employment training" or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.

Equal Opportunity Employer
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.2 - Middle Professional Tier II (EEO Job Group), SWP Priority Ratings - (Strategic Workforce Planning)

Posted 3 weeks ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About the Role: We are seeking a highly skilled and experienced Machine Learning Engineer to join our dynamic team. As a Machine Learning Engineer, you will be responsible for the design, development, deployment, and maintenance of machine learning models and systems that drive our [mention specific business area or product, e.g., recommendation engine, fraud detection system, autonomous vehicles]. You will work closely with data scientists, software engineers, and product managers to translate business needs into scalable and reliable machine learning solutions. This is a key role in shaping the future of CBRE and requires a strong technical foundation combined with a passion for innovation and problem-solving.

Responsibilities:

Model Development & Deployment:
- Design, develop, and deploy machine learning models using various algorithms (e.g., regression, classification, clustering, deep learning) to solve complex business problems.
- Select appropriate datasets and features for model training, ensuring data quality and integrity.
- Implement and optimize model training pipelines, including data preprocessing, feature engineering, model selection, and hyperparameter tuning (see the sketch below).
- Deploy models to production environments using containerization technologies (e.g., Docker, Kubernetes) and cloud platforms (e.g., AWS, GCP, Azure).
- Monitor model performance in production, identify and troubleshoot issues, and implement model retraining and updates as needed.

Infrastructure & Engineering:
- Develop and maintain APIs for model serving and integration with other systems.
- Write clean, well-documented, and testable code.
- Collaborate with software engineers to integrate models into existing products and services.

Research & Innovation:
- Stay up to date with the latest advancements in machine learning and related technologies.
- Research and evaluate new algorithms, tools, and techniques to improve model performance and efficiency.
- Contribute to the development of new machine learning solutions and features.
- Proactively identify opportunities to leverage machine learning to solve business challenges.

Collaboration & Communication:
- Collaborate effectively with data scientists, software engineers, product managers, and other stakeholders.
- Communicate technical concepts and findings clearly and concisely to both technical and non-technical audiences.
- Participate in code reviews and contribute to the team's knowledge sharing.

Qualifications:
- Experience: 7+ years of experience in machine learning engineering or a related field.

Technical Skills:
- Programming Languages: Proficient in Python; experience with other languages (e.g., Java, Scala, R) is a plus.
- Machine Learning Libraries: Strong experience with machine learning libraries and frameworks such as scikit-learn, TensorFlow, PyTorch, Keras, etc.
- Data Processing: Experience with data manipulation and processing using libraries like Pandas, NumPy, and Spark.
- Model Deployment: Experience with model deployment frameworks and platforms (e.g., TensorFlow Serving, TorchServe, Seldon, AWS SageMaker, Google AI Platform, Azure Machine Learning).
- Databases: Experience with relational and NoSQL databases (e.g., SQL, MongoDB, Cassandra).
- Version Control: Experience with Git and other version control systems.
- DevOps: Familiarity with DevOps practices and tools.
- Strong understanding of machine learning concepts and algorithms: regression, classification, clustering, deep learning, etc.

Soft Skills:
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration skills.
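Since the posting emphasizes training pipelines and Spark, here is a hedged sketch of a Spark MLlib training pipeline, kept in Scala as used throughout this page; the dataset path, feature columns, and model location are hypothetical:

```scala
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.sql.SparkSession

object ChurnModel {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("churn-model").getOrCreate()

    // Expects numeric feature columns and a binary `label` column.
    val training = spark.read.parquet("/data/churn_training")

    val assembler = new VectorAssembler()
      .setInputCols(Array("tenure_days", "monthly_spend", "support_tickets"))
      .setOutputCol("features")

    val lr = new LogisticRegression().setMaxIter(50).setRegParam(0.01)

    // A Pipeline bundles preprocessing and the estimator so the exact same
    // transformations run again at serving time.
    val model = new Pipeline().setStages(Array(assembler, lr)).fit(training)

    // Persist the fitted pipeline; a serving job can reload it with
    // PipelineModel.load(...) and call transform() on new data.
    model.write.overwrite().save("/models/churn/v1")
    spark.stop()
  }
}
```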

Posted 3 weeks ago

Apply

6.0 - 11.0 years

11 - 16 Lacs

Noida

Work from Office

Data Engineering - Technical Lead

Paytm is India's leading digital payments and financial services company, focused on driving consumers and merchants to its platform by offering them a variety of payment use cases. Paytm provides consumers with services like utility payments and money transfers, while empowering them to pay via Paytm Payment Instruments (PPI) like Paytm Wallet, Paytm UPI, Paytm Payments Bank Netbanking, Paytm FASTag and Paytm Postpaid - Buy Now, Pay Later. To merchants, Paytm offers acquiring devices like Soundbox, EDC, QR and Payment Gateway, where payment aggregation is done through PPI and other banks' financial instruments. To further enhance merchants' business, Paytm offers merchants commerce services through advertising and the Paytm Mini app store. Operating on this platform leverage, the company then offers credit services such as merchant loans, personal loans and BNPL, sourced by its financial partners.

About the Role: This position requires someone to work on complex technical projects and closely with peers in an innovative and fast-paced environment. For this role, we require someone with a strong product design sense who is specialized in Hadoop and Spark technologies.

Requirements: Minimum 6+ years of experience in Big Data technologies.

The position:
- Grow our analytics capabilities with faster, more reliable tools, handling petabytes of data every day.
- Brainstorm and create new platforms that can help in our quest to make data available to cluster users in all shapes and forms, with low latency and horizontal scalability.
- Diagnose and fix problems across the entire technical stack.
- Design and develop a real-time events pipeline for data ingestion for real-time dashboarding.
- Develop complex and efficient functions to transform raw data sources into powerful, reliable components of our data lake.
- Design and implement new components using various emerging technologies in the Hadoop ecosystem, and successfully execute various projects.
- Be a brand ambassador for Paytm: Stay Hungry, Stay Humble, Stay Relevant!

Skills that will help you succeed in this role:
- Strong hands-on experience with Hadoop, MapReduce, Hive, Spark, PySpark, etc.
- Excellent programming/debugging skills in Python/Java/Scala.
- Experience with a scripting language such as Python, Bash, etc.
- Good to have: experience working with NoSQL databases like HBase, Cassandra.
- Hands-on programming experience with multithreaded applications.
- Good to have: experience with databases, SQL, and messaging queues like Kafka.
- Good to have: experience in developing streaming applications, e.g. Spark Streaming, Flink, Storm, etc.
- Good to have: experience with AWS and cloud technologies such as S3.
- Experience with caching architectures like Redis, etc.

Why join us: You get an opportunity to make a difference, and to have a great time doing it. You are challenged and encouraged here to do work that is meaningful for you and for those we serve. You should work with us if you think seriously about what technology can do for people. We are successful, and our successes are rooted in our people's collective energy and unwavering focus on the customer, and that's how it will always be.

Compensation: If you are the right fit, we believe in creating wealth for you. With an enviable 500 mn+ registered users, 21 mn+ merchants and depth of data in our ecosystem, we are in a unique position to democratize credit for deserving consumers and merchants, and we are committed to it. India's largest digital lending story is brewing here. It's your opportunity to be a part of the story!

Posted 3 weeks ago

Apply

5.0 - 9.0 years

2 - 5 Lacs

Noida

Work from Office

Paytm is India's leading digital payments and financial services company, focused on driving consumers and merchants to its platform by offering them a variety of payment use cases. Paytm provides consumers with services like utility payments and money transfers, while empowering them to pay via Paytm Payment Instruments (PPI) like Paytm Wallet, Paytm UPI, Paytm Payments Bank Netbanking, Paytm FASTag and Paytm Postpaid - Buy Now, Pay Later. To merchants, Paytm offers acquiring devices like Soundbox, EDC, QR and Payment Gateway, where payment aggregation is done through PPI and other banks' financial instruments. To further enhance merchants' business, Paytm offers merchants commerce services through advertising and the Paytm Mini app store. Operating on this platform leverage, the company then offers credit services such as merchant loans, personal loans and BNPL, sourced by its financial partners.

About the team: Paytm Ads is the digital advertising vertical that offers innovative ad solutions to clients across industries. It offers advertisers the opportunity to engage with 300 Mn+ users who interact with over 200 payment and retail services, online and offline, offered on the Paytm app. Paytm Ads maps user transactions to their lifestyle choices and creates customized segmentation cohorts for sharply targeted ad campaigns to the most relevant TG.

Expectations/Requirements:
1. Proficiency in SQL/Hive and deep expertise in building scalable business reporting solutions
2. Past experience in optimizing business strategy, product, or process using data & analytics
3. Working knowledge of at least one programming language like Scala, Java, or Python
4. Working knowledge of dashboard visualization; ability to execute cross-functional initiatives
5. Maintaining product & funnel dashboards and metrics on Pulse, Looker, Superset
6. Campaign analytics and debugging
7. Data reporting for business asks, MBRs, Lucky Wheel revenue, and growth experiments

Superpowers/Skills that will help you succeed in this role:
1. 5 to 9 years of work experience in a business intelligence and analytics role in the financial services, e-commerce, consulting, or technology domain
2. Demonstrated ability to partner directly with business owners to understand product requirements
3. Effective spoken and written communication to senior audiences, including strong data presentation and visualization skills
4. Prior success working with extremely large datasets using big data technologies
5. Detail-oriented, with an aptitude for solving unstructured problems

Why join us:
- A collaborative, output-driven program that brings cohesiveness across businesses through technology
- Solid 360 feedback from your peer teams on your support of their goals

With an enviable 500 mn+ registered users, 21 mn+ merchants and depth of data in our ecosystem, we are in a unique position to democratize credit for deserving consumers and merchants, and we are committed to it. India's largest digital lending story is brewing here. It's your opportunity to be a part of the story.

Posted 3 weeks ago

Apply

2.0 - 6.0 years

25 - 30 Lacs

Hyderabad

Work from Office

Job Title: Retail Specialized Data Scientist, Level 9, SnC GN Data & AI
Management Level: 09 - Consultant
Location: Bangalore / Gurgaon / Mumbai / Chennai / Pune / Hyderabad / Kolkata

Must-have skills:
- A solid understanding of retail industry dynamics, including key performance indicators (KPIs) such as sales trends, customer segmentation, inventory turnover, and promotions.
- Strong ability to communicate complex data insights to non-technical stakeholders, including senior management, marketing, and operational teams.
- Meticulousness in ensuring data quality, accuracy, and consistency when handling large, complex datasets.
- Ability to gather and clean data from various retail sources, such as sales transactions, customer interactions, inventory management, website traffic, and marketing campaigns.
- Strong proficiency in Python for data manipulation, statistical analysis, and machine learning (libraries like Pandas, NumPy, Scikit-learn).
- Expertise in supervised and unsupervised learning algorithms.
- Use of advanced analytics to optimize pricing strategies based on market demand, competitor pricing, and customer price sensitivity.

Good-to-have skills:
- Familiarity with big data processing platforms like Apache Spark, Hadoop, or cloud-based platforms such as AWS or Google Cloud for large-scale data processing.
- Experience with ETL (Extract, Transform, Load) processes and tools like Apache Airflow to automate data workflows.
- Familiarity with designing scalable and efficient data pipelines and architecture.
- Experience with tools like Tableau, Power BI, Matplotlib, and Seaborn to create meaningful visualizations that present data insights clearly.

Job Summary: The Retail Specialized Data Scientist will play a pivotal role in utilizing advanced analytics, machine learning, and statistical modeling techniques to help our retail business make data-driven decisions. This individual will work closely with teams across marketing, product management, supply chain, and customer insights to drive business strategies and innovations. The ideal candidate should have experience in retail analytics and the ability to translate data into actionable insights.

Roles & Responsibilities:
- Leverage retail knowledge: utilize your deep understanding of the retail industry (merchandising, customer behavior, product lifecycle) to design AI solutions that address critical retail business needs.
- Gather and clean data from various retail sources, such as sales transactions, customer interactions, inventory management, website traffic, and marketing campaigns.
- Apply machine learning algorithms, such as classification, clustering, regression, and deep learning, to enhance predictive models (see the clustering sketch below).
- Use AI-driven techniques for personalization, demand forecasting, and fraud detection.
- Use advanced statistical methods to help optimize existing use cases and build new products that serve new challenges and use cases.
- Stay updated on the latest trends in data science and retail technology.
- Collaborate with executives, product managers, and marketing teams to translate insights into business actions.

Professional & Technical Skills:
- Strong analytical and statistical skills.
- Expertise in machine learning and AI.
- Experience with retail-specific datasets and KPIs.
- Proficiency in data visualization and reporting tools.
- Ability to work with large datasets and complex data structures.
- Strong communication skills to interact with both technical and non-technical stakeholders.
- A solid understanding of the retail business and consumer behavior.
- Programming languages: Python, R, SQL, Scala
- Data analysis tools: Pandas, NumPy, Scikit-learn, TensorFlow, Keras
- Visualization tools: Tableau, Power BI, Matplotlib, Seaborn
- Big data technologies: Hadoop, Spark, AWS, Google Cloud
- Databases: SQL, NoSQL (MongoDB, Cassandra)

Additional Information:
Experience: Minimum 3 year(s) of experience is required
Educational Qualification: Bachelor's or Master's degree in Data Science, Statistics, Computer Science, Mathematics, or a related field.
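Customer segmentation of the kind this role describes is often prototyped with k-means clustering. A hedged Spark MLlib sketch in Scala, the page's headline language (the posting itself leans Python, so treat this as illustrative only); the feature columns and path are assumptions:

```scala
import org.apache.spark.ml.clustering.KMeans
import org.apache.spark.ml.feature.{StandardScaler, VectorAssembler}
import org.apache.spark.sql.SparkSession

object CustomerSegments {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("customer-segments").getOrCreate()

    val customers = spark.read.parquet("/data/customer_features")

    // Classic RFM-style features assembled into a single vector column.
    val assembler = new VectorAssembler()
      .setInputCols(Array("recency_days", "order_frequency", "monetary_value"))
      .setOutputCol("raw_features")

    // Scaling keeps high-magnitude features (e.g. spend) from dominating distances.
    val scaler = new StandardScaler()
      .setInputCol("raw_features")
      .setOutputCol("features")

    val assembled = assembler.transform(customers)
    val scaled = scaler.fit(assembled).transform(assembled)

    val model = new KMeans().setK(5).setSeed(42L).fit(scaled)
    model.transform(scaled).groupBy("prediction").count().show()

    spark.stop()
  }
}
```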

Posted 3 weeks ago

Apply

5.0 - 10.0 years

8 - 12 Lacs

Kochi

Work from Office

Job Title: Data Scientist
Location: Kochi, Coimbatore, Trivandrum
Must have skills: Big Data, Python or R
Good to have skills: Scala, SQL

Job Summary
A Data Scientist is expected to be hands-on and to deliver end to end on projects undertaken in the Analytics space. They must have a proven ability to drive business results with their data-based insights, and must be comfortable working with a wide range of stakeholders and functional teams. The right candidate will have a passion for discovering solutions hidden in large data sets and for working with stakeholders to improve business outcomes.

Roles and Responsibilities
Identify valuable data sources and collection processes
Supervise preprocessing of structured and unstructured data
Analyze large amounts of information to discover trends and patterns for the insurance industry
Build predictive models and machine-learning algorithms
Combine models through ensemble modeling
Present information using data visualization techniques
Collaborate with engineering and product development teams
Hands-on knowledge of implementing various AI algorithms and their best-fit scenarios
Has worked on Generative AI based implementations

Professional and Technical Skills
3.5-5 years' experience in Analytics systems/program delivery, with at least 2 Big Data or Advanced Analytics project implementations
Experience using statistical computing languages (R, Python, SQL, PySpark, etc.) to manipulate data and draw insights from large data sets; familiarity with Scala, Java, or C++
Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.) and their real-world advantages/drawbacks
Knowledge of advanced statistical techniques and concepts (regression, properties of distributions, statistical tests and their proper usage, etc.) and experience applying them
Hands-on experience with an Azure/AWS analytics platform (3+ years)
Experience using Databricks or similar analytical applications in AWS/Azure
Experience using business intelligence tools (e.g. Tableau) and data frameworks (e.g. Hadoop)
Strong mathematical skills (e.g. statistics, algebra)
Excellent communication and presentation skills
Experience deploying data pipelines in production based on Continuous Delivery practices

Additional Information
Multi-industry domain experience
Expert in Python, Scala, SQL
Knowledge of Tableau/Power BI or similar self-service visualization tools
Top-notch interpersonal and team skills
Prior leadership experience is nice to have

Qualification
Experience: 3.5-5 years of experience is required
Educational Qualification: Graduation
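To make the insurance-analytics focus concrete, here is a minimal sketch of trend discovery over a claims dataset with Spark in Scala (Spark-based platforms and Scala both appear in the posting). The table name and columns are assumptions for illustration only:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object ClaimsTrendAnalysis {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("InsuranceClaimsTrends").getOrCreate()
    import spark.implicits._

    // Hypothetical claims table: (claim_id, policy_type, claim_date, claim_amount).
    val claims = spark.read.table("insurance.claims") // placeholder table name

    // Monthly claim frequency and severity by policy type -- a common
    // starting point for spotting trends before any predictive modelling.
    val trends = claims
      .withColumn("month", date_trunc("month", $"claim_date"))
      .groupBy($"policy_type", $"month")
      .agg(
        count("*").as("claim_count"),
        avg($"claim_amount").as("avg_severity")
      )
      .orderBy($"policy_type", $"month")

    trends.show(20, truncate = false)
    spark.stop()
  }
}
```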

Posted 3 weeks ago

Apply

5.0 - 8.0 years

4 - 7 Lacs

Mumbai

Work from Office

Excellent knowledge of Spark: a thorough understanding of the Spark framework, performance tuning, etc.
Excellent knowledge of and at least 4+ years of hands-on experience in Scala and PySpark
Excellent knowledge of the Hadoop ecosystem; knowledge of Hive is mandatory
Strong Unix and shell scripting skills
Excellent interpersonal skills and, for experienced candidates, excellent leadership skills
Good knowledge of at least one of the CSPs (Azure, AWS, or GCP) is mandatory; Azure certifications are an additional plus
Mandatory Skills: PySpark.
Experience: 5-8 Years.
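A small, hedged example of the kind of Spark performance tuning this posting asks about, in Scala with Hive support enabled: broadcasting a small dimension table to avoid shuffling a large fact table. The table names and the shuffle-partition setting are illustrative assumptions, not values from the posting:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.broadcast

object TunedHiveJoin {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("TunedHiveJoin")
      .enableHiveSupport()
      // Shuffle partition count would be sized to the cluster and data volume.
      .config("spark.sql.shuffle.partitions", "200")
      .getOrCreate()

    // Hypothetical Hive tables: a large fact table and a small dimension table.
    val sales  = spark.table("warehouse.sales_fact")
    val stores = spark.table("warehouse.store_dim")

    // Broadcasting the small dimension avoids a full shuffle of the fact table,
    // one of the most common Spark performance-tuning wins.
    val enriched = sales.join(broadcast(stores), Seq("store_id"))

    // Cache only if the result is reused by several downstream actions.
    enriched.cache()
    println(s"Enriched rows: ${enriched.count()}")

    spark.stop()
  }
}
```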

Posted 3 weeks ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Mumbai

Work from Office

5+ years of professional experience with SQL, ETL, data modeling, and at least one programming language (e.g., Python, C++, C#, Scala)
3+ years of professional experience with enterprise domains like HR, Finance, or Supply Chain
4+ years of professional experience with more than one SQL/relational database, including expertise in Presto, Spark, and MySQL
Professional experience designing and implementing real-time pipelines (Apache Kafka or similar technologies)
5+ years of professional experience in custom ETL design, implementation, and maintenance
3+ years of professional experience with data modeling, including expertise in data warehouse design and dimensional modeling
5+ years of professional experience working with a cloud or on-premises Big Data/MPP analytics platform (Teradata, AWS Redshift, Google BigQuery, Azure Synapse Analytics, or similar)
Experience with data quality and validation (using Apache Airflow)
Experience with anomaly/outlier detection
Experience with data science workflows (Jupyter Notebooks, Bento, or similar tools)
Experience with Airflow or similar workflow management systems
Experience querying massive datasets using Spark, Presto, Hive, or similar
Experience building systems integrations and tooling interfaces, and implementing integrations for ERP systems (Oracle, SAP, Salesforce, etc.)
Experience in data visualization using Power BI and Tableau
Proficiency in the Python programming language and Python libraries, with a focus on data engineering and data science applications
Professional fluency in English required
Mandatory Skills: Data Analysis.
Experience: 3-5 Years.
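For the real-time pipeline requirement above, here is a minimal Spark Structured Streaming sketch in Scala that reads from Kafka and lands raw events for downstream ETL. The broker, topic, and paths are placeholders, and the job assumes the spark-sql-kafka connector is on the classpath:

```scala
import org.apache.spark.sql.SparkSession

object KafkaIngest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("KafkaIngest").getOrCreate()

    // Hypothetical broker and topic names.
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker1:9092")
      .option("subscribe", "orders")
      .load()

    // Kafka delivers key/value as binary; cast to strings before parsing.
    val parsed = events.selectExpr("CAST(value AS STRING) AS json")

    // Land raw events to a staging path; the checkpoint enables recovery
    // and exactly-once file sinks.
    val query = parsed.writeStream
      .format("parquet")
      .option("path", "/data/staging/orders")      // placeholder path
      .option("checkpointLocation", "/chk/orders") // placeholder path
      .start()

    query.awaitTermination()
  }
}
```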

Posted 3 weeks ago

Apply

5.0 - 8.0 years

7 - 10 Lacs

Hyderabad

Work from Office

Long Description
Bachelor's degree preferred, or an equivalent combination of education, training, and experience.
5+ years of professional experience with SQL, ETL, data modeling, and at least one programming language (e.g., Python, C++, C#, Scala)
3+ years of professional experience with enterprise domains like HR, Finance, or Supply Chain
6+ years of professional experience with more than one SQL/relational database, including expertise in Presto, Spark, and MySQL
Professional experience designing and implementing real-time pipelines (Apache Kafka or similar technologies)
5+ years of professional experience in custom ETL design, implementation, and maintenance
3+ years of professional experience with data modeling, including expertise in data warehouse design and dimensional modeling
5+ years of professional experience working with a cloud or on-premises Big Data/MPP analytics platform (Teradata, AWS Redshift, Google BigQuery, Azure Synapse Analytics, or similar)
Experience with data quality and validation (using Apache Airflow)
Experience with anomaly/outlier detection
Experience with data science workflows (Jupyter Notebooks, Bento, or similar tools)
Experience with Airflow or similar workflow management systems
Experience querying massive datasets using Spark, Presto, Hive, or similar
Experience building systems integrations and tooling interfaces, and implementing integrations for ERP systems (Oracle, SAP, Salesforce, etc.)
Experience in data visualization using Power BI and Tableau
Proficiency in the Python programming language and Python libraries, with a focus on data engineering and data science applications
Professional fluency in English required
Mandatory Skills: Data Analysis.
Experience: 5-8 Years.
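Since this posting also stresses data quality and validation, here is a minimal fail-fast quality check in Scala with Spark that a scheduler such as Airflow could invoke as a task. The input path and key column are hypothetical:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object DataQualityChecks {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("DataQualityChecks").getOrCreate()

    // Hypothetical staged table written by an upstream ETL task.
    val orders = spark.read.parquet("/data/staging/orders") // placeholder path

    val total    = orders.count()
    val nullKeys = orders.filter(col("order_id").isNull).count()
    val dupKeys  = total - orders.dropDuplicates("order_id").count()

    // Fail fast so the scheduler marks the task as failed
    // and downstream loads never see bad data.
    require(nullKeys == 0, s"$nullKeys rows with null order_id")
    require(dupKeys == 0, s"$dupKeys duplicate order_id values")

    println(s"Quality checks passed for $total rows")
    spark.stop()
  }
}
```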

Posted 3 weeks ago

Apply

5.0 - 9.0 years

9 - 13 Lacs

Gurugram

Work from Office

At Capgemini Invent, we believe difference drives change. As inventive transformation consultants, we blend our strategic, creative, and scientific capabilities, collaborating closely with clients to deliver cutting-edge solutions. Join us to drive transformation tailored to our clients' challenges of today and tomorrow. Informed and validated by science and data. Superpowered by creativity and design. All underpinned by technology created with purpose.

Your role
As a Senior Data Scientist, you are expected to develop and implement Artificial Intelligence based solutions across various disciplines for the Intelligent Industry vertical of Capgemini Invent. You are expected to work as an individual contributor or with a team to help design and develop ML/NLP models as per the requirement. You will work closely with the Product Owner, Systems Architect, and other key stakeholders from conceptualization through implementation. You should take ownership of understanding the client requirement, the data to be used, security and privacy needs, and the infrastructure to be used for development and implementation. The candidate will be responsible for executing data science projects independently to deliver business outcomes, and is expected to demonstrate domain expertise, develop and execute program plans, and proactively solicit feedback from stakeholders to identify improvement actions. This role requires a strong technical background, excellent problem-solving skills, and the ability to work collaboratively with stakeholders from different functional and business teams. The candidate should also collaborate on ML asset creation and be eager to learn and to impart training to fellow data science professionals. We expect thought leadership, especially in proposing ML/NLP assets to build based on expected industry requirements. Experience building industry-specific (e.g. Manufacturing, R&D, Supply Chain, Life Sciences), production-ready AI models using microservices and web services is a plus.

Programming Languages: Python (NumPy, SciPy, Pandas, Matplotlib, Seaborn)
Databases: RDBMS (MySQL, Oracle, etc.), NoSQL stores (HBase, Cassandra, etc.)
ML/DL Frameworks: scikit-learn, TensorFlow (Keras), PyTorch; Big Data ML frameworks: Spark (Spark ML, GraphX), H2O
Cloud: Azure/AWS/GCP

Your profile
Predictive and prescriptive modelling using statistical and machine learning algorithms, including but not limited to time series, regression, trees, ensembles, and neural nets (deep and shallow: CNN, LSTM, Transformers, etc.).
Experience with open-source OCR engines like Tesseract, speech recognition, computer vision, face recognition, emotion detection, etc. is a plus.
Unsupervised learning: market basket analysis, collaborative filtering, dimensionality reduction, and a good understanding of common matrix decomposition approaches like SVD.
Various clustering approaches: hierarchical, centroid-based, density-based, distribution-based, and graph-based clustering such as spectral clustering.
NLP: information extraction, similarity matching, sentiment analysis, text clustering, semantic analysis, document summarization, context mapping/understanding, intent classification, word embeddings, vector space models; experience with libraries like NLTK, spaCy, and Stanford CoreNLP is a plus.
Use of Transformers for NLP, experience with LLMs (ChatGPT, Llama) and retrieval-augmented generation (RAG) using vector stores and frameworks like LangChain and LangGraph, and building agentic AI applications.
Model deployment: ML pipeline formation, data security and scrutiny checks, and MLOps for productionizing a built model on-premises and in the cloud.

Required Qualifications
Master's degree in a quantitative field such as Mathematics, Statistics, Machine Learning, Computer Science, or Engineering, or a bachelor's degree with relevant experience.
Good experience programming in languages such as Python/Java/Scala and SQL, and experience with data visualization tools like Tableau or Power BI.

Preferred Experience
Experienced in the Agile way of working, managing team effort and tracking it through JIRA
Experience in proposal, RFP, RFQ, and pitch creation and delivery to large forums
Experience in POC, MVP, and PoV creation and asset building with innovative use cases
Experience working in a consulting environment is highly desirable
High-impact client communication is presupposed
The job may also entail sitting and working at a computer for extended periods of time. Candidates should be able to communicate effectively by telephone, email, and face to face.

What you will love about working here
We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance. At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI.
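As a purely illustrative example of the text-clustering and deployment themes in this role, here is a minimal Spark ML pipeline in Scala (Spark ML is listed in the posting's stack) combining TF-IDF features with k-means, persisted so the fitted flow can be reused at inference time. Paths, column names, and k are assumptions:

```scala
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.clustering.KMeans
import org.apache.spark.ml.feature.{HashingTF, IDF, Tokenizer}
import org.apache.spark.sql.SparkSession

object TextClustering {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("TextClustering").getOrCreate()

    // Hypothetical corpus: (doc_id, text).
    val docs = spark.read.parquet("/data/documents") // placeholder path

    // TF-IDF features followed by k-means, assembled as a single pipeline
    // so the whole flow can be persisted and re-used consistently.
    val tokenizer = new Tokenizer().setInputCol("text").setOutputCol("tokens")
    val tf  = new HashingTF().setInputCol("tokens").setOutputCol("rawFeatures").setNumFeatures(1 << 14)
    val idf = new IDF().setInputCol("rawFeatures").setOutputCol("features")
    val kmeans = new KMeans().setK(8).setSeed(7L)

    val pipeline = new Pipeline().setStages(Array(tokenizer, tf, idf, kmeans))
    val model = pipeline.fit(docs)

    model.transform(docs).select("doc_id", "prediction").show(10)

    // Persisting the fitted pipeline is a first step toward MLOps-style deployment.
    model.write.overwrite().save("/models/text_clustering") // placeholder path
    spark.stop()
  }
}
```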

Posted 3 weeks ago

Apply