Home
Jobs

2830 Scala Jobs - Page 32

Filter Jobs
Min: 0 years
Max: 25 years
Min: ₹0
Max: ₹10000000
Set up a Job Alert
Filter
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


About The Role
We are seeking a highly skilled and motivated Senior Data Engineer with strong expertise in Scala and Google BigQuery to join our growing data team. You will be responsible for designing, building, and maintaining scalable data pipelines and analytics solutions that power business insights and decision-making.

Key Responsibilities
- Design and implement robust, scalable data pipelines using Scala and BigQuery.
- Develop ETL/ELT processes to ingest, transform, and store large volumes of structured and semi-structured data.
- Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver high-quality solutions.
- Optimize BigQuery queries and manage cost-effective data processing.
- Ensure data quality, integrity, and governance across all data systems.

Required Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5+ years of experience in data engineering or software development.
- Strong proficiency in Scala, including functional programming concepts.
- Hands-on experience with Google BigQuery and the GCP ecosystem.
- Familiarity with CI/CD practices and version control (e.g., Git).
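For a sense of the day-to-day work this posting describes, here is a minimal Scala sketch of a batch pipeline that reads raw JSON from Cloud Storage with Spark and appends it to BigQuery through the spark-bigquery connector. The project, bucket, and table names are invented, and the options shown reflect common connector usage rather than this team's actual setup.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object EventsToBigQuery {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("events-to-bigquery")
      .getOrCreate()

    // Read raw events from a hypothetical GCS landing zone.
    val raw = spark.read.json("gs://example-landing/events/*.json")

    // A simple transform step: keep well-formed rows and stamp a load date.
    val cleaned = raw
      .filter(col("event_id").isNotNull)
      .withColumn("load_date", current_date())

    // Write to BigQuery via the spark-bigquery connector;
    // the table and staging bucket names are placeholders.
    cleaned.write
      .format("bigquery")
      .option("table", "example_project.analytics.events")
      .option("temporaryGcsBucket", "example-tmp-bucket")
      .mode("append")
      .save()

    spark.stop()
  }
}
```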

Posted 1 week ago

Apply

3.0 - 6.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site


Morgan Stanley — Model Risk Tool Validation - Risk & Capital - Associate

Profile Description
We're seeking someone to join our team as an Associate in Model Risk Tool Validation - Risk & Capital, Model Risk.

Firm Risk Management
In the Firm Risk Management division, we advise businesses across the Firm on risk mitigation strategies, develop tools to analyze and monitor risks, and lead key regulatory initiatives.

Company Profile
Morgan Stanley is an industry leader in financial services, known for mobilizing capital to help governments, corporations, institutions, and individuals around the world achieve their financial goals. Since 1935, Morgan Stanley has been known as a global leader in financial services, always evolving and innovating to better serve our clients and our communities in more than 40 countries around the world.

What You'll Do In The Role
The primary responsibilities of the role include, but are not limited to, the following:
- Provide independent review and validation compliant with MRM policies and procedures, regulatory guidance, and industry leading practices, including evaluating conceptual soundness, quality of model/tool methodology, model/tool limitations, data quality, and ongoing monitoring of model/tool performance
- Take initiative and responsibility for end-to-end delivery of a stream of Model/Tool Validation and related Risk Management deliverables
- Write Model/Tool Review findings in validation documents that could be used for presentations both internally (tool developers, business unit managers, Audit, various global Committees) and externally (Regulators)
- Verbally communicate results and debate issues, challenges, and methodologies with internal audiences including senior management
- Represent the MRM team in interactions with regulatory and audit agencies as and when required
- Follow financial markets and business trends on a frequent basis to enhance the quality of Model/Tool Validation and related Risk Management deliverables

What You'll Bring To The Role
Qualifications (essential / preferred):
- Master's or Doctorate degree in a quantitative discipline such as Statistics, Mathematics, Physics, Computer Science, or Engineering is essential
- Experience in a quant role in validation or development of Issuer Default Loss Models, or in a technical role in financial institutions (e.g., Developer), is preferred
- Strong written and verbal communication skills, including debating different viewpoints and making formal presentations of complex topics to a wider audience, are preferred
- 3-6 years of relevant work experience in a Model/Tool Validation role in a bank or financial institution
- Proficient programmer in Python; knowledge of other programming languages like R, Scala, MATLAB, etc. is preferred
- Willingness to learn new and complex topics and adapt oneself (continuous learning) is preferred
- Working knowledge of statistical techniques, quantitative finance, and programming is essential; a good understanding of various complex financial instruments is preferred
- Knowledge of Basel Regulations and various regulatory exams (CCAR, ICAAP, etc.) is preferred
- Relevant professional certifications like CQF or CFA, or progress made towards them, are preferred
- Desire to work in a dynamic, team-oriented, fast-paced environment focusing on challenging tasks mixing fundamental, quantitative, and market-oriented knowledge and skills is essential

What You Can Expect From Morgan Stanley
We are committed to maintaining the first-class service and high standard of excellence that have defined Morgan Stanley for over 89 years. Our values - putting clients first, doing the right thing, leading with exceptional ideas, committing to diversity and inclusion, and giving back - aren't just beliefs; they guide the decisions we make every day to do what's best for our clients, communities, and more than 80,000 employees in 1,200 offices across 42 countries. At Morgan Stanley, you'll find an opportunity to work alongside the best and the brightest, in an environment where you are supported and empowered. Our teams are relentless collaborators and creative thinkers, fueled by their diverse backgrounds and experiences. We are proud to support our employees and their families at every point along their work-life journey, offering some of the most attractive and comprehensive employee benefits and perks in the industry. There's also ample opportunity to move about the business for those who show passion and grit in their work. Morgan Stanley is an equal opportunities employer. We work to provide a supportive and inclusive environment where all individuals can maximize their full potential. Our skilled and creative workforce is comprised of individuals drawn from a broad cross section of the global communities in which we operate and who reflect a variety of backgrounds, talents, perspectives, and experiences. Our strong commitment to a culture of inclusion is evident through our constant focus on recruiting, developing, and advancing individuals based on their skills and talents.

Posted 1 week ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Nagpur, Pune

Work from Office


We are looking for a skilled Data Engineer to design, build, and manage scalable data pipelines and ensure high-quality, secure, and reliable data infrastructure across our cloud and on-prem platforms.

Posted 1 week ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Nagpur, Pune, Gurugram

Work from Office


We are looking for a skilled Data Engineer to design, build, and manage scalable data pipelines and ensure high-quality, secure, and reliable data infrastructure across our cloud and on-prem platforms.

Posted 1 week ago

Apply

3.0 - 5.0 years

0 Lacs

Pune, Maharashtra, India

On-site


The HiLabs Story
HiLabs is a leading provider of AI-powered solutions to clean dirty data, unlocking its hidden potential for healthcare transformation. HiLabs is committed to transforming the healthcare industry through innovation, collaboration, and a relentless focus on improving patient outcomes.

HiLabs Team
Multidisciplinary industry leaders, healthcare domain experts, and AI/ML and data science experts - professionals hailing from the world's best universities, business schools, and engineering institutes, including Harvard, Yale, Carnegie Mellon, Duke, Georgia Tech, Indian Institute of Management (IIM), and Indian Institute of Technology (IIT). Be part of a team that harnesses advanced AI, ML, and big data technologies to develop a cutting-edge healthcare technology platform, delivering innovative business solutions.

Job Title: Data Engineer I/II
Job Location: Bangalore, Karnataka, India

Job Summary
We are a leading Software as a Service (SaaS) company that specializes in the transformation of data in the US healthcare industry through cutting-edge Artificial Intelligence (AI) solutions. We are looking for Software Developers who continually strive to advance engineering excellence and technology innovation. The mission is to power the next generation of digital products and services through innovation, collaboration, and transparency. You will be a technology leader and doer who enjoys working in a dynamic, fast-paced environment.

Responsibilities
- Design, develop, and maintain robust and scalable ETL/ELT pipelines to ingest and transform large datasets from various sources.
- Optimize and manage databases (SQL/NoSQL) to ensure efficient data storage, retrieval, and manipulation for both structured and unstructured data.
- Collaborate with data scientists, analysts, and engineers to integrate data from disparate sources and ensure smooth data flow between systems.
- Implement and maintain data validation and monitoring processes to ensure data accuracy, consistency, and availability.
- Automate repetitive data engineering tasks and optimize data workflows for performance and scalability.
- Work closely with cross-functional teams to understand their data needs and provide solutions that help scale operations.
- Ensure proper documentation of data engineering processes, workflows, and infrastructure for easy maintenance and scalability.

Desired Profile
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- 3-5 years of hands-on experience as a Data Engineer or in a related data-driven role.
- Strong experience with ETL tools like Apache Airflow, Talend, or Informatica.
- Expertise in SQL and NoSQL databases (e.g., MySQL, PostgreSQL, MongoDB, Cassandra).
- Strong proficiency in Python, Scala, or Java for data manipulation and pipeline development.
- Experience with cloud-based platforms (AWS, Google Cloud, Azure) and their data services (e.g., S3, Redshift, BigQuery).
- Familiarity with big data processing frameworks such as Hadoop, Spark, or Flink.
- Experience with data warehousing concepts and building data models (e.g., Snowflake, Redshift).
- Understanding of data governance, data security best practices, and data privacy regulations (e.g., GDPR, HIPAA).
- Familiarity with version control systems like Git.

HiLabs is an equal opportunity employer (EOE). No job applicant or employee shall receive less favorable treatment or be disadvantaged because of their gender, marital or family status, color, race, ethnic origin, religion, disability, or age; nor be subject to less favorable treatment or be disadvantaged on any other basis prohibited by applicable law. HiLabs is proud to be an equal opportunity workplace dedicated to pursuing and hiring a diverse and inclusive workforce to support individual growth and superior business results.

Thank you for reviewing this opportunity with HiLabs! If this position appears to be a good fit for your skill set, we welcome your application.

HiLabs Total Rewards
Competitive salary, accelerated incentive policies, H1B sponsorship, and a comprehensive benefits package that includes ESOPs, financial contributions toward your ongoing professional and personal development, medical coverage for you and your loved ones, 401k, PTOs, and a collaborative working environment, with smart mentorship and highly qualified, multidisciplinary, incredibly talented professionals from highly renowned and accredited medical schools, business schools, and engineering institutes.

CCPA disclosure notice: https://www.hilabs.com/privacy

Posted 1 week ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Description
Translation Services Data and Analytics seeks a passionate Data Engineer to drive innovation in the translation analytics space: creating data pipelines that handle large volumes of data and helping our customers analyze and understand Amazon translation coverage across languages. We support Translation Services in making data-driven decisions by providing easy access to data and self-serve analytics. We work closely with internal stakeholders and cross-functional teams to solve business problems through data by building data pipelines, developing automated reporting, and diving deep into data to identify actionable root causes.

Key job responsibilities
- Work closely with data scientists and business intelligence engineers to create robust data architectures and pipelines.
- Develop and manage scalable, automated, and fault-tolerant data solutions.
- Simplify and enhance the accessibility, clarity, and usability of large or complex datasets through the development of advanced ETL, BI dashboards, and applications.
- Take ownership of the design, creation, and upkeep of metrics, reports, analyses, and dashboards to inform key business decisions.
- Navigate ambiguous environments by evaluating various options using both data-driven insights and business expertise.

A day in the life
Data Engineers focus on managing customer requests, maintaining operational excellence, and enhancing core data analytics infrastructure. You will collaborate closely with both technical and non-technical teams to design and execute roadmaps for essential Translation Services metrics. If you are not sure that every qualification on the list above describes you exactly, we'd still love to hear from you! At Amazon, we value people with unique backgrounds, experiences, and skill sets. If you're passionate about this role and want to make an impact on a global scale, please apply!

Basic Qualifications
- 3+ years of data engineering experience
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience with SQL
- Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or NodeJS

Preferred Qualifications
- Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
- Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)
- Bachelor's degree

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - ADCI HYD 13 SEZ
Job ID: A2929407

Posted 1 week ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Description
As part of the Last Mile Science & Technology organization, you'll partner closely with Product Managers, Data Scientists, and Software Engineers to drive improvements in Amazon's Last Mile delivery network. You will leverage data and analytics to generate insights that accelerate the scale, efficiency, and quality of the routes we build for our drivers through our end-to-end last mile planning systems. You will develop complex data engineering solutions using the AWS technology stack (S3, Glue, IAM, Redshift, Athena). You should have deep expertise and passion for working with large data sets, building complex data processes, performance tuning, bringing data in from disparate data stores, and programmatically identifying patterns. You will work with business owners to develop and define key business questions and requirements. You will provide guidance and support for other engineers with industry best practices and direction. Analytical ingenuity and leadership, business acumen, effective communication capabilities, and the ability to work effectively with cross-functional teams in a fast-paced environment are critical skills for this role.

Key job responsibilities
- Design, implement, and support data warehouse / data lake infrastructure using the AWS big data stack: Python, Redshift, QuickSight, Glue/Lake Formation, EMR/Spark/Scala, Athena, etc.
- Extract huge volumes of structured and unstructured data from various sources (relational, non-relational, and NoSQL databases) and message streams, and construct complex analyses.
- Develop and manage ETLs to source data from various systems and create a unified data model for analytics and reporting.
- Perform detailed source-system analysis, source-to-target data analysis, and transformation analysis.
- Participate in the full development cycle for ETL: design, implementation, validation, documentation, and maintenance.

Basic Qualifications
- 3+ years of data engineering experience
- Experience with data modeling, warehousing, and building ETL pipelines
- 4+ years of SQL experience
- Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or NodeJS
- Experience as a data engineer or in a related specialty (e.g., software engineer, business intelligence engineer, data scientist) with a track record of manipulating, processing, and extracting value from large datasets

Preferred Qualifications
- Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
- Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)
- Experience building/operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - ADCI HYD 13 SEZ
Job ID: A2967540
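As a rough illustration of the ETL work described above, the following Scala sketch reads raw route events from S3 with Spark, normalizes them, and lands a date-partitioned Parquet copy for downstream query engines such as Athena. The bucket names and columns are hypothetical, not Amazon's actual schemas.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object RouteEventsEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("route-events-etl")
      .getOrCreate()

    // Ingest raw delivery-route events from a hypothetical S3 prefix.
    val events = spark.read
      .option("header", "true")
      .csv("s3://example-bucket/raw/route_events/")

    // Normalize types and derive a partition column.
    val modeled = events
      .withColumn("event_ts", to_timestamp(col("event_ts")))
      .withColumn("event_date", to_date(col("event_ts")))
      .filter(col("route_id").isNotNull)

    // Land a query-friendly, partitioned copy for Athena or Redshift Spectrum.
    modeled.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3://example-bucket/curated/route_events/")

    spark.stop()
  }
}
```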

Posted 1 week ago

Apply

0 years

6 - 8 Lacs

Hyderābād

On-site


Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose - the relentless pursuit of a world that works better for people - we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

Inviting applications for the role of Lead Consultant - Databricks Developer! In this role, the Databricks Developer is responsible for solving real-world, cutting-edge problems to meet both functional and non-functional requirements.

Responsibilities
- Maintain close awareness of new and emerging technologies and their potential application for service offerings and products.
- Work with architects and lead engineers on solutions that meet functional and non-functional requirements.
- Demonstrate knowledge of relevant industry trends and standards.
- Demonstrate strong analytical and technical problem-solving skills.
- Must have experience in the Data Engineering domain.

Qualifications we seek in you!
Minimum qualifications
- Bachelor's degree or equivalency (CS, CE, CIS, IS, MIS, or an engineering discipline), or equivalent work experience.
- Must have excellent coding skills in either Python or Scala, preferably Python.
- Must have experience in the Data Engineering domain.
- Must have implemented at least two end-to-end projects in Databricks.
- Must have experience with Databricks components including Delta Lake, dbConnect, DB API 2.0, and Databricks workflow orchestration.
- Must be well versed in the Databricks Lakehouse concept and its implementation in enterprise environments.
- Must have a good understanding of how to create complex data pipelines.
- Must have good knowledge of data structures and algorithms.
- Must be strong in SQL and Spark SQL.
- Must have strong performance optimization skills to improve efficiency and reduce cost.
- Must have worked on both batch and streaming data pipelines.
- Must have extensive knowledge of the Spark and Hive data processing frameworks.
- Must have worked on at least one cloud (Azure, AWS, GCP) and the most common services, like ADLS/S3, ADF/Lambda, CosmosDB/DynamoDB, ASB/SQS, and cloud databases.
- Must be strong in writing unit tests and integration tests.
- Must have strong communication skills and have worked on teams of five or more.
- Must have a great attitude towards learning new skills and upskilling existing skills.

Preferred Qualifications
- Good to have Unity Catalog and basic governance knowledge.
- Good to have an understanding of Databricks SQL endpoints.
- Good to have CI/CD experience building pipelines for Databricks jobs.
- Good to have worked on a migration project to build a unified data platform.
- Good to have knowledge of DBT.
- Good to have knowledge of Docker and Kubernetes.
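To make the Databricks requirements concrete, here is a minimal Scala sketch of a bronze-layer batch load: it writes a managed Delta table and queries it back with Spark SQL. Paths, schema, and table names are placeholders, and on Databricks the SparkSession would normally be provided by the runtime rather than built by hand.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object OrdersToDelta {
  def main(args: Array[String]): Unit = {
    // On Databricks the session is supplied; the builder is shown for completeness.
    val spark = SparkSession.builder()
      .appName("orders-to-delta")
      .getOrCreate()

    // Batch-ingest raw orders; the mount path is hypothetical.
    val orders = spark.read
      .option("header", "true")
      .csv("/mnt/example/raw/orders/")

    // Write a managed Delta table (assumes the `analytics` schema exists).
    orders
      .withColumn("ingest_ts", current_timestamp())
      .write
      .format("delta")
      .mode("overwrite")
      .saveAsTable("analytics.orders_bronze")

    // Downstream consumers can use plain Spark SQL against the table.
    spark.sql(
      "SELECT COUNT(*) AS order_count FROM analytics.orders_bronze"
    ).show()
  }
}
```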
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. For more information, visit www.genpact.com. Follow us on Twitter, Facebook, LinkedIn, and YouTube. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Job: Lead Consultant
Primary Location: India-Hyderabad
Schedule: Full-time
Education Level: Bachelor's / Graduation / Equivalent
Job Posting: Jun 9, 2025, 9:15:16 AM
Unposting Date: Ongoing
Master Skills List: Digital
Job Category: Full Time

Posted 1 week ago

Apply

8.0 years

0 - 0 Lacs

Hyderābād

On-site


Job Title: Senior Data Engineer
Location: Hyderabad, India (Hybrid Model)
Experience Required: 8+ Years
Contract Duration: 1 Year
Work Mode: Onsite (Hybrid Model)

Job Description:
We are seeking a highly skilled and experienced Senior Data Engineer for a contractual opportunity (C2C) with a duration of 1 year. The ideal candidate will have a strong background in Airflow, Python, AWS, and big data technologies (especially Spark) and be capable of building scalable and efficient data engineering solutions in a hybrid onsite role based in Hyderabad.

Mandatory Skills:
- Apache Airflow: expertise in workflow orchestration, DAG creation, and managing complex data pipelines.
- Python: proficient in writing clean, scalable, and efficient code for ETL and data transformation.
- AWS: hands-on experience with core AWS services like S3, Lambda, Glue, Redshift, EMR, and CloudWatch.
- Big Data (Spark): strong experience with Spark (PySpark or Scala), handling large datasets and distributed computing.

Key Responsibilities:
- Design, implement, and maintain scalable data pipelines and ETL workflows using Airflow and Python.
- Develop and manage robust cloud-based solutions using AWS.
- Collaborate with cross-functional teams including data scientists, analysts, and business stakeholders.
- Handle large-scale structured and unstructured data processing using Spark and related technologies.
- Ensure high standards of data quality, security, and governance.
- Apply CI/CD practices for deployment and maintain monitoring/logging mechanisms.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Minimum of 8 years of relevant experience in Data Engineering.
- Strong problem-solving skills and the ability to work in a fast-paced, hybrid environment.
- Excellent verbal and written communication skills.

Preferred Skills:
- Experience with data cataloging, governance, and lineage tools.
- Exposure to Docker/Kubernetes for containerized workloads.
- Knowledge of alternative orchestration tools like Prefect or Luigi is a plus.

Contact Us:
Email: career@munificentresource.in
Call/WhatsApp: +91 90643 63461
Subject Line: Senior Data Engineer (Hyd)

Job Types: Full-time, Contractual / Temporary
Contract length: 12 months
Pay: ₹75,000.00 - ₹85,000.00 per month
Schedule: Day shift / Night shift
Work Location: In person

Posted 1 week ago

Apply

4.0 years

0 Lacs

Hyderābād

On-site


Minimum qualifications:
- Bachelor's degree in Computer Science, Engineering, Mathematics, or a related field, or equivalent practical experience.
- 4 years of experience in developing and troubleshooting data processing algorithms.
- Experience coding in one or more programming languages (e.g., Java, Python) and with big data technologies such as Scala, Spark, and Hadoop frameworks.
- Experience with a public cloud provider, such as GCP.

Preferred qualifications:
- Experience architecting, developing software, or building internet-scale, production-grade big data solutions in virtualized environments.
- Experience in big data, information retrieval, data mining, or machine learning.
- Experience with data warehouses, technical architectures, infrastructure components, Extract, Transform and Load / Extract, Load and Transform (ETL/ELT) and reporting/analytic tools, environments, and data structures.
- Experience in building multi-tier applications with modern technologies such as NoSQL, MongoDB, SparkML, and TensorFlow.
- Experience with Infrastructure as Code and Continuous Integration/Continuous Deployment tools like Terraform, Ansible, and Jenkins.
- Understanding of one database type, with the ability to write complex SQL queries.

About the job
The Google Cloud Platform team helps customers transform and build what's next for their business - all with technology built in the cloud. Our products are developed for security, reliability, and scalability, running the full stack from infrastructure to applications to devices and hardware. Our teams are dedicated to helping our customers - developers, small and large businesses, educational institutions, and government agencies - see the benefits of our technology come to life. As part of an entrepreneurial team in this rapidly growing business, you will play a key role in understanding the needs of our customers and help shape the future of how businesses of all sizes use technology to connect with customers, employees, and partners.

As a Strategic Cloud Data Engineer, you will guide customers on how to ingest, store, process, analyze, and explore/visualize data on the Google Cloud Platform. You will work on data migrations and modernization projects, and with customers to design data processing systems, develop data pipelines optimized for scaling, and troubleshoot potential platform/product challenges. You will have an understanding of data governance and security controls. You will travel to customer sites to deploy solutions and deliver workshops to educate and empower customers. Additionally, you will work with Product Management and Product Engineering teams to build and constantly drive excellence in our products.

Google Cloud accelerates every organization's ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google's cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.

Responsibilities
- Interact with stakeholders to translate complex customer requirements into recommendations for appropriate solution architectures and advisory services.
- Engage with technical leads and partners to lead high-velocity migration and modernisation to Google Cloud Platform (GCP).
- Design, migrate/build, and operationalise data storage and processing infrastructure using cloud-native products.
- Develop and implement data quality and governance procedures to ensure the accuracy and reliability of data.
- Take various project requirements, organize them into clear goals and objectives, and create a work breakdown structure to manage internal and external stakeholders.

Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.

Posted 1 week ago

Apply

4.0 years

5 - 8 Lacs

Hyderābād

On-site


Do you love understanding every detail of how new technologies work? Join the team that serves as Apple's nerve center, our Information Systems and Technology group. There are countless ways you'll contribute here, whether you're coordinating technology needs for product launches, designing music solutions for retail locations, or ensuring the strength of in-store Wi-Fi connections. From Apple Pay to the Apple website to our data centers around the globe, you'll help design and manage the massive systems that countless employees and customers rely on every day. You'll also build custom tools for employees, empowering them to solve complex problems on their own. Join our team, and together we'll explore all the ways to improve how Apple operates, freeing our employees to do what they do best: craft magical experiences for our customers. The people here at Apple don't just build products - we craft the kind of wonder that's revolutionized entire industries. It's the diversity of those people and their ideas that supports the innovation that runs through everything we do, from amazing technology to industry-leading environmental efforts. Join Apple, and help us leave the world better than we found it.

The Global Business Intelligence team provides data services, analytics, reporting, and data science solutions to Apple's business groups, including Retail, iTunes, Marketing, AppleCare, Operations, Finance, and Sales. These solutions are built on top of a phenomenal data platform and leverage multiple frameworks. This position is an extraordinary opportunity for a competent, expert, and results-oriented Framework Software Engineer to define and build some of the best-in-class data platforms and products.

Description
As a Software Engineer, you will be responsible for building various tools and features for Data and ML platforms, including data processing, an insights portal, data observability, data lineage, a model hub, and data visualization. You will either build custom solutions from the ground up or take open source products and customize them for Apple's needs. We're looking for an individual who loves to take on challenges, tackles problems with imaginative solutions, works well in collaborative teams, and can produce high-quality software under tight deadlines and constraints. This role involves building innovative tools and frameworks that can extend the functionality of third-party BI tools using APIs.

Minimum Qualifications
- 4+ years of hands-on experience with Java, Python, or Scala
- Experience in designing and developing scalable microservices and REST APIs
- Experience with SQL and NoSQL data stores
- Experience in building and deploying cloud-native applications/products (AWS/GCP/others)
- Experience using DevOps tools, containers, and the Kubernetes platform
- Good communication and interpersonal skills: the ability to interact and work well with members of other functional groups in a project team, and a strong sense of project ownership

Preferred Qualifications
- Knowledge of LLM serving and inference frameworks
- Knowledge of LangChain/LlamaIndex, enabling RAG applications and LLM orchestration
- Knowledge of big data technologies and data platforms
- Knowledge of Spark or other distributed computing frameworks
- Knowledge of SQL query engines like Trino, Hive, etc.
- Experience with JavaScript libraries and frameworks such as React is a plus
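The microservice and REST API experience listed above can be illustrated with a tiny Scala sketch: a single health-check endpoint served by the JDK's built-in HTTP server, so it runs with no external dependencies. A production service would normally use a proper framework; the port and route here are arbitrary choices for the example.

```scala
import com.sun.net.httpserver.{HttpExchange, HttpHandler, HttpServer}
import java.net.InetSocketAddress
import java.nio.charset.StandardCharsets

object HealthService {
  def main(args: Array[String]): Unit = {
    // Bind a minimal HTTP server on an arbitrary local port.
    val server = HttpServer.create(new InetSocketAddress(8080), 0)

    // One REST-style route returning a JSON status payload.
    server.createContext("/health", new HttpHandler {
      override def handle(exchange: HttpExchange): Unit = {
        val body = """{"status":"ok"}""".getBytes(StandardCharsets.UTF_8)
        exchange.getResponseHeaders.add("Content-Type", "application/json")
        exchange.sendResponseHeaders(200, body.length.toLong)
        val os = exchange.getResponseBody
        os.write(body)
        os.close()
      }
    })

    server.start()
    println("Listening on http://localhost:8080/health")
  }
}
```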

Posted 1 week ago

Apply

6.0 years

5 - 9 Lacs

Hyderābād

On-site


Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data, and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits, and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities:
- Apply business standards, processes, and requirements to set up automated processes for data transformation, loading, and normalization
- Build, monitor, and maintain highly secure digital pipelines for the transport of clinical data for DRN projects
- Validate data relationships, mappings, and definitions
- Develop methodology to analyze and load data and to ensure data quality
- Participate in internal and external client data implementation calls to discuss file data formats and content
- Monitor and maintain extraction processes to ensure that data feeds remain current and complete
- Monitor data flows for loading errors and incomplete or incompatible formats
- Coordinate with team members and clients to ensure on-time delivery and receipt of data files
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications:
- College bachelor's degree or equivalent work experience
- 6+ years of Scala, Python, Spark, or SQL
- 6+ months of SAS or SQL programming experience
- Experience with complex SQL statements and familiarity with relational databases
- Health care claims data experience
- Unix scripting knowledge
- General software development knowledge (.NET, Java, Oracle, Teradata, HTML, etc.)
- Familiarity with processing large data sets
- AWS or Azure cloud services exposure (AWS Glue, Azure Synapse, Azure Data Factory, Databricks)
- Proficiency with Microsoft operating systems and Internet browsers
- Proven ability to work independently
- Proven solid verbal and written communication skills
- Proven solid ability to multi-task and prioritize multiple projects at any given time
- Proven solid analytical and problem-solving skills
- Proven ability to work within a solid team structure
- Willingness and ability to travel to meet with internal or external customers

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location, and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups, and those with lower incomes.
We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
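As a hedged sketch of the pipeline-monitoring and data-validation duties above, the Scala/Spark snippet below checks an incoming claims extract for missing keys and duplicate IDs before loading, failing fast on a bad feed. The file path and column names are invented for illustration.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object ClaimsFeedValidation {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("claims-feed-validation")
      .getOrCreate()

    // Incoming extract; the path and columns are hypothetical.
    val claims = spark.read
      .option("header", "true")
      .csv("/data/inbound/claims_extract.csv")

    val total = claims.count()

    // Basic data-quality checks: required keys present, no duplicate claim IDs.
    val missingKeys = claims.filter(
      col("claim_id").isNull || col("member_id").isNull
    ).count()
    val duplicates = total - claims.dropDuplicates("claim_id").count()

    println(s"rows=$total missingKeys=$missingKeys duplicates=$duplicates")

    // Fail the load if the feed is incomplete or inconsistent.
    require(missingKeys == 0, "Claims extract contains rows with missing keys")
    require(duplicates == 0, "Claims extract contains duplicate claim IDs")

    spark.stop()
  }
}
```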

Posted 1 week ago

Apply

2.0 years

7 - 10 Lacs

Hyderābād

On-site


Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data, and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits, and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Join the EDG team as a Full Stack Software Engineer. The EDG team is responsible for improving the consumer experience by implementing an enterprise device gateway to manage device health signal acquisition, centralize consumer consent, facilitate efficient health signal distribution, and empower UHC with connected insights across the health and wellness ecosystem. The team has a strong and integrated relationship with the product team based on strong collaboration, trust, and partnership. Goals for the team are focused on creating meaningful positive impact for our customers through clear and measurable metrics analysis.

Primary Responsibilities:
- Write high-quality, fault-tolerant code; normally 70% back-end and 30% front-end (though the exact ratio will depend on your interest)
- Build high-scale systems, libraries, and frameworks, and create test plans
- Monitor production systems and provide on-call support
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications:
- BS in Computer Science, Engineering, or a related technical field, or equivalent experience
- 2+ years of experience with JS libraries and frameworks, such as Angular, React, or others
- 2+ years of experience in Scala, Java, or another compiled language

Preferred Qualifications:
- Experience with web design
- Experience using RESTful APIs and asynchronous JS
- Experience in design and development
- Testing experience with Scala or Java
- Database and caching experience, SQL and NoSQL (Postgres, Elasticsearch, or MongoDB)
- Proven interest in learning Scala

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location, and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups, and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Delhi

On-site


Bangalore/Delhi | Data | Full Time | Hybrid

What is Findem:
Findem is the only talent data platform that combines 3D data with AI. It automates and consolidates top-of-funnel activities across your entire talent ecosystem, bringing together sourcing, CRM, and analytics in one place. Only 3D data connects people and company data over time - making an individual's entire career instantly accessible in a single click, removing the guesswork, and unlocking insights about the market and your competition no one else can. Powered by 3D data, Findem's automated workflows across the talent lifecycle are the ultimate competitive advantage. Enabling talent teams to deliver continuous pipelines of top, diverse candidates while creating better talent experiences, Findem transforms the way companies plan, hire, and manage talent. Learn more at www.findem.ai

Experience: 5-9 years
Location: Delhi (hybrid, 3 days onsite)

We are looking for an experienced Big Data Engineer who will be responsible for building, deploying, and managing various data pipelines, data lakes, and big data processing solutions using big data and ETL technologies.

RESPONSIBILITIES
- Build data pipelines, big data processing solutions, and data lake infrastructure using various big data and ETL technologies.
- Assemble and process large, complex data sets that meet functional and non-functional business requirements.
- ETL from a wide variety of sources like MongoDB, S3, server-to-server, Kafka, etc., and process using SQL and big data technologies.
- Build analytical tools to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Build interactive and ad-hoc query self-serve tools for analytics use cases.
- Build data models and data schemas from a performance, scalability, and functional requirements perspective.
- Build processes supporting data transformation, metadata, dependency, and workflow management.
- Research, experiment with, and prototype new tools and technologies, and make them successful.

SKILL REQUIREMENTS
Must have:
- Strong in Python/Scala.
- Experience in big data technologies like Spark, Hadoop, Athena/Presto, Redshift, Kafka, etc. (see the streaming sketch after this listing).
- Experience with various file formats like Parquet, JSON, Avro, ORC, etc.
- Experience with workflow management tools like Airflow.
- Experience with batch processing, streaming, and message queues.
- Any of the visualization tools like Redash, Tableau, Kibana, etc.
- Experience working with structured and unstructured data sets.
- Strong problem-solving skills.

Good to have:
- Exposure to NoSQL stores like MongoDB.
- Exposure to cloud platforms like AWS, GCP, etc.
- Exposure to microservices architecture.
- Exposure to machine learning techniques.

The role is full-time and comes with full benefits. We are globally headquartered in the San Francisco Bay Area with our India headquarters in Bengaluru.

Equal Opportunity
As an equal opportunity employer, we do not discriminate on the basis of race, color, religion, national origin, age, sex (including pregnancy), physical or mental disability, medical condition, genetic information, gender identity or expression, sexual orientation, marital status, protected veteran status, or any other legally protected characteristic.
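One way to picture the streaming side of this role: a small Spark Structured Streaming job in Scala that consumes a Kafka topic and continuously lands it in a data lake as partitioned Parquet with checkpointing. Broker, topic, and bucket names are placeholders, not Findem's actual infrastructure.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object KafkaEventsToLake {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("kafka-events-to-lake")
      .getOrCreate()

    // Consume an event topic; broker and topic names are hypothetical.
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker-1:9092")
      .option("subscribe", "user-events")
      .option("startingOffsets", "latest")
      .load()

    // Kafka values arrive as bytes; cast and stamp them for partitioning.
    val parsed = events
      .select(col("value").cast("string").as("payload"))
      .withColumn("ingest_date", current_date())

    // Continuously land the stream as Parquet, with checkpoints for recovery.
    val query = parsed.writeStream
      .format("parquet")
      .option("path", "s3://example-lake/raw/user_events/")
      .option("checkpointLocation", "s3://example-lake/chk/user_events/")
      .partitionBy("ingest_date")
      .start()

    query.awaitTermination()
  }
}
```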

Posted 1 week ago

Apply

0 years

0 Lacs

Bengaluru East, Karnataka, India

On-site


- Scala and Spark hands-on experience.
- Quantexa certification and experience of Quantexa project delivery.
- Quantexa ETL.
- Quantexa Scoring.
- Strong design and development exposure.
- Strong communication skills, both written and verbal.
- Highly motivated and self-driven, with a positive attitude.
- Basic understanding of MS Azure.
- CI/CD pipelines using Jenkins.
- Kubernetes.

Posted 1 week ago

Apply

0 years

4 - 4 Lacs

Gurgaon

On-site


Who We Are:
At VML, we are a beacon of innovation and growth in an ever-evolving world. Our heritage is built upon a century of combined expertise, where creativity meets technology and diverse perspectives ignite inspiration. With the merger of VMLY&R and Wunderman Thompson, we have forged a new path as a growth partner that is part creative agency, part consultancy, and part technology powerhouse. Our global family now encompasses over 30,000 employees across 150+ offices in 64 markets, each contributing to a culture that values connection, belonging, and the power of differences. Our expertise spans the entire customer journey, offering deep insights in communications, commerce, consultancy, CRM, CX, data, production, and technology. We deliver end-to-end solutions that result in revolutionary work.

The opportunity:
A Hybris Developer is required to work in an experienced team of software architects and developers, responsible for the design, development, and testing of quality code to meet customer-driven specifications.

What you'll be doing:
- Working within a project team to deliver high-quality code to deadlines.
- Clearly communicating what is to be done, and the milestones achieved, to those within the project in an agreed manner.
- Realistically estimating your own delivery timescales.
- Solving problems posed using the tools and materials provided, or suggesting alternatives where appropriate.
- Creating robust solutions using the tools and materials provided, or suggesting alternatives where appropriate.
- Taking some responsibility from, and deputising for, the senior/lead developer.
- Leading small technical discussions on problem solving and solution design.
- Mentoring junior developers.
- Being a motivated self-starter.

What we want from you:
- Extensive Hybris development experience (ideally 2011+)
- Extensive experience coding in the Java language (Java 17+)
- Experience with data structures
- Exposure to web technologies
- Object-oriented software design patterns experience
- Some understanding of HTML5, CSS, and JavaScript
- Familiarity with the Windows or Linux operating systems
- Strong spoken and written communication

If you know some of this, even better:
- Experience of delivering software as part of a team
- Experience of Spring
- Knowledge of JavaScript and front-end technologies
- Knowledge of other JVM-based languages: Groovy, Scala, Clojure
- Knowledge of one or more scripting languages, such as Groovy or Python
- Knowledge of web services technologies such as SOAP, REST, and JSON
- Knowledge of relational database platforms: Oracle, SQL Server, MySQL
- Knowledge of NoSQL database platforms such as Cassandra or MongoDB
- Knowledge of message queuing systems such as Apache Kafka or RabbitMQ
- Contributions to open source projects

What we can offer you:
Alongside the opportunity to work with some of the most exciting brands around the world, we'll also prioritise your career development and help you grow your skills. We'll empower you to make a difference, allow you to be yourself, and respect who you are.

Our personality and behaviours:
We believe that we are what we do, not just what we say. Our shared values and behaviours show how to bring the VML Enterprise Solutions culture to life through the actions we all take every day:
- Connect Meaningfully
- Inspire Creatively
- Include Purposefully
- Approach Positively

Our brilliant, talented people are what makes VML Enterprise Solutions what we are. That's why we look for people who go beyond and always push our thinking to be better than yesterday.
AT VML Enterprise Solutions
Our Enterprise Solutions division houses strategic consultants, creative and technical architects and skilled developers and operators that together help some of the world's leading organisations to deliver outstanding digital experiences across all major routes to market worldwide: marketplaces, online retailers, D2C, B2B and social platforms. With over 4,200 experts in 55 operational centres across 34 countries, our capabilities span the entire buying journey from customer acquisition, through engagement, to conversion and loyalty, driving multi-channel growth for world-leading brands. We work with some of the most exciting brands such as The Coca-Cola Company, EY, Bosch, Unilever, Ford, DFS, Mercedes-Benz, Johnson & Johnson, Nestlé, Sainsbury's, Selfridges, Shell and Tiffany & Co. We've built over 500 platforms for brands and retailers and generate in excess of $29bn annually for our clients and work with over 50 strategic partners including Adobe, SAP, Salesforce, HCL, Shopify, Sitecore, BigCommerce, commercetools and Acquia. Our reputation is based on our people, and we believe we have some of the best in the business. As our business grows internationally, we're looking for new people to join us on our journey to inspire and take a key role in shaping some of the best commerce solutions, services, and websites in the world. Working as a team, no problem is insurmountable; we share in our client's successes and believe that anyone can show creative bravery no matter what their role is in the team. WPP (VML Enterprise Solutions) is an equal opportunity employer and considers applicants for all positions without discrimination or regard to characteristics. We are committed to fostering a culture of respect in which everyone feels they belong and has the same opportunities to progress in their careers. VML is a WPP Agency. For more information, please visit our website, and follow VML on our social channels via Instagram, LinkedIn, and X. When you click "Submit Application", this will send any information you add below to VML. Before you do this, we think it's a good idea to read through our Recruitment Privacy Policy. California residents should read our California Recruitment Privacy Notice. This explains what we do with your personal data when you apply for a role with us, and how you can update the information you have provided us with or how to remove it.

Posted 1 week ago

Apply

0 years

4 - 4 Lacs

Gurgaon

On-site


Who We Are:
At VML, we are a beacon of innovation and growth in an ever-evolving world. Our heritage is built upon a century of combined expertise, where creativity meets technology and diverse perspectives ignite inspiration. With the merger of VMLY&R and Wunderman Thompson, we have forged a new path as a growth partner that is part creative agency, part consultancy, and part technology powerhouse. Our global family now encompasses over 30,000 employees across 150+ offices in 64 markets, each contributing to a culture that values connection, belonging, and the power of differences. Our expertise spans the entire customer journey, offering deep insights in communications, commerce, consultancy, CRM, CX, data, production, and technology. We deliver end-to-end solutions that result in revolutionary work.

The opportunity:
An SAP Commerce (Hybris) Developer is required to work in an experienced team of software architects and developers, responsible for the design, development, and testing of quality code to meet customer-driven specifications.

What you'll be doing:
- Working within a project team to deliver high-quality code to deadlines.
- Guiding and instructing other developers in delivering a high-quality and robust SAP Commerce solution.
- Clearly communicating what is to be done, and the milestones achieved, to those within the project in an agreed manner.
- Realistically estimating the team's delivery timescales.
- Solving problems posed using the tools and materials provided, or suggesting alternatives where appropriate.
- Creating robust solutions using the tools and materials provided, or suggesting alternatives where appropriate.
- Taking responsibility from, and deputising for, the SAP Commerce architect.
- Leading technical discussions on problem solving and solution design.
- Mentoring junior developers.
- Being a motivated self-starter.

What we want from you:
- Extensive Hybris development experience (ideally 2011+)
- Extensive experience coding in the Java language (Java 17+)
- Experience guiding three or more SAP Commerce developers
- Experience working in the retail domain
- Experience with data structures
- Exposure to web technologies
- Object-oriented software design patterns experience
- Some understanding of HTML5, CSS, and JavaScript
- Familiarity with the Windows or Linux operating systems
- Strong spoken and written communication

If you know some of this, even better:
- Experience of delivering software as part of a team
- Experience of Spring
- Knowledge of JavaScript and front-end technologies
- Knowledge of other JVM-based languages: Groovy, Scala, Clojure
- Knowledge of one or more scripting languages, such as Groovy or Python
- Knowledge of web services technologies such as SOAP, REST, and JSON
- Knowledge of relational database platforms: Oracle, SQL Server, MySQL
- Knowledge of NoSQL database platforms such as Cassandra or MongoDB
- Knowledge of message queuing systems such as Apache Kafka or RabbitMQ
- Contributions to open source projects

What we can offer you:
Alongside the opportunity to work with some of the most exciting brands around the world, we'll also prioritise your career development and help you grow your skills. We'll empower you to make a difference, allow you to be yourself, and respect who you are.

Our personality and behaviours:
We believe that we are what we do, not just what we say.
Our shared values and behaviours show how to bring the VML Enterprise Solutions culture to life through the actions we all take every day:
- Connect Meaningfully
- Inspire Creatively
- Include Purposefully
- Approach Positively

Our brilliant, talented people are what makes VML Enterprise Solutions what we are. That's why we look for people who go beyond and always push our thinking to be better than yesterday.

AT VML Enterprise Solutions
Our Enterprise Solutions division houses strategic consultants, creative and technical architects and skilled developers and operators that together help some of the world's leading organisations to deliver outstanding digital experiences across all major routes to market worldwide: marketplaces, online retailers, D2C, B2B and social platforms. With over 4,200 experts in 55 operational centres across 34 countries, our capabilities span the entire buying journey from customer acquisition, through engagement, to conversion and loyalty, driving multi-channel growth for world-leading brands. We work with some of the most exciting brands such as The Coca-Cola Company, EY, Bosch, Unilever, Ford, DFS, Mercedes-Benz, Johnson & Johnson, Nestlé, Sainsbury's, Selfridges, Shell and Tiffany & Co. We've built over 500 platforms for brands and retailers and generate in excess of $29bn annually for our clients and work with over 50 strategic partners including Adobe, SAP, Salesforce, HCL, Shopify, Sitecore, BigCommerce, commercetools and Acquia. Our reputation is based on our people, and we believe we have some of the best in the business. As our business grows internationally, we're looking for new people to join us on our journey to inspire and take a key role in shaping some of the best commerce solutions, services, and websites in the world. Working as a team, no problem is insurmountable; we share in our client's successes and believe that anyone can show creative bravery no matter what their role is in the team. WPP (VML Enterprise Solutions) is an equal opportunity employer and considers applicants for all positions without discrimination or regard to characteristics. We are committed to fostering a culture of respect in which everyone feels they belong and has the same opportunities to progress in their careers. VML is a WPP Agency. For more information, please visit our website, and follow VML on our social channels via Instagram, LinkedIn, and X. When you click "Submit Application", this will send any information you add below to VML. Before you do this, we think it's a good idea to read through our Recruitment Privacy Policy. California residents should read our California Recruitment Privacy Notice. This explains what we do with your personal data when you apply for a role with us, and how you can update the information you have provided us with or how to remove it.

Posted 1 week ago

Apply

0 years

2 - 8 Lacs

Gurgaon

On-site


Gurgaon, Haryana, India | Job ID 767286

Join our Team

About this opportunity:
At Ericsson, we are offering a fantastic opportunity for a passionate and motivated Solution Architect to join our dynamic and diverse team. In this role, you will contribute to the design, construction, and management of Ericsson-based solutions. Familiarity with big data technologies and with agile methodology and practices is an integral part of the role.

What you will do:
- Manage the overall operations of multiple solutions deployed within the customer environment.
- Engage customers to secure agreement on the proposed solutions.
- Prepare technical presentations and proposals, and conduct walkthroughs with customers.
- Lead the technical risk analysis and assist the Program Manager/Program Director in the overall risk analysis process.
- Manage internal and external stakeholders to identify and bridge gaps.
- Identify new business opportunities.
- Lead the delivery team by assigning tasks and reviewing progress.
- Lead User Acceptance Testing (UAT) for the customer.
- Manage the L1, L2, L3, and CNS (Support) teams, as well as the customer's Operations and Maintenance (O&M) team.
- Identify scope creep and change requests during the delivery phase.
- Support pre-sales activities.
- Prepare effort estimation.
- Lead customer presentations and demonstrations.
- Interface with third-party providers (3PP) and original equipment manufacturers (OEMs) to evaluate and integrate their solutions into Ericsson's offerings.
- Act as a Solution Lifecycle Manager for the proposed or implemented solution.
- Proactively develop competence in new solution areas within the domain and technologies.
- Mentor solution integrators, developers, and system architects, providing a transparent and open environment for growth and development.

The skills you bring:
- Experience in architecting large products, microservice architecture, and database models.
- Strong experience in development within the NMS/EMS telecom domain.
- Understanding of OSS/NMS-related standards.
- Understanding of, and experience in, telecommunications technologies.
- Experience in network management concepts, including inventory management, fault management, performance management, and configuration management.
- Experience with network management protocols, including SNMP, XML, REST/JSON, TL1, and ASCII.
- Experience in the software development life cycle.
- Proficiency in software architecture, application design, development, and implementation using the technologies below:
  - Programming and scripting: Java, JavaScript, Shell, Python
  - Big data: Apache Spark, Scala
  - Microservices
  - CI/CD
  - Containerization/Docker
  - Databases: Postgres, MySQL, Cassandra, MongoDB, Elasticsearch
  - Tools: Git, Maven, Gradle, Docker, Jenkins, JMeter, JIRA

Why join Ericsson?
At Ericsson, you'll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what's possible. To build solutions never seen before to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply?

Posted 1 week ago

Apply

4.0 - 7.0 years

0 Lacs

Gurgaon, Haryana, India

On-site


Introduction

In this role, you will work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology. A career in IBM Consulting embraces long-term relationships and close collaboration with clients across the globe. You will collaborate with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio, including IBM Software and Red Hat. Curiosity and a constant quest for knowledge serve as the foundation to success in IBM Consulting. In your role, you will be supported by mentors and coaches who will encourage you to challenge the norm, investigate ideas outside of your role, and come up with creative solutions resulting in ground-breaking impact for a wide network of clients. Our culture of evolution and empathy centers on long-term career growth and learning opportunities in an environment that embraces your unique skills and experience.

Your Role And Responsibilities

A Data Engineer specializing in enterprise data platforms, experienced in building, managing, and optimizing data pipelines for large-scale environments, with expertise in big data technologies, distributed computing, data ingestion, and transformation frameworks. Proficient in Apache Spark, PySpark, Kafka, and Iceberg tables, with an understanding of how to design and implement scalable, high-performance data processing solutions.

What you'll do: As a Data Engineer – Data Platform Services, responsibilities include:

Data Ingestion & Processing: Designing and developing data pipelines to migrate workloads from IIAS to Cloudera Data Lake. Implementing streaming and batch data ingestion frameworks using Kafka and Apache Spark (PySpark). Working with IBM CDC and Universal Data Mover to manage data replication and movement.

Big Data & Data Lakehouse Management: Implementing Apache Iceberg tables for efficient data storage and retrieval. Managing distributed data processing with Cloudera Data Platform (CDP). Ensuring data lineage, cataloging, and governance for compliance with bank and regulatory policies.

Optimization & Performance Tuning: Optimizing Spark and PySpark jobs for performance and scalability. Implementing data partitioning, indexing, and caching to enhance query performance. Monitoring and troubleshooting pipeline failures and performance bottlenecks.

Security & Compliance: Ensuring secure data access, encryption, and masking using Thales CipherTrust. Implementing role-based access controls (RBAC) and data governance policies. Supporting metadata management and data quality initiatives.

Collaboration & Automation: Working closely with Data Scientists, Analysts, and DevOps teams to integrate data solutions. Automating data workflows using Airflow and implementing CI/CD pipelines with GitLab and Sonatype Nexus. Supporting Denodo-based data virtualization for seamless data access.

Preferred Education: Master's Degree

Required Technical And Professional Expertise

4-7 years of experience in big data engineering, data integration, and distributed computing.
Strong skills in Apache Spark, PySpark, Kafka, SQL, and Cloudera Data Platform (CDP).
Proficiency in Python or Scala for data processing.
Experience with data pipeline orchestration tools (Apache Airflow, Stonebranch UDM).
Understanding of data security, encryption, and compliance frameworks.

Preferred Technical And Professional Experience

Experience in banking or financial services data platforms.
Exposure to Denodo for data virtualization and DGraph for graph-based insights.
Familiarity with cloud data platforms (AWS, Azure, GCP).
Certifications in Cloudera Data Engineering, IBM Data Engineering, or AWS Data Analytics.
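
The Iceberg portion of a role like this usually comes down to configuring an Iceberg catalog for Spark and writing DataFrames into Iceberg tables. The posting emphasizes PySpark, but the same API exists in Scala, this page's focus language; the sketch below assumes Spark 3 with the Iceberg Spark runtime on the classpath, and the catalog name, paths, and table names are all illustrative rather than taken from the posting.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, current_date}

object IcebergIngest {
  def main(args: Array[String]): Unit = {
    // Assumes the Iceberg Spark runtime is on the classpath; the catalog
    // name "lakehouse" and all paths below are illustrative.
    val spark = SparkSession.builder()
      .appName("iceberg-ingest")
      .config("spark.sql.catalog.lakehouse", "org.apache.iceberg.spark.SparkCatalog")
      .config("spark.sql.catalog.lakehouse.type", "hadoop")
      .config("spark.sql.catalog.lakehouse.warehouse", "hdfs:///warehouse")
      .getOrCreate()

    // Batch-read a landed extract and stamp it with an ingest date.
    val src = spark.read
      .parquet("hdfs:///landing/transactions")
      .withColumn("ingest_date", current_date())

    // Create (or replace) an Iceberg table, partitioned so queries can prune.
    src.writeTo("lakehouse.raw.transactions")
      .partitionedBy(col("ingest_date"))
      .createOrReplace()

    spark.stop()
  }
}
```

Partitioning on an ingest-date column is one common choice here: it keeps incremental loads cheap and lets downstream readers skip irrelevant files.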

Posted 1 week ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site


About The Role

Grade Level (for internal use): 11

The Team: As a member of the Data Transformation team you will work on building ML-powered products and capabilities to power natural language understanding, data extraction, information retrieval and data sourcing solutions for S&P Global Market Intelligence and our clients. You will spearhead development of production-ready AI products and pipelines while leading by example in a highly engaging work environment. You will work in a (truly) global team and be encouraged to take thoughtful risks and show self-initiative.

The Impact: The Data Transformation team has already delivered breakthrough products and significant business value over the last 3 years. In this role you will be developing our next generation of new products while enhancing existing ones, aimed at solving high-impact business problems.

What's In It For You

Be a part of a global company and build solutions at enterprise scale
Collaborate with a highly skilled and technically strong team
Contribute to solving high-complexity, high-impact problems

Key Responsibilities

Build production-ready data acquisition and transformation pipelines from ideation to deployment
Be a hands-on problem solver and developer helping to extend and manage the data platforms
Architect and lead the development of end-to-end data ingestion and processing pipelines to support downstream ML workflows
Apply best practices in data modeling and building ETL pipelines (streaming and batch) using cloud-native solutions
Mentor junior and mid-level data engineers and provide technical guidance and best practices

What We're Looking For

7-10 years of professional software work experience
Expertise in Python and Apache Spark
OOP design patterns, test-driven development and enterprise system design
SQL (any variant; bonus if this is a big data variant)
Proficiency in optimizing data flows for performance, storage, and cost efficiency
Linux OS (e.g. bash toolset and other utilities)
Version control system experience with Git, GitHub, or Azure DevOps
Problem-solving and debugging skills
Software craftsmanship, adherence to Agile principles and taking pride in writing good code
Techniques to communicate change to non-technical people

Nice To Have

Core Java 17+, preferably Java 21+, and associated toolchain
DevOps with a keen interest in automation
Apache Avro
Apache Kafka
Kubernetes
Cloud expertise (AWS and GCP preferably)
Other JVM-based languages, e.g. Kotlin, Scala
C#, in particular .NET Core

Our Purpose

Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People

We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it, we are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership

At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits

We take care of you, so you can take care of business. We care about our people. That's why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include:

Health & Wellness: Health care coverage designed for the mind and body.
Flexible Downtime: Generous time off helps keep you energized for your time on.
Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference.

For more information on benefits by country, visit https://spgbenefits.com/benefit-summaries

Global Hiring And Opportunity At S&P Global

At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Equal Opportunity Employer

S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.2 - Middle Professional Tier II (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning)

Job ID: 315684
Posted On: 2025-05-20
Location: Gurgaon, Haryana, India
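
The role's stack is Python with Apache Spark, and the "streaming and batch" ETL responsibility has a fairly standard shape in the Spark API; the sketch below shows the batch case in Scala, this page's focus language. All paths, column names, and the schema are hypothetical, not taken from the posting.

```scala
import org.apache.spark.sql.{SparkSession, functions => F}

object DailyEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("daily-etl").getOrCreate()
    import spark.implicits._

    // Acquire: raw records land as CSV (path and schema are illustrative).
    val raw = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("/data/raw/filings")

    // Transform: drop malformed rows and derive a partition column.
    val cleaned = raw
      .filter($"company_id".isNotNull)
      .withColumn("filed_date", F.to_date($"filed_at"))

    // Load: write partitioned Parquet for downstream ML workflows.
    cleaned.write
      .mode("overwrite")
      .partitionBy("filed_date")
      .parquet("/data/curated/filings")

    spark.stop()
  }
}
```

The streaming variant swaps `spark.read`/`write` for `readStream`/`writeStream` with a checkpoint location; the transformation logic in the middle stays largely the same.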

Posted 1 week ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site


About The Role

Grade Level (for internal use): 11

The Team: As a member of the Data Transformation team you will work on building ML-powered products and capabilities to power natural language understanding, data extraction, information retrieval and data sourcing solutions for S&P Global Market Intelligence and our clients. You will spearhead development of production-ready AI products and pipelines while leading by example in a highly engaging work environment. You will work in a (truly) global team and be encouraged to take thoughtful risks and show self-initiative.

The Impact: The Data Transformation team has already delivered breakthrough products and significant business value over the last 3 years. In this role you will be developing our next generation of new products while enhancing existing ones, aimed at solving high-impact business problems.

What's In It For You

Be a part of a global company and build solutions at enterprise scale
Collaborate with a highly skilled and technically strong team
Contribute to solving high-complexity, high-impact problems

Key Responsibilities

Build production-ready data acquisition and transformation pipelines from ideation to deployment
Be a hands-on problem solver and developer helping to extend and manage the data platforms
Architect and lead the development of end-to-end data ingestion and processing pipelines to support downstream ML workflows
Apply best practices in data modeling and building ETL pipelines (streaming and batch) using cloud-native solutions
Mentor junior and mid-level data engineers and provide technical guidance and best practices

What We're Looking For

7-10 years of professional software work experience
Expertise in Python and Apache Spark
OOP design patterns, test-driven development and enterprise system design
SQL (any variant; bonus if this is a big data variant)
Proficiency in optimizing data flows for performance, storage, and cost efficiency
Linux OS (e.g. bash toolset and other utilities)
Version control system experience with Git, GitHub, or Azure DevOps
Problem-solving and debugging skills
Software craftsmanship, adherence to Agile principles and taking pride in writing good code
Techniques to communicate change to non-technical people

Nice To Have

Core Java 17+, preferably Java 21+, and associated toolchain
DevOps with a keen interest in automation
Apache Avro
Apache Kafka
Kubernetes
Cloud expertise (AWS and GCP preferably)
Other JVM-based languages, e.g. Kotlin, Scala
C#, in particular .NET Core

Our Purpose

Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People

We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it, we are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership

At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits

We take care of you, so you can take care of business. We care about our people. That's why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include:

Health & Wellness: Health care coverage designed for the mind and body.
Flexible Downtime: Generous time off helps keep you energized for your time on.
Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference.

For more information on benefits by country, visit https://spgbenefits.com/benefit-summaries

Global Hiring And Opportunity At S&P Global

At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Equal Opportunity Employer

S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.2 - Middle Professional Tier II (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning)

Job ID: 315683
Posted On: 2025-05-20
Location: Gurgaon, Haryana, India

Posted 1 week ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site


About The Role

Grade Level (for internal use): 10

The Team: As a member of the Data Transformation team you will work on building ML-powered products and capabilities to power natural language understanding, data extraction, information retrieval and data sourcing solutions for S&P Global Market Intelligence and our clients. You will spearhead development of production-ready AI products and pipelines while leading by example in a highly engaging work environment. You will work in a (truly) global team and be encouraged to take thoughtful risks and show self-initiative.

The Impact: The Data Transformation team has already delivered breakthrough products and significant business value over the last 3 years. In this role you will be developing our next generation of new products while enhancing existing ones, aimed at solving high-impact business problems.

What's In It For You

Be a part of a global company and build solutions at enterprise scale
Collaborate with a highly skilled and technically strong team
Contribute to solving high-complexity, high-impact problems

Key Responsibilities

Build production-ready data acquisition and transformation pipelines from ideation to deployment
Be a hands-on problem solver and developer helping to extend and manage the data platforms
Apply best practices in data modeling and building ETL pipelines (streaming and batch) using cloud-native solutions

What We're Looking For

3-5 years of professional software work experience
Expertise in Python and Apache Spark
OOP design patterns, test-driven development and enterprise system design
Experience building data processing workflows and APIs using frameworks such as FastAPI, Flask etc.
Proficiency in API integration, experience working with REST APIs and integrating external and internal data sources
SQL (any variant; bonus if this is a big data variant)
Linux OS (e.g. bash toolset and other utilities)
Version control system experience with Git, GitHub, or Azure DevOps
Problem-solving and debugging skills
Software craftsmanship, adherence to Agile principles and taking pride in writing good code
Techniques to communicate change to non-technical people

Nice To Have

Core Java 17+, preferably Java 21+, and associated toolchain
DevOps with a keen interest in automation
Apache Avro
Apache Kafka
Kubernetes
Cloud expertise (AWS and GCP preferably)
Other JVM-based languages, e.g. Kotlin, Scala
C#, in particular .NET Core
Data warehouses (e.g., Redshift, Snowflake, BigQuery)

Our Purpose

Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People

We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it, we are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership

At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits

We take care of you, so you can take care of business. We care about our people. That's why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include:

Health & Wellness: Health care coverage designed for the mind and body.
Flexible Downtime: Generous time off helps keep you energized for your time on.
Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference.

For more information on benefits by country, visit https://spgbenefits.com/benefit-summaries

Global Hiring And Opportunity At S&P Global

At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Equal Opportunity Employer

S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning)

Job ID: 315685
Posted On: 2025-05-20
Location: Gurgaon, Haryana, India

Posted 1 week ago

Apply

4.0 - 6.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Greetings from TCS!

TCS is hiring for Big Data (PySpark & Scala)

Location: Chennai
Desired Experience Range: 4 - 6 Years

Must-Have
• PySpark
• Hive

Good-to-Have
• Spark
• HBase
• DQ tool
• Agile Scrum experience
• Exposure in data ingestion from disparate sources onto a Big Data platform

Thanks,
Anshika
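
The must-have stack here is Spark with Hive. As a small, hedged sketch of Hive-backed ingestion through the Spark API (shown in Scala for consistency with this page; the source path and table names are illustrative):

```scala
import org.apache.spark.sql.SparkSession

object HiveIngest {
  def main(args: Array[String]): Unit = {
    // enableHiveSupport lets Spark read and write Hive-managed tables.
    val spark = SparkSession.builder()
      .appName("hive-ingest")
      .enableHiveSupport()
      .getOrCreate()

    // Ingest an extract from one of the disparate sources (path is illustrative).
    val feed = spark.read.option("header", "true").csv("/landing/crm/accounts.csv")

    // Land it as a Hive table for downstream consumers.
    feed.write.mode("append").saveAsTable("staging.crm_accounts")

    spark.stop()
  }
}
```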

Posted 1 week ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote


When you join Verizon

You want more out of a career. A place to share your ideas freely — even if they're daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere and always. Want in? Join the #VTeamLife.

What You'll Be Doing…

We are looking for data engineers who can work with world-class team members to help drive the telecom business to its full potential. We are building data products and assets for the telecom wireless and wireline business, including consumer analytics and telecom network performance and service assurance analytics. We are working on cutting-edge technologies like digital twin to build these analytical platforms and provide data support for varied AI/ML implementations. As a data engineer you will collaborate with business product owners, coaches, industry-renowned data scientists and system architects to develop strategic data solutions from sources that include batch, file and data streams.

As a Data Engineer with ETL/ELT expertise for our growing data platform and analytics teams, you will understand and enable the required data sets from different sources, both structured and unstructured, into our data warehouse and data lake, with real-time streaming and/or batch processing, to generate insights and perform analytics for business teams within Verizon.

Understanding the business requirements and converting them to technical design.
Working on data ingestion, preparation and transformation.
Developing data streaming applications.
Debugging production failures and identifying the solution.
Working on ETL/ELT development.
Understanding the DevOps process and contributing to DevOps pipelines.

What We're Looking For...

You're curious about new technologies and the game-changing possibilities they create. You like to stay up to date with the latest trends and apply your technical expertise to solving business problems.

You'll need to have:

Bachelor's degree or four or more years of work experience.
Experience with data warehouse concepts and the data management life cycle.
Experience in any DBMS.
Experience in shell scripting, Spark, Scala.
Knowledge of GCP/BigQuery.
Experience in DevOps.

Even better if you have one or more of the following:

Three or more years of relevant experience.
Any relevant certification as an ETL/ELT developer.
Certification in GCP Data Engineer.
Accuracy and attention to detail.
Good problem-solving, analytical, and research capabilities.
Good verbal and written communication.
Experience presenting to and influencing stakeholders.
Experience driving a small team of 2 or more members for technical delivery.

#AI&D

Where you'll be working: In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.

Scheduled Weekly Hours: 40

Equal Employment Opportunity: Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.
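
As a rough sketch of the "developing data streaming applications" responsibility, assuming Spark Structured Streaming with Kafka as the source (the posting lists Spark and Scala; the broker address, topic, and paths below are illustrative, not from the posting):

```scala
import org.apache.spark.sql.SparkSession

object StreamIngest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("stream-ingest").getOrCreate()

    // Read a stream of events from Kafka (broker and topic are illustrative).
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "network-events")
      .load()
      .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)", "timestamp")

    // Land the raw stream as Parquet; the checkpoint enables recovery on restart.
    val query = events.writeStream
      .format("parquet")
      .option("path", "/data/streams/network-events")
      .option("checkpointLocation", "/chk/network-events")
      .start()

    query.awaitTermination()
  }
}
```

The checkpoint location is what lets the job resume from its last committed offsets after a failure, which is the usual first requirement in debugging production streaming failures.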

Posted 1 week ago

Apply

2.0 - 4.0 years

8 - 12 Lacs

Mumbai

Work from Office


The SAS to Databricks Migration Developer will be responsible for migrating existing SAS code, data processes, and workflows to the Databricks platform. This role requires expertise in both SAS and Databricks, with a focus on converting SAS logic into scalable PySpark and Python code. The developer will design, implement, and optimize data pipelines, ensuring seamless integration and functionality within the Databricks environment. Collaboration with various teams is essential to understand data requirements and deliver solutions that meet business needs
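
The core of such a migration is re-expressing SAS DATA-step logic as DataFrame transformations. The posting targets PySpark, but the Spark API is nearly identical across languages; as a minimal sketch in Scala (this page's focus language), a hypothetical SAS step that filters rows and derives a column might translate as follows. The table and column names are invented for illustration.

```scala
// SAS original (hypothetical):
//   data work.high_value;
//     set raw.orders;
//     where amount > 1000;
//     margin = amount - cost;
//   run;

import org.apache.spark.sql.SparkSession

object SasOrdersMigration {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("sas-migration")
      .enableHiveSupport()
      .getOrCreate()
    import spark.implicits._

    // SAS:  set raw.orders;  -> read the source table from the metastore.
    val orders = spark.table("raw.orders")

    // SAS:  where amount > 1000;  margin = amount - cost;
    val highValue = orders
      .filter($"amount" > 1000)
      .withColumn("margin", $"amount" - $"cost")

    // SAS:  data work.high_value; ... run;  -> persist as a managed table.
    highValue.write.mode("overwrite").saveAsTable("work.high_value")

    spark.stop()
  }
}
```

The harder parts of real migrations are the constructs with no direct DataFrame equivalent, such as row-by-row RETAIN logic and macro-generated code, which typically need window functions or explicit redesign.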

Posted 1 week ago

Apply

Exploring Scala Jobs in India

Scala is a popular programming language that is widely used in India, especially in the tech industry. Job seekers looking for opportunities in Scala can find a variety of roles across different cities in the country. In this article, we will dive into the Scala job market in India and provide valuable insights for job seekers.

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Chennai
  5. Mumbai

These cities are known for their thriving tech ecosystem and have a high demand for Scala professionals.

Average Salary Range

The salary range for Scala professionals in India varies based on experience levels. Entry-level Scala developers can expect to earn around INR 6-8 lakhs per annum, while experienced professionals with 5+ years of experience can earn upwards of INR 15 lakhs per annum.

Career Path

In the Scala job market, a typical career path may look like:

  1. Junior Developer
  2. Scala Developer
  3. Senior Developer
  4. Tech Lead

As professionals gain more experience and expertise in Scala, they can progress to higher roles with increased responsibilities.

Related Skills

In addition to Scala expertise, employers often look for candidates with the following skills:

  • Java
  • Spark
  • Akka
  • Play Framework
  • Functional programming concepts

Having a good understanding of these related skills can enhance a candidate's profile and increase their chances of landing a Scala job.

Interview Questions

Here are 25 interview questions that you may encounter when applying for Scala roles; a short Scala sketch after the list illustrates a few of the basic concepts:

  • What is Scala and why is it used? (basic)
  • Explain the difference between val and var in Scala. (basic)
  • What is pattern matching in Scala? (medium)
  • What are higher-order functions in Scala? (medium)
  • How does Scala support functional programming? (medium)
  • What is a case class in Scala? (basic)
  • Explain the concept of currying in Scala. (advanced)
  • What is the difference between map and flatMap in Scala? (medium)
  • How does Scala handle null values? (medium)
  • What is a trait in Scala and how is it different from an abstract class? (medium)
  • Explain the concept of implicits in Scala. (advanced)
  • What is the Akka toolkit and how is it used in Scala? (medium)
  • How does Scala handle concurrency? (advanced)
  • Explain the concept of lazy evaluation in Scala. (advanced)
  • What is the difference between List and Seq in Scala? (medium)
  • How does Scala handle exceptions? (medium)
  • What are Futures in Scala and how are they used for asynchronous programming? (advanced)
  • Explain the concept of type inference in Scala. (medium)
  • What is the difference between object and class in Scala? (basic)
  • How can you create a Singleton object in Scala? (basic)
  • What is a higher-kinded type in Scala? (advanced)
  • Explain the concept of for-comprehensions in Scala. (medium)
  • How does Scala support immutability? (medium)
  • What are the advantages of using Scala over Java? (basic)
  • How do you implement pattern matching in Scala? (medium)
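
A few of the basic concepts above (val vs var, case classes, pattern matching, higher-order functions, and Option for null handling) can be demonstrated in a short, self-contained sketch; all names are illustrative:

```scala
object InterviewBasics extends App {
  // val is an immutable binding; var can be reassigned.
  val fixed = 10
  var mutable = 10
  mutable += 1

  // A case class gives equality, copy, and pattern-matching support for free.
  case class User(name: String, age: Int)

  // Pattern matching destructures values and replaces chained if/else.
  def describe(u: User): String = u match {
    case User(_, age) if age < 18 => "minor"
    case User(name, _)            => s"adult: $name"
  }

  // A higher-order function takes (or returns) another function.
  def twice(f: Int => Int): Int => Int = x => f(f(x))
  val addFour = twice(_ + 2)

  // Option models absent values instead of null.
  val maybeUser: Option[User] = Some(User("Asha", 30))
  println(maybeUser.map(describe).getOrElse("no user")) // adult: Asha
  println(addFour(1))                                   // 5
}
```

Being able to explain why each of these is preferred over its Java-style counterpart (mutable fields, null checks, instanceof chains) is usually what interviewers are probing for.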

Closing Remark

As you explore Scala jobs in India, remember to showcase your expertise in Scala and related skills during interviews. Prepare well, stay confident, and you'll be on your way to a successful career in Scala. Good luck!
