
70 Scala Programming Jobs - Page 2

JobPe aggregates listings for easy application access, but you apply directly on the original job portal.

8.0 - 11.0 years

45 - 50 Lacs

Noida, Kolkata, Chennai

Work from Office

Dear Candidate,

We are hiring a Scala Developer to work on scalable data pipelines, distributed systems, and backend services. This role is perfect for candidates passionate about functional programming and big data.

Key Responsibilities:
- Develop data-intensive applications using Scala.
- Work with frameworks like Akka, Play, or Spark.
- Design and maintain scalable microservices and ETL jobs.
- Collaborate with data engineers and platform teams.
- Write clean, testable, and well-documented code.

Required Skills & Qualifications:
- Strong grounding in Scala, functional programming, and JVM internals.
- Experience with Apache Spark, Kafka, or Cassandra.
- Familiarity with SBT, Cats, or Scalaz.
- Knowledge of CI/CD, Docker, and cloud deployment tools.

Soft Skills:
- Strong troubleshooting and problem-solving skills.
- Ability to work independently and in a team.
- Excellent communication and documentation skills.

Note: If interested, please share your updated resume and a preferred time for a discussion. If shortlisted, our HR team will contact you.

Kandi Srinivasa
Delivery Manager, Integra Technologies
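
For context, a minimal hedged sketch of the kind of Scala/Spark ETL job this role describes; the bucket paths and column names are illustrative assumptions, not details from the posting:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Minimal batch ETL sketch: read raw CSV, clean it, write partitioned Parquet.
// All paths and column names below are illustrative assumptions.
object OrdersEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("orders-etl")
      .getOrCreate()

    val raw = spark.read
      .option("header", "true")
      .csv("s3a://example-bucket/raw/orders/")        // hypothetical input path

    val cleaned = raw
      .filter(col("order_id").isNotNull)              // drop malformed rows
      .withColumn("amount", col("amount").cast("double"))
      .withColumn("ingest_date", current_date())

    cleaned.write
      .mode("overwrite")
      .partitionBy("ingest_date")
      .parquet("s3a://example-bucket/curated/orders/") // hypothetical output path

    spark.stop()
  }
}
```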

Posted 1 month ago

Apply

5.0 - 8.0 years

14 - 24 Lacs

Hyderabad, Pune

Hybrid

We are looking for a highly skilled Big Data Scala Developer with solid experience in Apache Spark to join our data engineering team.

Experience: 5 to 8 years
Location: Pune, Hyderabad
Mandatory skills: Scala development, Spark, PySpark

Key Responsibilities:
- Design, develop, and optimize batch and streaming data pipelines using Scala and Apache Spark.
- Write efficient, reusable, and testable code following functional programming best practices.
- Work with large-scale datasets from a variety of sources (e.g., Kafka, Hive, S3, Parquet).
- Collaborate with data scientists, data analysts, and DevOps to ensure robust and scalable pipelines.
- Tune Spark jobs for performance and resource efficiency.
- Implement data quality checks, logging, and error-handling mechanisms.

Interested candidates, share your CV at himani.girnar@alikethoughts.com with the details below:
- Candidate's name
- Email and alternate email ID
- Contact and alternate contact no.
- Total experience
- Relevant experience
- Current organization
- Notice period
- CCTC
- ECTC
- Current location
- Preferred location
- PAN card no.
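
As a hedged illustration of the streaming side of this role (the broker address, topic, and storage paths are invented placeholders), a minimal Spark Structured Streaming job in Scala might look like:

```scala
import org.apache.spark.sql.SparkSession

// Structured Streaming sketch: consume a Kafka topic and append to Parquet.
// Broker, topic, and paths are illustrative placeholders.
object ClicksStream {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("clicks-stream")
      .getOrCreate()

    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092") // hypothetical broker
      .option("subscribe", "clicks")                    // hypothetical topic
      .load()
      .selectExpr("CAST(value AS STRING) AS json", "timestamp")

    val query = events.writeStream
      .format("parquet")
      .option("path", "s3a://example-bucket/streams/clicks/")
      .option("checkpointLocation", "s3a://example-bucket/checkpoints/clicks/")
      .outputMode("append")
      .start()

    query.awaitTermination()
  }
}
```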

Posted 1 month ago

Apply

5.0 - 8.0 years

9 - 14 Lacs

Pune

Work from Office

Role Purpose:
The purpose of the role is to support process delivery by ensuring the daily performance of the Production Specialists, resolving technical escalations, and developing technical capability within the Production Specialists.

Do:

Oversee and support the process by reviewing daily transactions on performance parameters:
- Review the performance dashboard and the scores for the team
- Support the team in improving performance parameters by providing technical support and process guidance
- Record, track, and document all queries received, problem-solving steps taken, and all successful and unsuccessful resolutions
- Ensure standard processes and procedures are followed to resolve all client queries
- Resolve client queries as per the SLAs defined in the contract
- Develop an understanding of the process/product for the team members to facilitate better client interaction and troubleshooting
- Document and analyze call logs to spot the most frequent trends and prevent future problems
- Identify red flags and escalate serious client issues to the Team Leader in cases of untimely resolution
- Ensure all product information and disclosures are given to clients before and after the call/email requests
- Avoid legal challenges by monitoring compliance with service agreements

Handle technical escalations through effective diagnosis and troubleshooting of client queries:
- Manage and resolve technical roadblocks/escalations as per SLA and quality requirements
- If unable to resolve an issue, escalate it to TA & SES in a timely manner
- Provide product support and resolution to clients by performing question diagnosis while guiding users through step-by-step solutions
- Troubleshoot all client queries in a user-friendly, courteous, and professional manner
- Offer alternative solutions to clients (where appropriate) with the objective of retaining customers' and clients' business
- Organize ideas and effectively communicate oral messages appropriate to listeners and situations
- Follow up and make scheduled callbacks to customers to record feedback and ensure compliance with contract SLAs

Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client:
- Mentor and guide Production Specialists on improving technical knowledge
- Collate trainings to be conducted as triage to bridge the skill gaps identified through interviews with the Production Specialists
- Develop and conduct trainings (triages) within products for Production Specialists as per target
- Inform the client about the triages being conducted
- Undertake product trainings to stay current with product features, changes, and updates
- Enroll in product-specific and any other trainings per client requirements/recommendations
- Identify and document the most common problems and recommend appropriate resolutions to the team
- Update job knowledge by participating in self-learning opportunities and maintaining personal networks

Deliver:
No. | Performance Parameter | Measure
1 | Process | No. of cases resolved per day; compliance to process and quality standards; meeting process-level SLAs; Pulse score; customer feedback; NSAT/ESAT
2 | Team Management | Productivity, efficiency, absenteeism
3 | Capability Development | Triages completed; Technical Test performance

Mandatory Skills: Scala programming.
Experience: 5-8 years.

Posted 1 month ago

Apply

5.0 - 8.0 years

4 - 8 Lacs

Pune

Work from Office

Role Purpose:
The purpose of the role is to support process delivery by ensuring the daily performance of the Production Specialists, resolving technical escalations, and developing technical capability within the Production Specialists.

Do:

Oversee and support the process by reviewing daily transactions on performance parameters:
- Review the performance dashboard and the scores for the team
- Support the team in improving performance parameters by providing technical support and process guidance
- Record, track, and document all queries received, problem-solving steps taken, and all successful and unsuccessful resolutions
- Ensure standard processes and procedures are followed to resolve all client queries
- Resolve client queries as per the SLAs defined in the contract
- Develop an understanding of the process/product for the team members to facilitate better client interaction and troubleshooting
- Document and analyze call logs to spot the most frequent trends and prevent future problems
- Identify red flags and escalate serious client issues to the Team Leader in cases of untimely resolution
- Ensure all product information and disclosures are given to clients before and after the call/email requests
- Avoid legal challenges by monitoring compliance with service agreements

Handle technical escalations through effective diagnosis and troubleshooting of client queries:
- Manage and resolve technical roadblocks/escalations as per SLA and quality requirements
- If unable to resolve an issue, escalate it to TA & SES in a timely manner
- Provide product support and resolution to clients by performing question diagnosis while guiding users through step-by-step solutions
- Troubleshoot all client queries in a user-friendly, courteous, and professional manner
- Offer alternative solutions to clients (where appropriate) with the objective of retaining customers' and clients' business
- Organize ideas and effectively communicate oral messages appropriate to listeners and situations
- Follow up and make scheduled callbacks to customers to record feedback and ensure compliance with contract SLAs

Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client:
- Mentor and guide Production Specialists on improving technical knowledge
- Collate trainings to be conducted as triage to bridge the skill gaps identified through interviews with the Production Specialists
- Develop and conduct trainings (triages) within products for Production Specialists as per target
- Inform the client about the triages being conducted
- Undertake product trainings to stay current with product features, changes, and updates
- Enroll in product-specific and any other trainings per client requirements/recommendations
- Identify and document the most common problems and recommend appropriate resolutions to the team
- Update job knowledge by participating in self-learning opportunities and maintaining personal networks

Deliver:
No. | Performance Parameter | Measure
1 | Process | No. of cases resolved per day; compliance to process and quality standards; meeting process-level SLAs; Pulse score; customer feedback; NSAT/ESAT
2 | Team Management | Productivity, efficiency, absenteeism
3 | Capability Development | Triages completed; Technical Test performance

Mandatory Skills: Scala programming.
Experience: 5-8 years.

Posted 1 month ago

Apply

5.0 - 8.0 years

9 - 14 Lacs

Bengaluru

Work from Office

Role Purpose:
The purpose of the role is to support process delivery by ensuring the daily performance of the Production Specialists, resolving technical escalations, and developing technical capability within the Production Specialists.

Do:

Oversee and support the process by reviewing daily transactions on performance parameters:
- Review the performance dashboard and the scores for the team
- Support the team in improving performance parameters by providing technical support and process guidance
- Record, track, and document all queries received, problem-solving steps taken, and all successful and unsuccessful resolutions
- Ensure standard processes and procedures are followed to resolve all client queries
- Resolve client queries as per the SLAs defined in the contract
- Develop an understanding of the process/product for the team members to facilitate better client interaction and troubleshooting
- Document and analyze call logs to spot the most frequent trends and prevent future problems
- Identify red flags and escalate serious client issues to the Team Leader in cases of untimely resolution
- Ensure all product information and disclosures are given to clients before and after the call/email requests
- Avoid legal challenges by monitoring compliance with service agreements

Handle technical escalations through effective diagnosis and troubleshooting of client queries:
- Manage and resolve technical roadblocks/escalations as per SLA and quality requirements
- If unable to resolve an issue, escalate it to TA & SES in a timely manner
- Provide product support and resolution to clients by performing question diagnosis while guiding users through step-by-step solutions
- Troubleshoot all client queries in a user-friendly, courteous, and professional manner
- Offer alternative solutions to clients (where appropriate) with the objective of retaining customers' and clients' business
- Organize ideas and effectively communicate oral messages appropriate to listeners and situations
- Follow up and make scheduled callbacks to customers to record feedback and ensure compliance with contract SLAs

Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client:
- Mentor and guide Production Specialists on improving technical knowledge
- Collate trainings to be conducted as triage to bridge the skill gaps identified through interviews with the Production Specialists
- Develop and conduct trainings (triages) within products for Production Specialists as per target
- Inform the client about the triages being conducted
- Undertake product trainings to stay current with product features, changes, and updates
- Enroll in product-specific and any other trainings per client requirements/recommendations
- Identify and document the most common problems and recommend appropriate resolutions to the team
- Update job knowledge by participating in self-learning opportunities and maintaining personal networks

Deliver:
No. | Performance Parameter | Measure
1 | Process | No. of cases resolved per day; compliance to process and quality standards; meeting process-level SLAs; Pulse score; customer feedback; NSAT/ESAT
2 | Team Management | Productivity, efficiency, absenteeism
3 | Capability Development | Triages completed; Technical Test performance

Mandatory Skills: Scala programming.
Experience: 5-8 years.

Posted 1 month ago

Apply

3.0 - 5.0 years

3 - 7 Lacs

Pune

Work from Office

Long Description:
- 5+ years of software development experience in building and shipping production-grade software.
- Strong knowledge of Scala.
- Proven development experience in software engineering on the Azure platform using CI/CD techniques with Scala.
- Familiarity with Java/JVM/SQL.
- Experience with Kafka message streaming.
- Good to have: knowledge of Akka and a functional library such as ZIO or Cats.
- Familiarity with the workings of distributed systems.
- Passion for delivering a high-quality, delightful user experience; strong problem-solving, debugging, and troubleshooting skills.
- Ability to ramp up quickly on new technologies and adopt solutions from within the company or from the open-source community.
- Design, implement, and operate end-to-end product experiences; drive feature velocity, modularity, component reuse, and performance/reliability, in close cooperation with multiple geographically distributed Product, Design, User Research, and Engineering teams to deliver complex, large-scale projects.
- Own complete end-to-end ownership through the software lifecycle, with a strong focus on solution, code quality, and efficiency.

Deliver:
No. | Performance Parameter | Measure
1 | Continuous integration, deployment & monitoring of software | 100% error-free onboarding & implementation; throughput %; adherence to the schedule/release plan
2 | Quality & CSAT | On-time delivery; manage software; troubleshoot queries; customer experience; completion of assigned certifications for skill upgradation
3 | MIS & reporting | 100% on-time MIS & report generation

Mandatory Skills: Scala programming.
Experience: 3-5 years.
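
Since the posting names functional libraries such as ZIO and Cats as good to have, here is a hedged sketch using Cats' ValidatedNel to accumulate input errors; the Event type and validation rules are invented for illustration:

```scala
import cats.data.ValidatedNel
import cats.syntax.all._

// Sketch: accumulate independent validation errors with Cats' ValidatedNel.
// The Event type and the rules below are illustrative assumptions.
object EventValidation {
  final case class Event(id: Long, payload: String)

  def validId(raw: String): ValidatedNel[String, Long] =
    raw.toLongOption.toValidNel(s"invalid id: $raw")

  def validPayload(raw: String): ValidatedNel[String, String] =
    if (raw.nonEmpty) raw.validNel else "empty payload".invalidNel

  // mapN combines both checks, collecting every failure instead of
  // short-circuiting on the first one.
  def parse(idRaw: String, payloadRaw: String): ValidatedNel[String, Event] =
    (validId(idRaw), validPayload(payloadRaw)).mapN(Event.apply)

  def main(args: Array[String]): Unit = {
    println(parse("42", "goal scored")) // Valid(Event(42,goal scored))
    println(parse("x", ""))             // Invalid(NonEmptyList(invalid id: x, empty payload))
  }
}
```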

Posted 1 month ago

Apply

5.0 - 10.0 years

15 - 27 Lacs

Pune

Work from Office

Hi,

Wishes from GSN! It is a pleasure connecting with you.

We have been in corporate search services, identifying and bringing in stellar, talented professionals for our reputed IT and non-IT clients in India, and have been successfully serving our clients' needs for the last 20 years. We have been mandated by one of our prestigious MNC clients to identify Scala Developer professionals for Pune. Kindly find the required details below.

******** Looking for SHORT JOINERs ********

Position: Permanent
Mandatory Skill: Scala Developer
Experience Range: 5+ years
Job Role: Senior Developer / Tech Lead
Location: Pune only
Work Mode: WFO, all 5 days

Job Description:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related quantitative field.
- Minimum 5 years of professional experience in data engineering, with a strong focus on big data technologies.
- Proficiency in Scala for developing big data applications and transformations, especially with Apache Spark.
- Expert-level proficiency in SQL; ability to write complex queries, optimize performance, and understand database internals.
- Extensive hands-on experience with Apache Spark (Spark SQL, DataFrames, RDDs) for large-scale data processing and analytics.

******** Looking for SHORT JOINERs ********

Kindly apply ONLINE for an IMMEDIATE response.

Thanks & Regards,
KAVIYA
GSN HR Pvt Ltd
Mob: 9150016092
Email: Shobana@gsnhr.net
Web: www.gsnhr.net
Google review: https://g.co/kgs/UAsF9W
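
As a hedged illustration of the Spark SQL/DataFrame proficiency this mandate asks for (the sales table, its columns, and the top-3 cutoff are invented for the example):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

// Sketch: a top-N aggregation with a window function over a DataFrame.
// The "sales" table and its columns are hypothetical.
object TopProducts {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("top-products").getOrCreate()

    val sales = spark.table("sales") // assumed pre-registered table

    // Rank products within each region by aggregated revenue.
    val byRegion = Window.partitionBy("region").orderBy(desc("revenue"))

    val top3 = sales
      .groupBy("region", "product")
      .agg(sum("amount").as("revenue"))
      .withColumn("rank", row_number().over(byRegion))
      .filter(col("rank") <= 3)

    top3.show()
    spark.stop()
  }
}
```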

Posted 1 month ago

Apply

2.0 - 6.0 years

7 - 11 Lacs

Bengaluru

Work from Office

About The Role
Job Title: Senior Data Engineer

As a Senior Data Engineer, you will play a key role in designing and implementing data solutions at Kotak811. You will be responsible for leading data engineering projects, mentoring junior team members, and collaborating with cross-functional teams to deliver high-quality and scalable data infrastructure. Your expertise in data architecture, performance optimization, and data integration will be instrumental in driving the success of our data initiatives.

Responsibilities:

1. Data Architecture and Design
a. Design and develop scalable, high-performance data architecture and data models.
b. Collaborate with data scientists, architects, and business stakeholders to understand data requirements and design optimal data solutions.
c. Evaluate and select appropriate technologies, tools, and frameworks for data engineering projects.
d. Define and enforce data engineering best practices, standards, and guidelines.

2. Data Pipeline Development & Maintenance
a. Develop and maintain robust and scalable data pipelines for data ingestion, transformation, and loading, for both real-time and batch use cases.
b. Implement ETL processes to integrate data from various sources into data storage systems.
c. Optimize data pipelines for performance, scalability, and reliability:
   i. Identify and resolve performance bottlenecks in data pipelines and analytical systems.
   ii. Monitor and analyze system performance metrics, identifying areas for improvement and implementing solutions.
   iii. Optimize database performance, including query tuning, indexing, and partitioning strategies.
d. Implement real-time and batch data processing solutions.

3. Data Quality and Governance
a. Implement data quality frameworks and processes to ensure high data integrity and consistency.
b. Design and enforce data management policies and standards.
c. Develop and maintain documentation, data dictionaries, and metadata repositories.
d. Conduct data profiling and analysis to identify data quality issues and implement remediation strategies.

4. ML Model Deployment & Management (a plus)
a. Design, develop, and maintain the infrastructure and processes necessary for deploying and managing machine learning models in production environments.
b. Implement model deployment strategies, including containerization and orchestration using tools like Docker and Kubernetes.
c. Optimize model performance and latency for real-time inference in consumer applications.
d. Collaborate with DevOps teams to implement continuous integration and continuous deployment (CI/CD) processes for model deployment.
e. Monitor and troubleshoot deployed models, proactively identifying and resolving performance or data-related issues.
f. Implement monitoring and logging solutions to track model performance, data drift, and system health.

5. Team Leadership and Mentorship
a. Lead data engineering projects, providing technical guidance and expertise to team members; conduct code reviews and ensure adherence to coding standards and best practices.
b. Mentor and coach junior data engineers, fostering their professional growth and development.
c. Collaborate with cross-functional teams, including data scientists, software engineers, and business analysts, to drive successful project outcomes.
d. Stay abreast of emerging technologies, trends, and best practices in data engineering and share knowledge within the team; participate in the evaluation and selection of data engineering tools and technologies.

Qualifications:
1. 3-5 years' experience, with a Bachelor's degree in Computer Science, Engineering, Technology, or a related field required.
2. Good understanding of streaming technologies like Kafka and Spark Streaming.
3. Experience with enterprise business intelligence platform/data platform sizing, tuning, optimization, and system landscape integration in large-scale enterprise deployments.
4. Proficiency in one programming language, preferably Java, Scala, or Python.
5. Good knowledge of Agile and SDLC/CI-CD practices and tools.
6. Proven experience with Hadoop, MapReduce, Hive, Spark, and Scala programming; in-depth knowledge of performance tuning/optimizing data processing jobs and debugging time-consuming jobs.
7. Proven experience in developing conceptual, logical, and physical data models for Hadoop, relational, EDW (enterprise data warehouse), and OLAP database solutions.
8. Good understanding of distributed systems.
9. Experience working extensively in multi-petabyte DW environments.
10. Experience engineering large-scale systems in a product environment.
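
A hedged sketch of the kind of data-quality gate such pipelines often include; the table, column, and 1% threshold are assumptions, not requirements from the posting:

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions._

// Sketch: fail a pipeline stage when a column's null rate exceeds a threshold.
// Table name, column name, and the 1% cutoff are illustrative assumptions.
object QualityGate {
  def nullRate(df: DataFrame, column: String): Double = {
    val total = df.count()
    if (total == 0) 0.0
    else df.filter(col(column).isNull).count().toDouble / total
  }

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("quality-gate").getOrCreate()
    val customers = spark.table("customers") // hypothetical table

    val rate = nullRate(customers, "customer_id")
    // Abort the job (and let the scheduler alert) if the check fails.
    require(rate <= 0.01, f"null rate for customer_id too high: ${rate * 100}%.2f%%")

    spark.stop()
  }
}
```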

Posted 1 month ago

Apply

5.0 - 8.0 years

15 - 25 Lacs

Noida, Greater Noida, Delhi / NCR

Work from Office

Streaming Data: Technical Skill Requirements

Mandatory skills: hands-on experience with Spark, Scala, and AWS (Lambda, Glue, S3).
Experience: 5+ years.

- Solid hands-on and solution-architecting experience in big data technologies (AWS preferred)
- Hands-on experience with AWS DynamoDB, EKS, Kafka, Kinesis, Glue, and EMR
- Hands-on experience with a programming language such as Scala, with Spark
- Hands-on working experience on a data engineering analytics platform (Hortonworks, Cloudera, MapR, AWS), AWS preferred
- Hands-on working experience with AWS services such as EMR, Kinesis, S3, CloudFormation, Glue, API Gateway, and Lake Formation
- Hands-on working experience with AWS Athena
- Data warehouse exposure to Apache NiFi, Apache Airflow, and Kylo
- Operationalization of ML models on AWS (e.g., deployment, scheduling, model monitoring)
- Hands-on working experience analyzing source-system data and data flows, working with structured and unstructured data
- Very strong SQL query-writing skills
- Strengthen the data engineering team with big data solutions
- Strong technical, analytical, and problem-solving skills

Posted 1 month ago

Apply

3.0 - 6.0 years

12 - 22 Lacs

Noida

Work from Office

About CloudKeeper:
CloudKeeper is a cloud cost optimization partner that combines the power of group buying & commitments management, expert cloud consulting & support, and an enhanced visibility & analytics platform to reduce cloud cost & help businesses maximize the value from AWS, Microsoft Azure, & Google Cloud. A certified AWS Premier Partner, Azure Technology Consulting Partner, Google Cloud Partner, and FinOps Foundation Premier Member, CloudKeeper has helped 400+ global companies save an average of 20% on their cloud bills, modernize their cloud set-up, and maximize value, all while maintaining flexibility and avoiding any long-term commitments or cost. CloudKeeper hived off from TO THE NEW, a digital technology services company with 2500+ employees and an 8-time GPTW winner.

Position Overview:
We are looking for an experienced and driven Data Engineer to join our team. The ideal candidate will have a strong foundation in big data technologies, particularly Spark, and a basic understanding of Scala to design and implement efficient data pipelines. As a Data Engineer at CloudKeeper, you will be responsible for building and maintaining robust data infrastructure, integrating large datasets, and ensuring seamless data flow for analytical and operational purposes.

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines and ETL processes to collect, process, and store data from various sources.
- Work with Apache Spark to process large datasets in a distributed environment, ensuring optimal performance and scalability.
- Develop and optimize Spark jobs and data transformations using Scala for large-scale data processing.
- Collaborate with data analysts and other stakeholders to ensure data pipelines meet business and technical requirements.
- Integrate data from different sources (databases, APIs, cloud storage, etc.) into a unified data platform.
- Ensure data quality, consistency, and accuracy by building robust data validation and cleansing mechanisms.
- Use cloud platforms (AWS, Azure, or GCP) to deploy and manage data processing and storage solutions.
- Automate data workflows and tasks using appropriate tools and frameworks.
- Monitor and troubleshoot data pipeline performance, optimizing for efficiency and cost-effectiveness.
- Implement data security best practices, ensuring data privacy and compliance with industry standards.

Required Qualifications:
- 4-6 years of experience as a Data Engineer or in an equivalent role.
- Strong experience working with Apache Spark with Scala for distributed data processing and big data handling.
- Basic knowledge of Python and its application in Spark for writing efficient data transformations and processing jobs.
- Proficiency in SQL for querying and manipulating large datasets.
- Experience with cloud data platforms, preferably AWS (e.g., S3, EC2, EMR, Redshift) or other cloud-based solutions.
- Strong knowledge of data modeling, ETL processes, and data pipeline orchestration.
- Familiarity with containerization (Docker) and cloud-native tools for deploying data solutions.
- Knowledge of data warehousing concepts and experience with tools like AWS Redshift, Google BigQuery, or Snowflake is a plus.
- Experience with version control systems such as Git.
- Strong problem-solving abilities and a proactive approach to resolving technical challenges.
- Excellent communication skills and the ability to work collaboratively within cross-functional teams.

Posted 1 month ago

Apply

5.0 - 8.0 years

10 - 20 Lacs

Pune

Hybrid

Job Summary:
We are seeking an experienced Big Data Engineer to join our dynamic team. The ideal candidate will have strong expertise in Spark, Scala, Hadoop, and SQL, with a proven track record of building scalable data pipelines and delivering high-performance data solutions.

Mandatory Skills:
- Strong experience in big data technologies
- Apache Spark (Core, SQL, DataFrames, RDDs)
- Scala programming (hands-on expertise)
- Hadoop ecosystem (HDFS, MapReduce, YARN, Hive, HBase)
- SQL (advanced querying, optimization, joins, aggregations)
- Data ingestion & processing: ETL development and real-time data streaming (Kafka desirable)
- Working knowledge of distributed computing concepts

Good to Have:
- Experience with cloud platforms (AWS, Azure, or GCP)
- Familiarity with data warehousing concepts
- Exposure to Python/Java for scripting purposes
- CI/CD practices for data pipeline deployments

Soft Skills:
- Excellent problem-solving and analytical skills
- Strong communication and collaboration abilities
- Ability to work in Agile development environments
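
As one concrete, hedged example of the Spark tuning expertise listed above (table names are invented), broadcasting a small lookup table avoids shuffling the large side of a join:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.broadcast

// Sketch: hint Spark to broadcast a small dimension table so the join
// runs map-side instead of shuffling the large fact table.
// Both table names are hypothetical.
object BroadcastJoinExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("broadcast-join").getOrCreate()

    val facts = spark.table("page_views")    // large fact table (assumed)
    val dims  = spark.table("country_codes") // small lookup table (assumed)

    val enriched = facts.join(broadcast(dims), Seq("country_code"))

    enriched.explain() // the plan should show a BroadcastHashJoin
    spark.stop()
  }
}
```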

Posted 1 month ago

Apply

4.0 - 9.0 years

15 - 30 Lacs

Bengaluru, Mumbai (All Areas)

Hybrid

Purpose:
You join the stream team, an experienced, informal, and enthusiastic scrum team of 5 developers working on stream-processing components to improve our data publication platform. This team is responsible for combining different sources of sports data from all over the world into a single unified product, all in real time. Part of the job is working together with international teams of developers located in Gracenote offices around the world.

Job Requirements:
- Experience with Scala, or other JVM languages with the capability to learn Scala
- Understanding of stream processing (preferably with Kafka Streams and/or Akka Streams)
- Comfort in a DevOps culture, knowing how to get your work into production
- Relevant work experience with both NoSQL (MongoDB) and SQL databases (Postgres, SQL Server)
- Affinity with data and data streams
- Experience working in an Agile environment
- Good communication skills and the ability to share knowledge with the team
- Good knowledge of the English language, both spoken and written

Good to Have:
- An affinity with sports, active or passive
- An understanding of schemas and a liking for data modelling
- Familiarity with working in the scrum framework
- Experience with other programming languages (some other languages we use are Python, TypeScript, and Java)

Qualifications:
- B.E / B.Tech / BCA / MCA in Computer Science, Engineering, or a related subject
- Strong Computer Science fundamentals
- Comfortable with version control systems such as git
- A thirst for learning new tech and keeping up with industry advances
- Excellent communication and knowledge-sharing skills
- Comfortable working with technical and non-technical teams
- Strong debugging skills
- Comfortable providing and receiving code review feedback
- A positive attitude, adaptability, enthusiasm, and a growth mindset
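
For flavor, a hedged sketch using the Kafka Streams Scala DSL (the kafka-streams-scala module); the broker address, topic names, and the trivial transformation are invented, not details from this team:

```scala
import java.util.Properties
import org.apache.kafka.streams.{KafkaStreams, StreamsConfig}
import org.apache.kafka.streams.scala.ImplicitConversions._
import org.apache.kafka.streams.scala.StreamsBuilder
import org.apache.kafka.streams.scala.serialization.Serdes._

// Sketch: a Kafka Streams topology that transforms incoming events and
// forwards them to an output topic. Topics and broker are hypothetical.
object ScoreFeed {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "score-feed")
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092")

    val builder = new StreamsBuilder()
    builder
      .stream[String, String]("raw-scores") // assumed input topic
      .mapValues(_.toUpperCase)             // placeholder transformation
      .to("published-scores")               // assumed output topic

    val streams = new KafkaStreams(builder.build(), props)
    streams.start()
    sys.addShutdownHook(streams.close())
  }
}
```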

Posted 1 month ago

Apply

3.0 - 5.0 years

15 - 20 Lacs

Gurugram

Work from Office

Qualifications for Data Engineer:
- 3+ years of experience building and optimizing big data solutions to fulfill business and technology requirements.
- 4+ years of technical expertise in design and implementation using big data technology: Hadoop, Hive, Spark, Python/Java.
- Strong analytic skills to understand and create solutions for business use cases.
- Ensure best practices to implement data governance principles and data quality checks on each data layer.
- A successful history of manipulating, processing, and extracting value from large disconnected datasets.
- Working knowledge of message queuing, stream processing, and highly scalable big data stores.
- Strong project management and organizational skills.
- Experience supporting and working with cross-functional teams in a dynamic environment.

We are looking for a candidate with 4+ years of experience in a Data Engineer role who holds a graduate degree in B.Tech/B.E. They should also have experience with the following software/tools:
- Big data: Hadoop, MapReduce, Hive, Spark, Kafka, Airflow, etc.
- Relational SQL and NoSQL databases: MySQL, Postgres, MongoDB, HBase, Cassandra, etc.
- Cloud data platforms: AWS, Azure HDInsight, GCP, CDP
- Real-time data processing: Storm, Spark Streaming, etc.
- Object-oriented/object-function scripting languages: Java, Python, Scala, etc.

If interested, kindly fill the Google form given below: amulyavaish@paisabazaar.com

Posted 1 month ago

Apply

4.0 - 9.0 years

0 - 1 Lacs

Pune, Chennai, Bengaluru

Hybrid

(Apply only if you have hands-on Scala programming experience and RESTful API experience.)

Role & Responsibilities:
- Design, develop, and maintain high-performance, scalable, and maintainable Scala applications.
- Develop and maintain RESTful APIs using Scala frameworks.
- Work with relational databases (e.g., MySQL, PostgreSQL) using SQL.
- Participate in all phases of the software development lifecycle, including requirements gathering, design, development, testing, and deployment.
- Troubleshoot and debug complex issues in Scala applications.
- Collaborate effectively with other developers, testers, and product managers.
- Stay up to date with the latest advancements in Scala, functional programming, and related technologies.
- Contribute to the improvement of our development processes and tools.

Preferred Candidate Profile

Required Skills:
- Strong proficiency in the Scala programming language, including functional programming concepts.
- Hands-on experience developing and deploying Scala applications.
- Experience with RESTful API development and design principles.
- Proficiency in SQL and working with relational databases.
- Strong understanding of object-oriented programming (OOP) concepts.
- Excellent problem-solving and debugging skills.
- Strong communication and collaboration skills.

Desired Skills:
- Experience with Scala frameworks like Play, Akka, or Spark.
- Experience with cloud platforms (AWS, Azure, GCP).
- Experience with containerization technologies (Docker, Kubernetes).
- Experience with Agile development methodologies (Scrum, Kanban).
- Contributions to open-source projects.
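
As a hedged sketch of a minimal RESTful endpoint in Scala, here using Akka HTTP, one of the frameworks this posting names as desirable; the route, port, and response body are illustrative assumptions:

```scala
import akka.actor.typed.ActorSystem
import akka.actor.typed.scaladsl.Behaviors
import akka.http.scaladsl.Http
import akka.http.scaladsl.server.Directives._

// Sketch: a single GET endpoint served by Akka HTTP.
// The route path, port, and static JSON payload are invented for the example.
object HealthApi {
  def main(args: Array[String]): Unit = {
    implicit val system: ActorSystem[Nothing] =
      ActorSystem(Behaviors.empty, "health-api")

    val route =
      path("health") {
        get {
          complete("""{"status":"ok"}""") // static JSON body for the sketch
        }
      }

    Http().newServerAt("0.0.0.0", 8080).bind(route)
    println("Listening on http://0.0.0.0:8080/health")
  }
}
```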

Posted 1 month ago

Apply

12.0 - 17.0 years

35 - 40 Lacs

Hyderabad

Work from Office

Overview: Deputy Director - Data Engineering

PepsiCo operates in an environment undergoing immense and rapid change. Big data and digital technologies are driving business transformation that is unlocking new capabilities and business innovations in areas like eCommerce, mobile experiences, and IoT. The key to winning in these areas is being able to leverage enterprise data foundations, built on PepsiCo's global business scale, to enable business insights, advanced analytics, and new product development. PepsiCo's Data Management and Operations team is tasked with developing quality data collection processes, maintaining the integrity of our data foundations, enabling business leaders and data scientists across the company to have rapid access to the data they need for decision-making and innovation, increasing awareness of available data, and democratizing access to it across the company.

As a data engineering lead, you will be the key technical expert overseeing PepsiCo's data product build & operations, and you will drive a strong vision for how data engineering can proactively create a positive impact on the business. You'll be empowered to create and lead a strong team of data engineers who build data pipelines into various source systems, rest data on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. As a member of the data engineering team, you will help lead the development of very large and complex data applications into public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. You will work closely with process owners, product owners, and business users, in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems.

Responsibilities:
- Data engineering lead role for D&Ai data modernization (MDIP).
- The candidate must be flexible to work an alternative schedule: either a traditional work week from Monday to Friday, or Tuesday to Saturday, or Sunday to Thursday, depending on the coverage requirements of the job; the schedule may be changed with the immediate supervisor on a rotational basis depending on product and project requirements.
- Manage a team of data engineers and data analysts by delegating project responsibilities and managing their flow of work, as well as empowering them to realize their full potential.
- Design, structure, and store data into unified data models and link them together to make the data reusable for downstream products.
- Manage and scale data pipelines from internal and external data sources to support new product launches and drive data quality across data products.
- Create reusable accelerators and solutions to migrate data from legacy data warehouse platforms such as Teradata to Azure Databricks and Azure SQL.
- Enable and accelerate standards-based development, prioritizing reuse of code and adopting test-driven development, unit testing, and test automation with end-to-end observability of data.
- Build and own the automation and monitoring frameworks that capture metrics and operational KPIs for data pipeline quality, performance, and cost.
- Collaborate with internal clients (product teams, sector leads, data science teams) and external partners (SI partners/data providers) to drive solutioning and clarify solution requirements.
- Evolve the architectural capabilities and maturity of the data platform by engaging with enterprise architects to build and support the right domain architecture for each application, following well-architected design standards.
- Define and manage SLAs for data products and processes running in production.
- Create documentation for learnings and knowledge transfer to internal associates.

Qualifications:
- 12+ years of overall technology experience, including at least 5+ years of hands-on software development, data engineering, and systems architecture.
- 8+ years of experience with Data Lakehouse, data warehousing, and data analytics tools.
- 6+ years of experience in SQL optimization and performance tuning on MS SQL Server, Azure SQL, or any other popular RDBMS.
- 6+ years of experience in Python/PySpark/Scala programming on big data platforms like Databricks.
- 4+ years of cloud data engineering experience in Azure or AWS; fluent with Azure cloud services (Azure Data Engineering certification is a plus).
- Experience with integration of multi-cloud services with on-premises technologies.
- Experience with data modelling, data warehousing, and building high-volume ETL/ELT pipelines.
- Experience with data profiling and data quality tools like Great Expectations.
- Experience building/operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets.
- Experience with at least one business intelligence tool, such as Power BI or Tableau.
- Experience with running and scaling applications on cloud infrastructure and containerized services like Kubernetes.
- Experience with version control systems like ADO or GitHub, and CI/CD tools for DevOps automation and deployments.
- Experience with Azure Data Factory, Azure Databricks, and Azure Machine Learning tools.
- Experience with statistical/ML techniques is a plus.
- Experience building solutions in the retail or supply chain space is a plus.
- Understanding of metadata management, data lineage, and data glossaries is a plus.
- BA/BS in Computer Science, Math, Physics, or other technical fields.
- Candidates are expected to be in the office at the assigned location at least 3 days a week, with in-office days coordinated with the immediate supervisor.

Skills, Abilities, Knowledge:
- Excellent communication skills, both verbal and written, along with the ability to influence and demonstrate confidence in communications with senior-level management.
- Proven track record of leading and mentoring data teams.
- Strong change manager; comfortable with change, especially that which arises through company growth.
- Ability to understand and translate business requirements into data and technical requirements.
- High degree of organization and ability to manage multiple, competing projects and priorities simultaneously.
- Positive and flexible attitude, adjusting to different needs in an ever-changing environment.
- Strong leadership, organizational, and interpersonal skills; comfortable managing trade-offs.
- Fosters a team culture of accountability, communication, and self-management.
- Proactively drives impact and engagement while bringing others along.
- Consistently attains or exceeds individual and team goals.
- Ability to lead others without direct authority in a matrixed environment.
- Comfortable working in a hybrid environment with teams consisting of contractors as well as FTEs spread across multiple PepsiCo locations.
- Domain knowledge in the CPG industry with a supply chain/GTM background is preferred.

Posted 1 month ago

Apply

4.0 - 9.0 years

20 - 35 Lacs

Pune, Mumbai (All Areas), India

Hybrid

Experience: 4 to 8 years
Location: Pune (relocation accepted)
MUST HAVE: At least 2 years of recent hands-on experience with Akka HTTP and the Akka framework, along with strong Scala programming skills.
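
For orientation, a hedged sketch of a minimal typed Akka actor, the core abstraction behind the Akka framework experience this posting asks for; the message type and behavior are invented for illustration:

```scala
import akka.actor.typed.{ActorSystem, Behavior}
import akka.actor.typed.scaladsl.Behaviors

// Sketch: a minimal typed actor that handles one message type.
// The Greet message and its handling are illustrative assumptions.
object Greeter {
  final case class Greet(name: String)

  def apply(): Behavior[Greet] =
    Behaviors.receiveMessage { msg =>
      println(s"Hello, ${msg.name}!")
      Behaviors.same // keep the same behavior for the next message
    }

  def main(args: Array[String]): Unit = {
    // The ActorSystem itself is the root ActorRef[Greet] here.
    val system = ActorSystem(Greeter(), "greeter")
    system ! Greet("Scala")
    system.terminate()
  }
}
```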

Posted 1 month ago

Apply

5.0 - 8.0 years

7 - 10 Lacs

Pune

Work from Office

Role Purpose:
The purpose of this role is to design, test, and maintain software programs for operating systems or applications to be deployed at a client end, and to ensure they meet 100% quality assurance parameters.

Do:

1. Be instrumental in understanding the requirements and design of the product/software:
- Develop software solutions by studying information needs, systems flow, data usage, and work processes
- Investigate problem areas throughout the software development life cycle
- Facilitate root-cause analysis of system issues and problem statements
- Identify ideas to improve system performance and impact availability
- Analyze client requirements and convert requirements into feasible design
- Collaborate with functional teams or systems analysts who carry out the detailed investigation into software requirements
- Confer with project managers to obtain information on software capabilities

2. Perform coding and ensure optimal software/module development:
- Determine operational feasibility by evaluating analysis, problem definition, requirements, software development, and proposed software
- Develop and automate processes for software validation by setting up and designing test cases/scenarios/usage cases and executing them
- Modify software to fix errors, adapt it to new hardware, improve its performance, or upgrade interfaces
- Analyze information to recommend and plan the installation of new systems or modifications of an existing system
- Ensure that code is error-free, with no bugs or test failures
- Prepare reports on programming project specifications, activities, and status
- Ensure all the codes are raised as per the norms defined for the project/program/account, with clear descriptions and replication patterns
- Compile timely, comprehensive, and accurate documentation and reports as requested
- Coordinate with the team on daily project status and progress, and document it
- Provide feedback on usability and serviceability, trace results to quality risks, and report them to concerned stakeholders

3. Status reporting and customer focus on an ongoing basis with respect to the project and its execution:
- Capture all requirements and clarifications from the client for better quality work
- Take feedback on a regular basis to ensure smooth and on-time delivery
- Participate in continuing education and training to remain current on best practices, learn new programming languages, and better assist other team members
- Consult with engineering staff to evaluate software-hardware interfaces and develop specifications and performance requirements
- Document and demonstrate solutions by developing documentation, flowcharts, layouts, diagrams, charts, code comments, and clear code
- Document all necessary details and reports in a formal way for proper understanding of the software, from client proposal to implementation
- Ensure good quality of interaction with the customer w.r.t. e-mail content, fault report tracking, voice calls, business etiquette, etc.
- Respond to customer requests in a timely manner, with no instances of complaints either internally or externally

Deliver:
No. | Performance Parameter | Measure
1 | Continuous integration, deployment & monitoring of software | 100% error-free onboarding & implementation; throughput %; adherence to the schedule/release plan
2 | Quality & CSAT | On-time delivery; manage software; troubleshoot queries; customer experience; completion of assigned certifications for skill upgradation

Posted 1 month ago

Apply

5.0 - 8.0 years

14 - 24 Lacs

Hyderabad, Pune

Hybrid

We are looking for a highly skilled Scala Developer with solid experience in Apache Spark to join our data engineering team.

Experience: 5 to 8 years
Location: Pune, Hyderabad
Mandatory skills: Scala development, Spark

Key Responsibilities:
- Design, develop, and optimize batch and streaming data pipelines using Scala and Apache Spark.
- Write efficient, reusable, and testable code following functional programming best practices.
- Work with large-scale datasets from a variety of sources (e.g., Kafka, Hive, S3, Parquet).
- Collaborate with data scientists, data analysts, and DevOps to ensure robust and scalable pipelines.
- Tune Spark jobs for performance and resource efficiency.
- Implement data quality checks, logging, and error-handling mechanisms.

Interested candidates, share your CV at himani.girnar@alikethoughts.com with the details below:
- Candidate's name
- Email and alternate email ID
- Contact and alternate contact no.
- Total experience
- Relevant experience
- Current organization
- Notice period
- CCTC
- ECTC
- Current location
- Preferred location
- PAN card no.

Posted 1 month ago

Apply

5.0 - 8.0 years

14 - 24 Lacs

Hyderabad, Pune

Hybrid

We are looking for a highly skilled Big Data Scala Developer with solid experience in Apache Spark to join our data engineering team.

Experience: 5 to 8 years
Location: Pune, Hyderabad
Mandatory skills: Scala development, Spark

Key Responsibilities:
- Design, develop, and optimize batch and streaming data pipelines using Scala and Apache Spark.
- Write efficient, reusable, and testable code following functional programming best practices.
- Work with large-scale datasets from a variety of sources (e.g., Kafka, Hive, S3, Parquet).
- Collaborate with data scientists, data analysts, and DevOps to ensure robust and scalable pipelines.
- Tune Spark jobs for performance and resource efficiency.
- Implement data quality checks, logging, and error-handling mechanisms.

Interested candidates, share your CV at himani.girnar@alikethoughts.com with the details below:
- Candidate's name
- Email and alternate email ID
- Contact and alternate contact no.
- Total experience
- Relevant experience
- Current organization
- Notice period
- CCTC
- ECTC
- Current location
- Preferred location
- PAN card no.

Posted 1 month ago

Apply

2.0 - 6.0 years

8 - 13 Lacs

Pune

Hybrid

Job Title: Scala Developer
Employment Type: Full-time
Experience Level: 2-6 years
Location: Pune - Kalyani Nagar
Shift Timings: General shift

About Cybage:
Cybage Software is a global technology consulting organization headquartered in Pune. With over 7,000 skilled professionals, we deliver dependable and seamless services to clients across the globe. Our presence spans GNR and Hyderabad in India, and internationally the USA, UK, Germany, Ireland, Japan, Canada, Australia, and Singapore. We work with a wide range of industries including Media & Advertising, Travel & Hospitality, Digital Retail, Healthcare & Life Sciences, Supply Chain & Logistics, and Technology.

About the Role:
We are looking for a Scala developer to take ownership of the design and development of high-performance, scalable applications. The ideal candidate should have strong hands-on experience with Scala and Akka, along with a solid understanding of reactive systems and domain-driven design principles. You will lead development efforts and mentor a team of engineers, ensuring the delivery of robust solutions in a collaborative environment.

Key Responsibilities:
- Drive the implementation of functional programming concepts and best practices.
- Design and maintain reactive, event-driven systems with Akka Streams and Akka HTTP.
- Mentor junior team members and provide technical leadership throughout the SDLC.
- Collaborate with DevOps and QA to ensure CI/CD, testing, and deployment standards are met.
- Write clean, maintainable code and ensure best practices are followed.

Required Skills and Qualifications:
- 2-6 years of experience in software development, with at least 2 years in Scala.
- Solid experience with Akka Streams, Akka HTTP, and functional programming.
- Good understanding of domain-driven design and reactive systems.
- Proven ability to lead technical discussions and guide development teams.
- Experience with CI/CD pipelines, code quality tools, and modern development workflows.
- Familiarity with cloud environments (AWS, Azure, or GCP) and containerization (Docker, Kubernetes).

Preferred Qualifications:
- Strong communication and collaboration skills.
- Academic performance: minimum 60% in any two of Secondary, Higher Secondary, and Graduation, and minimum 55% in the third.
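
As a hedged sketch of the reactive, back-pressured style this role describes, a tiny Akka Streams pipeline; the element values and rates are illustrative assumptions:

```scala
import scala.concurrent.duration._
import akka.actor.typed.ActorSystem
import akka.actor.typed.scaladsl.Behaviors
import akka.stream.scaladsl.{Sink, Source}

// Sketch: a small Akka Streams pipeline. The throttle stage demonstrates
// built-in back-pressure; the values and the 10/sec rate are dummies.
object ThrottledPipeline {
  def main(args: Array[String]): Unit = {
    implicit val system: ActorSystem[Nothing] =
      ActorSystem(Behaviors.empty, "throttled-pipeline")

    Source(1 to 100)                     // dummy upstream events
      .throttle(10, 1.second)            // back-pressure: max 10 elements/sec
      .map(n => s"event-$n")             // transformation stage
      .runWith(Sink.foreach(println))    // downstream consumer
      .onComplete(_ => system.terminate())(system.executionContext)
  }
}
```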

Posted 1 month ago

Apply

0.0 - 4.0 years

3 - 7 Lacs

Pune

Work from Office

Wipro Limited (NYSE: WIT, BSE: 507685, NSE: WIPRO) is a leading technology services and consulting company focused on building innovative solutions that address clients' most complex digital transformation needs. Leveraging our holistic portfolio of capabilities in consulting, design, engineering, and operations, we help clients realize their boldest ambitions and build future-ready, sustainable businesses. With over 230,000 employees and business partners across 65 countries, we deliver on the promise of helping our customers, colleagues, and communities thrive in an ever-changing world. For additional information, visit us at www.wipro.com.

About The Role

Role Purpose:
The purpose of the role is to support process delivery by ensuring the daily performance of the Production Specialists, resolving technical escalations, and developing technical capability within the Production Specialists.

Do:

Oversee and support the process by reviewing daily transactions on performance parameters:
- Review the performance dashboard and the scores for the team
- Support the team in improving performance parameters by providing technical support and process guidance
- Record, track, and document all queries received, problem-solving steps taken, and all successful and unsuccessful resolutions
- Ensure standard processes and procedures are followed to resolve all client queries
- Resolve client queries as per the SLAs defined in the contract
- Develop an understanding of the process/product for the team members to facilitate better client interaction and troubleshooting
- Document and analyze call logs to spot the most frequent trends and prevent future problems
- Identify red flags and escalate serious client issues to the Team Leader in cases of untimely resolution
- Ensure all product information and disclosures are given to clients before and after the call/email requests
- Avoid legal challenges by monitoring compliance with service agreements

Handle technical escalations through effective diagnosis and troubleshooting of client queries:
- Manage and resolve technical roadblocks/escalations as per SLA and quality requirements
- If unable to resolve an issue, escalate it to TA & SES in a timely manner
- Provide product support and resolution to clients by performing question diagnosis while guiding users through step-by-step solutions
- Troubleshoot all client queries in a user-friendly, courteous, and professional manner
- Offer alternative solutions to clients (where appropriate) with the objective of retaining customers' and clients' business
- Organize ideas and effectively communicate oral messages appropriate to listeners and situations
- Follow up and make scheduled callbacks to customers to record feedback and ensure compliance with contract SLAs

Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client:
- Mentor and guide Production Specialists on improving technical knowledge
- Collate trainings to be conducted as triage to bridge the skill gaps identified through interviews with the Production Specialists
- Develop and conduct trainings (triages) within products for Production Specialists as per target
- Inform the client about the triages being conducted
- Undertake product trainings to stay current with product features, changes, and updates
- Enroll in product-specific and any other trainings per client requirements/recommendations
- Identify and document the most common problems and recommend appropriate resolutions to the team
- Update job knowledge by participating in self-learning opportunities and maintaining personal networks

Deliver:
No. | Performance Parameter | Measure
1 | Process | No. of cases resolved per day; compliance to process and quality standards; meeting process-level SLAs; Pulse score; customer feedback; NSAT/ESAT
2 | Team Management | Productivity, efficiency, absenteeism
3 | Capability Development | Triages completed; Technical Test performance

Mandatory Skills: Scala programming.
Experience: 5-8 years.

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention: of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA; as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.

Posted 1 month ago

Apply

10.0 - 17.0 years

15 - 20 Lacs

Pune

Hybrid

Job Title: Scala Technical/Application Architect
Employment Type: Full-time
Experience Level: 10+ years
Shift Timings: General shift
Location: Pune - Kalyani Nagar

About Cybage:
Cybage Software is a technology consulting organization headquartered in Pune; you will get an opportunity to be part of a highly skilled talent pool of more than 7,000 employees. We have operations hubs in GNR and Hyderabad as well, and we have also marked our presence in the USA, UK, Japan, Germany, Ireland, Canada, Australia, and Singapore. We provide seamless services and dependable deliveries to our clients from diverse industry verticals such as Media and Advertising, Travel and Hospitality, Digital Retail, Healthcare and Life Sciences, Supply Chain and Logistics, and Technology.

About the Role:
We are looking for an experienced Scala Architect to design and lead the development of scalable, high-performance systems. You'll bring deep expertise in Scala (2 & 3), Akka Streams, and domain-driven design to architect systems capable of handling high transaction volumes.

Required Skills and Qualifications:
- 10+ years of overall software development experience, with 4+ years in Scala development, including both Scala 2.x and 3.x.
- Proven experience architecting and delivering highly scalable, transactional platforms.
- Expertise in Akka Streams, Akka HTTP, and related reactive programming libraries.
- Strong grasp of domain-driven design (DDD) and functional programming principles.
- Deep understanding of streaming architectures, back-pressure handling, and event-driven systems.
- Demonstrated experience leading technical design efforts for mission-critical applications.
- Proficiency in integrating with modern CI/CD, testing, and deployment pipelines.
- Familiarity with cloud-native architectures (e.g., AWS, GCP, or Azure) and containerized environments (Docker, Kubernetes).

Responsibilities:
- Architect scalable, distributed systems using Scala and Akka.
- Design domain models aligned with business needs.
- Ensure performance for high-volume, transactional workloads.
- Lead and mentor teams in Scala, FP, and Akka Streams.
- Own technical design, documentation, and delivery.
- Collaborate across teams for end-to-end solution success.

Good to Have:
- 60% and above in any two of Secondary, Higher Secondary (or equivalent), and Graduation level (aggregate), and 55% and above in the third.
- Strong communication and interpersonal skills.

Posted 2 months ago

Apply

5.0 - 10.0 years

15 - 20 Lacs

Hyderabad, Chennai, Mumbai (All Areas)

Hybrid

Scala Developer:
- Designing, creating, and maintaining Scala-based applications
- Participating in all architectural development tasks related to the application
- Writing code in accordance with the app requirements
- Performing software analysis
- Working as a member of a software development team to ensure the program meets standards
- Testing and debugging applications
- Making suggestions for enhancements to application procedures and infrastructure

Posted 2 months ago

Apply

6.0 - 9.0 years

9 - 18 Lacs

Pune, Chennai

Work from Office

Job Title: Data Engineer (Spark/Scala/Cloudera)
Location: Chennai/Pune
Job Type: Full time
Experience Level: 6-9 years

Job Summary:
We are seeking a skilled and motivated Data Engineer to join our data engineering team. The ideal candidate will have deep experience with Apache Spark, Scala, and the Cloudera Hadoop ecosystem. You will be responsible for building scalable data pipelines, optimizing data processing workflows, and ensuring the reliability and performance of our big data platform.

Key Responsibilities:
- Design, build, and maintain scalable and efficient ETL/ELT pipelines using Spark and Scala.
- Work with large-scale datasets on the Cloudera Data Platform (CDP).
- Collaborate with data scientists, analysts, and other stakeholders to ensure data availability and quality.
- Optimize Spark jobs for performance and resource utilization.
- Implement and maintain data governance, security, and compliance standards.
- Monitor and troubleshoot data pipeline failures and ensure high data reliability.
- Participate in code reviews, testing, and deployment activities.
- Document architecture, processes, and best practices.

Required Skills and Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 6+ years of experience in big data engineering roles.
- 2+ years of hands-on experience in Scala.
- Proficiency with Apache Spark (Core/DataFrame/SQL/RDD APIs).
- Strong programming skills in Scala.
- Hands-on experience with the Cloudera Hadoop ecosystem (e.g., HDFS, Hive, Impala, HBase, Oozie).
- Familiarity with distributed computing and data partitioning concepts.
- Strong understanding of data structures, algorithms, and software engineering principles.
- Experience with CI/CD pipelines and version control systems (e.g., Git).
- Familiarity with cloud platforms (AWS, Azure, or GCP) is a plus.

Preferred Qualifications:
- Experience with Cloudera Manager and Cloudera Navigator.
- Exposure to Kafka, NiFi, or Airflow.
- Familiarity with data lake, data warehouse, and lakehouse architectures.

Posted 2 months ago

Apply