6093 Scala Jobs - Page 31

JobPe aggregates listings for easy access, but applications are made directly on the original job portal.

8.0 - 13.0 years

35 - 55 Lacs

Chennai, Bengaluru

Work from Office

Requirements:
- 8+ years of experience in software development, including 3+ years in Scala
- Familiar with the Akka framework
- Good grasp of functional programming
- Experience with RESTful microservices
- Strong in TDD using ScalaTest / Mockito (see the spec sketch below)
- Hands-on with SQL Server / Oracle / MySQL

Required candidate profile:
- Comfortable with Docker & Kubernetes
- Knowledge of AWS / Azure
- Experience with sbt / Maven
- Skilled in Git + CI/CD workflows
- Proven ability in cloud-based systems
- Build and maintain REST APIs & service
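Where the posting asks for TDD with ScalaTest, a minimal spec might look like the following sketch (assuming ScalaTest 3.x; the Pricing object and applyGst function are invented for illustration):

```scala
import org.scalatest.funsuite.AnyFunSuite

// Code under test: a tiny pure function (names are illustrative).
object Pricing {
  def applyGst(amount: BigDecimal): BigDecimal = amount * BigDecimal("1.18") // 18% GST
}

class PricingSpec extends AnyFunSuite {
  test("applyGst adds 18% to the base amount") {
    assert(Pricing.applyGst(BigDecimal(100)) == BigDecimal("118.00"))
  }
}
```

In a TDD workflow the spec is written first and drives the implementation; Mockito would come in when the code under test has collaborators to stub out.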

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Scala Developer
Experience: 7+ yrs | Location: Mumbai | Duration: 6 months, extendable | Budget: up to 9 LPA

Job Summary: We are looking for a skilled Scala Developer with at least 5 years of professional experience building scalable, high-performance backend applications. The ideal candidate has a strong grasp of functional programming, data processing frameworks, and cloud-based environments.

Key Responsibilities:
- Design, develop, test, and deploy backend services and APIs using Scala.
- Collaborate with cross-functional teams including product managers, frontend developers, and QA engineers.
- Optimize and maintain existing codebases, ensuring performance, scalability, and reliability.
- Write clean, well-documented, and testable code following best practices.
- Work with tools and technologies such as Akka, Play Framework, and Kafka.
- Participate in code reviews, knowledge sharing, and mentoring of junior developers.
- Integrate with SQL/NoSQL databases and third-party APIs.
- Build and maintain data pipelines using Spark or similar tools (if required).

Required Skills:
- Strong hands-on experience with Scala and the functional programming paradigm.
- Experience with Play Framework, Akka, or Lagom.
- Proficiency with RESTful APIs, microservices architecture, and API integration.
- Good understanding of concurrency, asynchronous programming, and stream processing (see the Futures sketch below).
- Hands-on experience with SQL/NoSQL databases such as PostgreSQL, MySQL, Cassandra, or MongoDB.
- Familiarity with build tools like sbt or Maven.
- Comfortable using Git, Docker, and CI/CD pipelines.
- Experience working in Agile/Scrum environments.

Preferred/Good to Have:
- Experience with Apache Spark, Kafka, or similar big data technologies.
- Exposure to AWS/GCP/Azure.
- Understanding of DevOps principles.
- Knowledge of testing frameworks like ScalaTest, Specs2, or Mockito.
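For the concurrency and asynchronous-programming requirement, here is a minimal sketch of composing asynchronous steps with standard-library Futures (the service functions are illustrative placeholders):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

// Two asynchronous steps composed with a for-comprehension (flatMap under the hood).
def fetchUser(id: Long): Future[String] = Future { s"user-$id" }
def fetchOrders(user: String): Future[List[String]] =
  Future { List(s"$user-order-1", s"$user-order-2") }

val result: Future[List[String]] = for {
  user   <- fetchUser(42L)
  orders <- fetchOrders(user)
} yield orders

// Block only at the edge of the program (e.g., in a demo or test).
println(Await.result(result, 2.seconds))
```

The same composition style carries over to Akka and Play, where controllers and actors return Futures rather than blocking threads.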

Posted 2 weeks ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title: Enterprise Technology Engineer
Location: Pune

Let me tell you about the role
A data engineer designs, constructs, installs, tests, and maintains highly scalable data management systems, and builds the infrastructure that allows for the generation, collection, and analysis of large datasets. Key responsibilities include developing, constructing, testing, and maintaining architectures such as databases and large-scale processing systems; ensuring that architectures support data analytics; and preparing data for prescriptive and predictive modeling. Data engineers also develop dataset processes for data modeling, mining, and production; integrate new data management technologies and software engineering tools into existing structures; and collaborate with data scientists and analysts to ensure data accuracy and accessibility. They play a critical role in enabling data-driven decision-making by ensuring that data pipelines are robust, efficient, and scalable.

What you will deliver
- Work as part of a cross-disciplinary team, closely with other data engineers, software engineers, data scientists, data managers, and business partners.
- Implement and maintain reliable and scalable data infrastructure to move, process, and serve data.
- Write, deploy, and maintain software to build, integrate, manage, maintain, and quality-assure data at bp.
- Adhere to and advocate for software engineering best practices (e.g. technical design, technical design review, unit testing, monitoring & alerting, checking in code, code review, documentation, code reuse).
- Adhere to and advocate for data engineering best practices (e.g. data modeling, pipeline idempotency, operational observability); an idempotent-write sketch follows below.
- Deploy secure and well-tested software and data assets that meet privacy and compliance requirements; develop, maintain, and improve the CI/CD pipeline.
- Own service reliability and follow site-reliability engineering best practices: on-call rotations for maintained services, defining and maintaining SLAs.
- Help design, build, deploy, and maintain infrastructure as code; containerize server deployments.
- Actively contribute to improving developer velocity.

What you will need to be successful (experience and qualifications)
Essential
- Hands-on experience designing, planning, building, productionizing, maintaining, and documenting reliable and scalable data infrastructure and data products in complex environments
- Development experience in one or more object-oriented programming languages (e.g. Python, Scala, Java, C#)
- Experience with SQL and NoSQL database fundamentals, query structures, and design best practices, including scalability, readability, and reliability
- Experience implementing large-scale distributed systems in collaboration with more senior team members
- Knowledge and hands-on experience in technologies across all data lifecycle stages
- Strong verbal and written communication skills
- Continuous learning and improvement mindset
- BS degree in computer science or a related field, or equivalent knowledge and experience

Desired
- Communication and articulation
- Learnability and coachability
- Understanding the business case
- Cloud architecture and services

About bp
Our purpose is to deliver energy to the world, today and tomorrow. For over 100 years, bp has focused on discovering, developing, and producing oil and gas in the nations where we operate. We are one of the few companies globally that can provide governments and customers with an integrated energy offering. Delivering our strategy sustainably is fundamental to achieving our ambition to be a net zero company by 2050 or sooner.

We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform crucial job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
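On pipeline idempotency, one common pattern is writing to a deterministic, partition-scoped path in overwrite mode, so re-running a failed job replaces data rather than duplicating it. A hedged Spark sketch (the bucket, paths, and event_id column are assumptions, not bp's actual layout):

```scala
import org.apache.spark.sql.{SaveMode, SparkSession}

// Deterministic output path per run date + overwrite mode: re-runs of the same
// partition replace its contents instead of appending duplicates.
val spark = SparkSession.builder().appName("idempotent-daily-load").getOrCreate()

val ds = "2025-07-23" // hypothetical run date
spark.read
  .parquet(s"s3://example-bucket/raw/events/ds=$ds/")     // hypothetical input path
  .dropDuplicates("event_id")                             // assumes an event_id column
  .write
  .mode(SaveMode.Overwrite)
  .parquet(s"s3://example-bucket/curated/events/ds=$ds/") // hypothetical output path
```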

Posted 2 weeks ago

Apply

4.0 years

15 - 25 Lacs

Bengaluru, Karnataka, India

On-site

Key Responsibilities
- Partner with product managers, engineers, and business stakeholders to define KPIs and success metrics for Creator Success
- Create comprehensive dashboards and self-service analytics tools using QuickSight, Tableau, or similar BI platforms
- Design, build, and maintain robust ETL/ELT pipelines to process large volumes of streaming and batch data from the Creator Success platform (a minimal aggregation job is sketched below)
- Develop and optimize data warehouses, data lakes, and real-time analytics systems using AWS services (Redshift, S3, Kinesis, EMR, Glue)
- Build automated data validation and alerting mechanisms for critical business metrics
- Generate actionable insights from complex datasets to drive product roadmap and business strategy

Required Qualifications
- Bachelor's degree in Computer Science, Engineering, Mathematics, Statistics, or a related quantitative field
- 4+ years of experience in business intelligence/analytics roles with proficiency in SQL, Python, and/or Scala
- High proficiency in SQL and Python
- Strong experience with AWS cloud services (Redshift, S3, EMR, Glue, Lambda, Kinesis)
- Expertise in building and optimizing ETL pipelines and data warehousing solutions
- Proficiency with big data technologies (Spark, Hadoop) and distributed computing frameworks
- Experience with business intelligence tools (QuickSight, Tableau, Looker) and data visualization best practices

Skills: Spark, Python, AWS, Looker, Scala, Hadoop, SQL, Power BI, AWS S3, Business Intelligence, AWS EMR, AWS Glue, Tableau, AWS Kinesis, QuickSight, AWS Redshift
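A minimal sketch of the kind of batch ETL aggregation described above, using Spark in Scala (the bucket paths and event schema are assumptions, not the actual Creator Success data model):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

val spark = SparkSession.builder().appName("creator-kpis").getOrCreate()

// Hypothetical event schema: creator_id, viewer_id, watch_seconds, ds
spark.read.parquet("s3://example-bucket/creator-events/")
  .groupBy("creator_id", "ds")
  .agg(
    countDistinct("viewer_id").as("unique_viewers"),
    sum("watch_seconds").as("total_watch_seconds")
  )
  .write.mode("overwrite")
  .parquet("s3://example-bucket/kpis/creator_daily/")
```

The resulting table is the sort of curated metric source a QuickSight or Tableau dashboard would sit on top of.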

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Gujarat, India

On-site

Job Summary: We are looking for a highly skilled and self-motivated Technical Lead - Data, Cloud & AI - to design, develop, and optimize data pipelines and infrastructure that power AI/ML solutions in the cloud. The ideal candidate will have deep experience in data engineering, strong exposure to cloud platforms, and familiarity with machine learning workflows. FinOps and related experience is preferred. This role will play a critical part in enabling data-driven innovation and scaling intelligent applications across the organization.

Required Skills & Experience:
- 6+ years of experience in data engineering with a strong understanding of data architecture
- Hands-on experience with cloud platforms: AWS (Glue, S3, Redshift) or GCP (BigQuery, Dataflow)
- Strong programming skills in Python, Java, and SQL; knowledge of Spark or Scala is a plus
- Experience with ETL/ELT tools and orchestration frameworks such as Apache Airflow, dbt, or Prefect
- Familiarity with machine learning workflows, the model lifecycle, and MLOps practices
- Proficient in working with both batch and streaming data (Kafka, Kinesis, Pub/Sub); see the sketch below
- Experience with containerization and deployment (Docker; Kubernetes is a plus)
- Good understanding of data security and access control in cloud environments
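For the streaming requirement, a hedged sketch of ingesting a Kafka topic with Spark Structured Streaming (the broker address, topic, and S3 paths are placeholders):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("kafka-ingest").getOrCreate()

// Read a Kafka topic as an unbounded table; broker and topic are hypothetical.
val raw = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "broker-1:9092")
  .option("subscribe", "clickstream")
  .load()

// Kafka delivers bytes; cast key and value to strings before downstream parsing.
val events = raw.selectExpr("CAST(key AS STRING) AS key", "CAST(value AS STRING) AS value")

val query = events.writeStream
  .format("parquet")
  .option("path", "s3://example-bucket/landing/clickstream/")
  .option("checkpointLocation", "s3://example-bucket/checkpoints/clickstream/") // enables recovery on restart
  .start()

query.awaitTermination()
```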

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Company Description
At Innovatics, we help conquer tough business challenges with advanced analytics and AI. Specializing in transforming complexity into clarity and business uncertainties into data-driven opportunities, our dedicated team of data analytics and AI consultants is committed to achieving tangible results. Our services, provided in the USA, Australia, Canada, and India, include end-to-end data analytics, data strategy, data engineering, and AI consulting. Passionate about data, we pride ourselves on turning ideas into actionable insights.

Role Description
This is a full-time on-site role located in Ahmedabad for a Sr. Data Engineer at Innovatics. The Sr. Data Engineer will be responsible for the design, development, and optimization of data architectures and pipelines. Day-to-day tasks include data modeling, building and managing ETL processes, data warehousing, and performing data analytics to support business decisions. The role involves collaborating with data scientists, analysts, and other stakeholders to ensure efficient and effective data solutions.

Job Description:
- 5+ years of experience in a Data Engineer role
- Experience with object-oriented/functional scripting languages: Python, Scala, Golang, Java, etc.
- Experience with big data tools such as Spark, Hadoop, Kafka, Airflow, and Hive
- Experience with streaming data: Spark, Kinesis, Kafka, Pub/Sub, or Event Hub
- Experience with GCP, Azure Data Factory, or AWS
- Strong SQL scripting skills
- Experience with ETL tools
- Knowledge of the Snowflake data warehouse
- Knowledge of orchestration frameworks: Airflow or Luigi
- Good to have: knowledge of data quality management frameworks (a minimal null-count check is sketched below)
- Good to have: knowledge of master data management
- Self-learning ability is a must; familiarity with upcoming new technologies is a strong plus
- Bachelor's degree in big data analytics, computer engineering, or a related field

Candidate Attributes:
- Experience in data engineering, including design and development of data architectures
- Proficiency in data modeling to support the data needs of various projects
- Skills in extract-transform-load (ETL) processes to ensure smooth data integration
- Knowledge of data warehousing to manage and store large datasets efficiently
- Strong data analytics skills to derive actionable insights from data
- Excellent problem-solving and analytical skills
- Ability to work independently and collaboratively in a team environment
- Bachelor's or master's degree in Computer Science, Engineering, or a related field
- Experience in the AI and advanced analytics field is a plus
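For the data-quality bullet, a small Spark helper that counts NULLs per column is one building block such frameworks formalize (a sketch, not any specific framework's API):

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions._

// Counts NULLs per column in a single pass; a simple data-quality gate primitive.
def nullReport(df: DataFrame): DataFrame =
  df.select(df.columns.map(c => sum(when(col(c).isNull, 1).otherwise(0)).as(c)): _*)

val spark = SparkSession.builder().appName("dq-check").getOrCreate()
import spark.implicits._

val orders = Seq((1, Some("shipped")), (2, None), (3, Some("open"))).toDF("id", "status")
nullReport(orders).show() // id: 0, status: 1
```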

Posted 2 weeks ago

Apply

0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Inviting applications for the role of Senior Principal Consultant, Data Scientist for one of our clients (an MNC).

In this role, we are looking for candidates with relevant years of experience in text mining / natural language processing (NLP) tools, data science, big data, and algorithms. Full-cycle experience is desirable in at least one large-scale text mining/NLP project, spanning creation of the business use case, text analytics assessment/roadmap, technology and analytics solutioning, implementation, and change management, along with considerable experience in Hadoop, including development in the map-reduce framework. The Text Mining Scientist (TMS) is expected to play a pivotal bridging role between enterprise database teams and business/functional resources. At a broad level, the TMS will leverage solutioning expertise to translate the customer's business need into a techno-analytic problem and work with database teams to bring large-scale text analytics solutions to fruition. The right candidate should have prior experience developing text mining and NLP solutions using open-source tools.

Responsibilities
- Develop transformative AI/ML solutions to address our clients' business requirements and challenges
- Project delivery: successful delivery of projects involving data pre-processing, model training and evaluation, and parameter tuning
- Manage stakeholder/customer expectations; handle project blueprinting and documentation; create the project plan
- Understand and research cutting-edge industrial and academic developments in AI/ML with NLP/NLU applications in diverse industries such as CPG and finance
- Conceptualize, design, and build solution algorithms that demonstrate the minimum required functionality within tight timelines
- Interact with clients to collect, synthesize, and propose requirements, and create an effective analytics/text mining roadmap
- Work with digital development teams to integrate and transform these algorithms into production-quality applications
- Do applied research on a wide array of text analytics and machine learning projects; file patents and publish papers
- Collaborate with service line teams to design, implement, and manage Gen-AI solutions
- Apply familiarity with generative models, prompt engineering, and fine-tuning techniques to develop innovative AI solutions
- Design, develop, and implement solutions tailored to client needs; translate business requirements into technical solutions using GenAI

Qualifications we seek in you!
Minimum Qualifications / Skills
- MS in Computer Science, Information Systems, or Computer Engineering, with relevant experience in text mining / NLP tools, data science, big data, and algorithms
- Familiarity with generative AI technologies; ability to design and implement GenAI solutions
- Open-source text mining paradigms such as NLTK, OpenNLP, OpenCalais, StanfordNLP, GATE, UIMA, and Lucene, and cloud-based NLU tools such as DialogFlow and MS LUIS
- Exposure to statistical toolkits such as R, Weka, S-Plus, Matlab, and SAS Text Miner
- Strong core Java experience in large-scale product development and functional knowledge of RDBMSs
- Hands-on programming in the Hadoop ecosystem, and concepts in distributed computing
- Very good Python/R programming skills; Java programming skills a plus
- Certifications in AI/ML or GenAI tools and methodology
- Solutioning and consulting experience in verticals such as BFSI and CPG, with experience delivering text analytics on large structured and unstructured data
- A solid foundation in AI methodologies such as ML, DL, NLP, neural networks, information retrieval and extraction, NLG, and NLU
- Exposure to concepts in NLP and statistics, especially in applications such as sentiment analysis, contextual NLP, dependency parsing, chunking, and summarization (a tokenization sketch follows below)
- Demonstrated ability to conduct look-ahead client research focused on supplementing and strengthening the client's analytics agenda with newer tools and techniques

Preferred Qualifications / Skills
- Expert-level understanding of NLP, NLU, and machine learning/deep learning methods
- OpenNLP, OpenCalais, StanfordNLP, GATE, UIMA, Lucene, NoSQL
- UI development paradigms that enable text mining insight visualization, e.g., Adobe Flex Builder, HTML5, CSS3
- Linux, Windows, and GPU experience
- Spark and Scala for distributed computing
- Deep learning frameworks such as TensorFlow, Keras, Torch, and Theano
- Social network modeling paradigms, tools, and techniques
- Text analytics using NLP tools and methods such as Support Vector Machines, plus social network analysis
- Previous experience with text analytics implementations using open-source packages and/or SAS Text Miner
- Ability to prioritize, a consultative mindset, and time management skills
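As a small illustration of the text pre-processing step common to these NLP pipelines, here is a hedged sketch using Spark MLlib's Tokenizer and StopWordsRemover (the sample document is invented):

```scala
import org.apache.spark.ml.feature.{StopWordsRemover, Tokenizer}
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("nlp-prep").getOrCreate()
import spark.implicits._

val docs = Seq(
  (1L, "Text mining turns unstructured documents into analyzable features")
).toDF("id", "text")

// Tokenizer lower-cases and splits on whitespace; StopWordsRemover drops common words.
val tokens   = new Tokenizer().setInputCol("text").setOutputCol("words").transform(docs)
val filtered = new StopWordsRemover().setInputCol("words").setOutputCol("filtered").transform(tokens)

filtered.select("filtered").show(truncate = false)
```

Downstream stages (TF-IDF, embeddings, classifiers) consume the filtered token column in the same pipeline style.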

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Thiruvananthapuram Taluk, India

On-site

Position: Data Engineer
Experience: 3+ years
Location: Trivandrum, Hybrid
Salary: Up to 8 LPA

Job Summary: We are seeking a highly motivated and skilled Data Engineer with 3+ years of experience to join our growing data team. In this role, you will be instrumental in designing, building, and maintaining robust, scalable, and efficient data pipelines and infrastructure. You will work closely with data scientists, analysts, and other engineering teams to ensure data availability, quality, and accessibility for various analytical and machine learning initiatives.

Key Responsibilities:
● Design and Development:
○ Design, develop, and optimize scalable ETL/ELT pipelines to ingest, transform, and load data from diverse sources into data warehouses/lakes.
○ Implement data models and schemas that support analytical and reporting requirements.
○ Build and maintain robust data APIs for data consumption by various applications and services.
● Data Infrastructure:
○ Contribute to the architecture and evolution of our data platform, leveraging cloud services (AWS, Azure, GCP) or on-premise solutions.
○ Ensure data security, privacy, and compliance with relevant regulations.
○ Monitor data pipelines for performance, reliability, and data quality, implementing alerting and anomaly detection.
● Collaboration & Optimization:
○ Collaborate with data scientists, business analysts, and product managers to understand data requirements and translate them into technical solutions.
○ Optimize existing data processes for efficiency, cost-effectiveness, and performance.
○ Participate in code reviews, contribute to documentation, and uphold best practices in data engineering.
● Troubleshooting & Support:
○ Diagnose and resolve data-related issues, ensuring minimal disruption to data consumers.
○ Provide support and expertise to teams consuming data from the data platform.

Required Qualifications:
● Bachelor's degree in Computer Science, Engineering, or a related quantitative field.
● 3+ years of hands-on experience as a Data Engineer or in a similar role.
● Strong proficiency in at least one programming language commonly used for data engineering (e.g., Python, Java, Scala).
● Extensive experience with SQL and relational databases (e.g., PostgreSQL, MySQL, SQL Server).
● Proven experience with ETL/ELT tools and concepts.
● Experience with data warehousing concepts and technologies (e.g., Snowflake, Redshift, BigQuery, Azure Synapse, Databricks).
● Familiarity with cloud platforms (AWS, Azure, or GCP) and their data services (e.g., S3, EC2, Lambda, Glue, Data Factory, Blob Storage, BigQuery, Dataflow).
● Understanding of data modeling techniques (e.g., dimensional modeling, Kimball, Inmon).
● Experience with version control systems (e.g., Git).
● Excellent problem-solving, analytical, and communication skills.

Preferred Qualifications:
● Master's degree in a relevant field.
● Experience with Apache Spark (PySpark, Scala Spark) or other big data processing frameworks.
● Familiarity with NoSQL databases (e.g., MongoDB, Cassandra).
● Experience with data streaming technologies (e.g., Kafka, Kinesis); see the producer sketch below.
● Knowledge of containerization technologies (e.g., Docker, Kubernetes).
● Experience with workflow orchestration tools (e.g., Apache Airflow, Azure Data Factory, AWS Step Functions).
● Understanding of DevOps principles as applied to data pipelines.
● Prior experience in Telecom is a plus.
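For the streaming item in the preferred qualifications, here is a minimal Kafka producer in Scala using the standard Kafka client (the broker, topic, and payload are placeholders):

```scala
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

// Minimal producer publishing a JSON event; names are hypothetical.
val props = new Properties()
props.put("bootstrap.servers", "localhost:9092")
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

val producer = new KafkaProducer[String, String](props)
try {
  val record = new ProducerRecord[String, String](
    "telecom-events", "subscriber-42", """{"event":"data_session_start"}""")
  producer.send(record).get() // block for the ack in this demo; use async callbacks in services
} finally producer.close()
```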

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

On-site

About Client: Our client is an Indian multinational technology company based in Bengaluru. It provides information technology, consulting, and business process services, and is one of India's Big Six IT services companies. Services include cloud computing, computer security, digital transformation, artificial intelligence, robotics, data analytics, and other technologies.

Job Title: Azure SQL Database Engineer
Location: Pune, Kharadi
Experience: 6+ years
Job Type: Contract
Notice Period: Immediate joiners

Key Skills
- Proficient SQL expertise; proven experience developing solutions in at least one programming language
- Data testing: ETL testing, data validation, transformation checks, SQL-based testing (a row-count reconciliation sketch follows below)
- Hands-on experience with Java/Python
Good to have
- Automation experience on the backend: Python/Java/Scala/PySpark
- Databricks usage, Azure knowledge
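A common SQL-based data test of the kind listed above is reconciling row counts between source and target after a load. A hedged JDBC sketch (URLs, credentials, and table names are placeholders):

```scala
import java.sql.DriverManager

// Row-count reconciliation between a source and target table after an ETL load.
def tableCount(url: String, user: String, password: String, table: String): Long = {
  val conn = DriverManager.getConnection(url, user, password)
  try {
    val rs = conn.createStatement().executeQuery(s"SELECT COUNT(*) FROM $table")
    rs.next()
    rs.getLong(1)
  } finally conn.close()
}

val sourceRows = tableCount("jdbc:sqlserver://src-host;databaseName=stage", "etl_user", "***", "dbo.orders")
val targetRows = tableCount("jdbc:postgresql://dw-host/warehouse", "etl_user", "***", "fact_orders")
assert(sourceRows == targetRows, s"Row-count mismatch: source=$sourceRows target=$targetRows")
```

Real suites extend the same idea to column checksums and transformation checks, but the reconciliation pattern is identical.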

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

On-site

About Client: Our client is a multinational IT services and consulting company headquartered in the USA, with revenues of 19.7 billion USD, a global workforce of 3,50,000, and a NASDAQ listing. It is one of the leading IT services firms globally, known for its work in digital transformation, technology consulting, and business process outsourcing. Its business focuses on digital engineering, cloud services, AI and data analytics, enterprise applications (SAP, Oracle, Salesforce), IT infrastructure, and business process outsourcing. It has major delivery centers in India, including cities like Chennai, Pune, Hyderabad, and Bengaluru, and offices in over 35 countries; India is a major operational hub.

Job Title: Non-Cloud QA Test Engineer
Job Location: Bangalore
Experience: 6+ years
Education Qualification: Any graduation
Work Mode: Hybrid
Employment Type: Contract
Notice Period: Immediate

JD:
Core skills (technical): proficient SQL expertise; proven experience developing solutions in at least one programming language. Data testing: ETL testing, data validation, transformation checks, SQL-based testing. Hands-on experience with Java/Python.
Good to have: automation experience on the backend; scripting in Python/Java/Scala/PySpark; Databricks usage; Azure knowledge.

Posted 2 weeks ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

About Client: Our client is a multinational IT services and consulting company headquartered in the USA, with revenues of 19.7 billion USD, a global workforce of 3,50,000, and a NASDAQ listing. It is one of the leading IT services firms globally, known for its work in digital transformation, technology consulting, and business process outsourcing. Its business focuses on digital engineering, cloud services, AI and data analytics, enterprise applications (SAP, Oracle, Salesforce), IT infrastructure, and business process outsourcing. It has major delivery centers in India, including cities like Chennai, Pune, Hyderabad, and Bengaluru, and offices in over 35 countries; India is a major operational hub.

· Job Title: Big Data Testing
· Mode of Interview: Virtual
· Location: Pune
· Experience: 6-12 yrs
· Mode of Work: Hybrid
· Job Type: Contract to hire
· Notice Period: Immediate

Job Description:
Core skills (technical): proficient SQL expertise; proven experience developing solutions in at least one programming language. Data testing: ETL testing, data validation, transformation checks, SQL-based testing. Hands-on experience with Java/Python.
Good to have: automation experience on the backend; scripting in Python/Java/Scala/PySpark; Databricks usage; Azure knowledge.

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Our client is a leading technology services and consulting company focused on building innovative solutions that address clients' most complex digital transformation needs. Leveraging a holistic portfolio of capabilities in consulting, design, engineering, and operations, they help clients realize their boldest ambitions and build future-ready, sustainable businesses. With over 230,000 employees and business partners across 65 countries, they deliver on the promise of helping clients, colleagues, and communities thrive in an ever-changing world.

Job Title: Non-Cloud QA Test Engineer
Key Skills: SQL, ETL testing
Job Location: Pune
Experience: 6+ years
Work Mode: Hybrid
Employment Type: Contract
Notice Period: Immediate - 15 days

Job Description:
Must have: proficient SQL expertise; proven experience developing solutions in at least one programming language. Data testing: ETL testing, data validation, transformation checks, SQL-based testing. Hands-on experience with Java/Python.
Good to have: automation experience on the backend; scripting in Python/Java/Scala/PySpark; Databricks usage; Azure knowledge.

Interested candidates, please share your resume with barkavi@people-prime.com

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Tesco India • Bengaluru, Karnataka, India • Hybrid • Full-Time • Permanent • Apply by 23-Jul-2025

About the role
- Be a quick learner and be proactive in understanding the wider business requirements and linking them to other concepts in the domain
- Help implement better solutions independently and faster, with better ownership
- Help automate manual operational tasks, focus on creating reusable assets, and propel innovation
- Work very closely with team members and maintain healthy working relationships
- Be innovative and able to come up with ideas and reusable components and frameworks
- Be ready to support 24x7 as per the rota
- Be based in Bangalore, or in the process of moving, and be ready to come to the Tesco office when asked

What is in it for you
At Tesco, we are committed to providing the best for you. As a result, our colleagues enjoy a unique, differentiated, market-competitive reward package, based on current industry practices, for all the work they put into serving our customers, communities, and planet a little better every day. Our Tesco Rewards framework consists of three pillars: Fixed Pay, Incentives, and Benefits. Total Rewards offered at Tesco are determined by four principles: simple, fair, competitive, and sustainable.
- Salary: your fixed pay is the guaranteed pay as per your contract of employment.
- Leave & time-off: colleagues are entitled to 30 days of leave (18 days of earned leave, 12 days of casual/sick leave) and 10 national and festival holidays, as per company policy.
- Making retirement tension-free: in addition to statutory retirement benefits, Tesco enables colleagues to participate in voluntary programmes like NPS and VPF.
- Health is wealth: Tesco promotes programmes that support a culture of health and wellness, including insurance for colleagues and their families. Our medical insurance provides coverage for dependents, including parents or in-laws.
- Mental wellbeing: we offer mental health support through self-help tools, community groups, ally networks, face-to-face counselling, and more, for both colleagues and dependents.
- Financial wellbeing: through our financial literacy partner, we offer one-to-one financial coaching at discounted rates, as well as salary advances on earned wages upon request.
- Save As You Earn (SAYE): our SAYE programme allows colleagues to transition from being employees to Tesco shareholders through a structured 3-year savings plan.
- Physical wellbeing: our green campus promotes physical wellbeing with facilities that include a cricket pitch, football field, badminton and volleyball courts, along with indoor games, encouraging a healthier lifestyle.

You will be responsible for
- Building large-scale distributed systems
- Leading a team in a tech lead/module lead role, with good mentorship experience
- Good communication and very good documentation skills
- Showing maturity in understanding requirements and converting them into high-quality technical requirements
- Coding and designing end-to-end data flows and delivering on time
- Being resilient and flexible in working across multiple teams and internal teams
- Helping implement best practices in data architecture and enterprise software development
- Extensive experience working in agile data engineering teams
- Working very closely with the Engineering Manager, TPM, Product Manager, and stakeholders

You will need
- Basic concepts of data engineering; ingestion from diverse sources and file formats; Hadoop; data warehousing; designing and implementing large-scale distributed data platforms and data lakes
- Building distributed platforms or services
- SQL, Spark, query tuning, and performance optimization
- Advanced Scala or Java experience in general (e.g. functional programming, using case classes, complex data structures and algorithms); a typed Dataset sketch follows below
- Experience with SOLID and DRY principles, and good software architecture and design experience
- Languages: Python, Java, Scala
- Good experience in big data unit, system, integration, and regression testing
- DevOps experience with Jenkins, Maven, GitHub, Artifactory/JFrog, CI/CD
- Big data processing: Hadoop, Sqoop, Spark, and Spark Streaming
- Hadoop distributions: Cloudera/Hortonworks experience
- Data streaming: experience with Kafka and Spark Streaming
- Data validation and data quality
- Data lake and medallion architecture
- Shell scripting and automation using Ansible or related configuration management tools
- Agile processes and tools like Jira and Confluence
- Code management tools like Git
- File formats like ORC, Avro, Parquet, JSON, and CSV
- Big data orchestration: NiFi, Airflow, Spark on Kubernetes, YARN, Oozie, Azkaban

About us
Tesco in Bengaluru is a multi-disciplinary team serving our customers, communities, and planet a little better every day across markets. Our goal is to create a sustainable competitive advantage for Tesco by standardising processes, delivering cost savings, enabling agility through technological solutions, and empowering our colleagues to do even more for our customers. With cross-functional expertise, a wide network of teams, and strong governance, we reduce complexity, thereby offering high-quality services for our customers. Tesco in Bengaluru, established in 2004 to enable standardisation and build centralised capabilities and competencies, makes the experience better for our millions of customers worldwide and simpler for over 3,30,000 colleagues.

Tesco Technology
Today, our Technology team consists of over 5,000 experts spread across the UK, Poland, Hungary, the Czech Republic, and India. In India, our Technology division includes teams dedicated to Engineering, Product, Programme, Service Desk and Operations, Systems Engineering, Security & Capability, Data Science, and other roles. At Tesco, our retail platform comprises a wide array of capabilities, value propositions, and products, essential for crafting exceptional retail experiences for our customers and colleagues across all channels and markets. This platform encompasses all aspects of our operations, from identifying and authenticating customers, managing products, pricing, promoting, and enabling customers to discover products, to facilitating payment and ensuring delivery. By developing a comprehensive retail platform, we ensure that as customer touchpoints and devices evolve, we can consistently deliver seamless experiences. This adaptability allows us to respond flexibly without the need to overhaul our technology, thanks to the capabilities we have built.
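For the advanced-Scala item above, a hedged sketch of a typed Spark Dataset pipeline built on a case class (the data and names are invented for illustration):

```scala
import org.apache.spark.sql.SparkSession

// A case class gives the pipeline a typed schema and compile-time field checks.
case class Sale(store: String, sku: String, qty: Int)

object TypedPipeline extends App {
  val spark = SparkSession.builder().appName("typed-sales").master("local[*]").getOrCreate()
  import spark.implicits._

  val sales = Seq(Sale("BLR-01", "sku-1", 2), Sale("BLR-01", "sku-2", 5), Sale("PNQ-02", "sku-1", 1)).toDS()

  // Typed, functional-style aggregation: total quantity per store.
  sales.filter(_.qty > 0)
    .groupByKey(_.store)
    .mapGroups((store, rows) => (store, rows.map(_.qty).sum))
    .show()
}
```

The typed API trades some of Catalyst's optimization headroom for compile-time safety, which is often the right call in long-lived pipelines.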

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana

On-site

Job Information
Date Opened: 07/23/2025
Industry: Information Technology
Job Type: Full time
Work Experience: 5+ years
City: Hyderabad
State/Province: Telangana
Country: India
Zip/Postal Code: 500039

Job Description
Core Responsibilities
- Design and optimize batch/streaming data pipelines using Scala, Spark, and Kafka
- Implement real-time tokenization/cleansing microservices in Java
- Manage production workflows via Apache Airflow (batch scheduling)
- Conduct root-cause analysis of data incidents using Spark/Dynatrace logs
- Monitor EMR clusters and optimize performance via YARN/Dynatrace metrics
- Ensure data security through HashiCorp Vault (Transform Secrets Engine)
- Validate data integrity and configure alerting systems

Requirements
Technical Requirements
- Programming: Scala (Spark batch/streaming), Java (real-time microservices)
- Big data systems: Apache Spark, EMR, HDFS, YARN resource management
- Cloud & storage: Amazon S3, EKS
- Security: HashiCorp Vault, tokenization vs. encryption (FPE); a tokenization-interface sketch follows below
- Orchestration: Apache Airflow (batch scheduling)
Operational Excellence
- Spark log analysis, Dynatrace monitoring, incident handling, data validation

Mandatory Competencies
- Expertise in distributed data processing (Spark on EMR/Hadoop)
- Proficiency in shell scripting and YARN job management
- Ability to implement format-preserving encryption (tokenization solutions)
- Experience with production troubleshooting (executor logs, metrics, RCA)

Benefits
- Insurance: family term insurance
- PF
- Paid time off: 20 days
- Holidays: 10 days
- Flexi timing
- Competitive salary
- Diverse & inclusive workspace
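On tokenization vs. encryption, the sketch below illustrates the contract only: tokens are opaque references resolved through a vault, unlike ciphertext, which is derived from the plaintext and a key. The in-memory implementation is purely hypothetical; a real service would call Vault's Transform Secrets Engine rather than hold a local map.

```scala
import java.util.UUID
import scala.collection.concurrent.TrieMap

// Conceptual contract for a tokenization service (hypothetical, for illustration).
trait TokenService {
  def tokenize(pan: String): String
  def detokenize(token: String): Option[String]
}

object InMemoryTokenService extends TokenService {
  private val vault = TrieMap.empty[String, String]

  // Keeps the last four digits visible, loosely mimicking format-preserving output.
  def tokenize(pan: String): String = {
    val token = "tok-" + UUID.randomUUID().toString.take(8) + "-" + pan.takeRight(4)
    vault.put(token, pan)
    token
  }

  def detokenize(token: String): Option[String] = vault.get(token)
}
```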

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana

On-site

About the Role:
Grade Level (for internal use): 10

The Impact: The work you do will be used every single day; it's the essential code you'll write that provides the data and analytics required for crucial, daily decisions in the markets.

What's in it for you:
- Build a career with a global company
- Work on code that fuels the global financial markets
- Grow and improve your skills by working on enterprise-level products and new technologies

Responsibilities:
- Identify, prioritize, and execute tasks in an Agile software development environment
- Develop tools and applications by producing clean, high-quality, and efficient code
- Develop solutions to support key business needs
- Engineer components and common services based on standard development models, languages, and tools
- Produce system design documents and participate actively in technical walkthroughs
- Build and maintain a data environment for speed, accuracy, consistency, and uptime
- Ensure data governance principles are adopted, and data quality checks and data lineage are implemented in each hop of the data
- Collaborate effectively with technical and non-technical partners
- As a team member, continuously improve the architecture

What We're Looking For:
Basic Qualifications
- Bachelor's/Master's Degree in Computer Science, Information Systems, or equivalent
- Minimum 6+ years of work experience in application development
- A Java full-stack developer with exposure to the Spring Framework (Spring Boot, Spring Data, Spring API Gateway) and Angular/React
- 3+ years of Java object-oriented software development experience developing server-side components in a near-real-time, large-scale enterprise environment
- 3+ years of SOA development experience with strong skills in Spring Boot, JMS, JPA, Hibernate, and Web Services (SOAP & RESTful)
- Experience with serverless technologies and containerization using container platforms and container orchestration systems
- Proficient in building applications using API Gateway, service registry, service discovery, and the Circuit Breaker pattern (a minimal sketch follows below)
- Experience working on AWS
- Familiarity with big data technologies such as big data processing engines, Hive, Scala, and EMR

Nice to Have
- Knowledge of Python, JavaScript, and React
- Knowledge of streaming systems such as distributed streaming platforms and big data processing engines (streaming) is a plus
- Must be a quick learner, able to evaluate and embrace new technologies in the big data space
- Excellent written and verbal communication skills; good collaboration skills
- Ability to lead, train, and mentor

What's In It For You?
Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments, and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability, to analyzing energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it, we are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership
At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people; that's why we provide everything you and your career need to thrive at S&P Global. Our benefits include:
- Health & Wellness: health care coverage designed for the mind and body
- Flexible Downtime: generous time off helps keep you energized for your time on
- Continuous Learning: access a wealth of resources to grow your career and learn valuable new skills
- Invest in Your Future: secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs
- Family Friendly Perks: it's not just about you; S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families
- Beyond the Basics: from retail discounts to referral incentive awards, small perks can make a big difference
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, "pre-employment training" or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.

Equal Opportunity Employer
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group), SWP Priority - Ratings - (Strategic Workforce Planning)
Job ID: 318143
Posted On: 2025-07-23
Location: Hyderabad, Telangana, India
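For the Circuit Breaker pattern named above, here is a deliberately minimal Scala sketch of the core idea; production code would use a library with half-open probes and timeouts (e.g., Resilience4j), and everything here is illustrative:

```scala
import java.util.concurrent.atomic.AtomicInteger
import scala.util.{Failure, Success, Try}

// After maxFailures consecutive errors the circuit "opens" and calls fail fast
// instead of hammering a struggling downstream service.
class CircuitBreaker(maxFailures: Int) {
  private val consecutiveFailures = new AtomicInteger(0)

  def call[A](op: => A): Try[A] =
    if (consecutiveFailures.get() >= maxFailures)
      Failure(new IllegalStateException("circuit open: failing fast"))
    else
      Try(op) match {
        case ok @ Success(_)  => consecutiveFailures.set(0); ok
        case err @ Failure(_) => consecutiveFailures.incrementAndGet(); err
      }
}

val breaker = new CircuitBreaker(maxFailures = 3)
val response = breaker.call { /* e.g., an HTTP call to a downstream service */ "200 OK" }
println(response) // Success(200 OK)
```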

Posted 2 weeks ago

Apply


2.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Basic qualifications:
- 2+ years of experience processing data with a massively parallel technology (such as Redshift, Teradata, Netezza, Spark, or a Hadoop-based big data solution)
- 2+ years of experience with relational database technology (such as Redshift, Oracle, MySQL, or MS SQL)
- 2+ years of experience developing and operating large-scale data structures for business intelligence analytics (using ETL/ELT processes)
- 5+ years of data engineering experience
- Experience managing a data or BI team
- Experience communicating to senior management and customers verbally and in writing
- Experience leading and influencing the data or BI strategy of your team or organization
- Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or NodeJS

As a Data Engineering Manager, you will lead a team of data engineers, front-end engineers, and business intelligence engineers. You will own our internal data products (Yoda), transform to AI, build agents, and scale them for IN and emerging stores. You will provide technical leadership, drive application and data engineering initiatives, and build end-to-end data solutions that are highly available, scalable, stable, secure, and cost-effective. You strive for simplicity and demonstrate creativity with sound judgement. You deliver data and reporting solutions that are customer-focused, easy to consume, and create business impact. You are passionate about working with huge datasets and have experience with the organization and curation of data for analytics. You have a strategic and long-term view on the architecture of advanced data ecosystems. You are experienced in building efficient and scalable data services and can integrate data systems with AWS tools and services to support a variety of customer use cases/applications.

Key job responsibilities
- Lead a team of data engineers, front-end engineers, and business intelligence engineers to deliver cross-functional, data and application engineering projects for databases, analytics, and AI/ML services
- Establish and clearly communicate organizational vision, goals, and success measures
- Collaborate with business stakeholders to develop the roadmap and product requirements
- Build, own, prioritize, lead, and deliver a roadmap of large and complex multi-functional projects and programs
- Manage AWS infrastructure, EMR cost, and RDS/DynamoDB instances
- Interface with other technology teams to extract, transform, and load data from a wide variety of data sources
- Own the design, development, and maintenance of metrics, reports, dashboards, etc. to drive key business decisions

About the team
CoBRA is the central BI reporting and analytics org for IN stores and the AI partner for international emerging stores. CoBRA's mission is to empower category and seller orgs, including brand, account, marketing, and product/program teams, with self-service products using AI (Yoda and Bedrock agents), build actionable insights (QuickSight Q, custom agents, Q Business), and help them make faster and smarter decisions using science solutions across the Amazon flywheel on all inputs (selection, pricing, and speed).

Preferred qualifications:
- Experience with big data technologies such as Hadoop, Hive, Spark, and EMR
- Experience with AWS tools and technologies (Redshift, S3, EC2)
- Knowledge of building AI tools, AWS Bedrock agents, and LLM/foundation models
- Experience supporting ML models for data needs
- Exposure to prompt engineering and upcoming AI technologies and their landscape

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

Remote

About Company: Our client is a multinational IT services and consulting company headquartered in the USA, with revenues of 19.7 billion USD, a global workforce of 3,50,000, and a NASDAQ listing. It is one of the leading IT services firms globally, known for its work in digital transformation, technology consulting, and business process outsourcing. Its business focuses on digital engineering, cloud services, AI and data analytics, enterprise applications (SAP, Oracle, Salesforce), IT infrastructure, and business process outsourcing. It has major delivery centers in India, including cities like Chennai, Pune, Hyderabad, and Bengaluru, and offices in over 35 countries; India is a major operational hub.

· Job Title: Big Data Testing
· Location: Pune
· Experience: 6+ years
· Work Mode (WFO/Remote/Hybrid): Hybrid
· Job Type: Contract to hire
· Notice Period: Immediate joiners

Detailed JD:
Core skills (technical): proficient SQL expertise; proven experience developing solutions in at least one programming language. Data testing: ETL testing, data validation, transformation checks, SQL-based testing. Hands-on experience with Java/Python.
Good to have: automation experience on the backend; scripting in Python/Java/Scala/PySpark; Databricks usage; Azure knowledge.

Posted 2 weeks ago

Apply

6.0 - 9.0 years

8 - 15 Lacs

Mumbai, Navi Mumbai, Pune

Work from Office

Roles and Responsibilities:
- Design, develop, test, deploy, and maintain large-scale data processing pipelines using the Scala programming language.
- Collaborate with cross-functional teams to identify business requirements and design solutions that meet those needs.
- Develop high-quality code that is efficient, scalable, and easy to maintain.
- Troubleshoot issues related to data processing workflows and optimize system performance.

Job Requirements:
- 6-9 years of experience developing applications using the Scala programming language.
- Strong understanding of Spark SQL concepts for big data analysis (see the sketch below).
- Experience working with relational databases (e.g., MySQL) for storing and retrieving large datasets.
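For the Spark SQL requirement, here is a minimal sketch that registers a DataFrame as a temp view and queries it with plain SQL (the sample data is invented):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("spark-sql-demo").getOrCreate()
import spark.implicits._

// Register an in-memory DataFrame as a temp view, then query it with SQL.
Seq(("2025-07-01", "Mumbai", 120.0), ("2025-07-01", "Pune", 80.0), ("2025-07-02", "Mumbai", 95.5))
  .toDF("ds", "city", "amount")
  .createOrReplaceTempView("orders")

spark.sql(
  """SELECT city, SUM(amount) AS revenue
    |FROM orders
    |GROUP BY city
    |ORDER BY revenue DESC""".stripMargin
).show()
```

The same query plan is produced whether you write SQL or the equivalent DataFrame calls; Catalyst optimizes both identically.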

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

To get the best candidate experience, please consider applying for a maximum of 3 roles within 12 months to ensure you are not duplicating efforts. Job Category Software Engineering Job Details About Salesforce We’re Salesforce, the Customer Company, inspiring the future of business with AI+ Data +CRM. Leading with our core values, we help companies across every industry blaze new trails and connect with customers in a whole new way. And, we empower you to be a Trailblazer, too — driving your performance and career growth, charting new paths, and improving the state of the world. If you believe in business as the greatest platform for change and in companies doing well and doing good – you’ve come to the right place. Role Description Salesforce has immediate opportunities for software developers who want their lines of code to have significant and measurable positive impact for users, the company's bottom line, and the industry. You will be working with a group of world-class engineers to build the breakthrough features our customers will love, adopt, and use while keeping our trusted CRM platform stable and scalable. The software engineer role at Salesforce encompasses architecture, design, implementation, and testing to ensure we build products right and release them with high quality. We pride ourselves on writing high quality, maintainable code that strengthens the stability of the product and makes our lives easier. We embrace the hybrid model and celebrate the individual strengths of each team member while cultivating everyone on the team to grow into the best version of themselves. We believe that autonomous teams with the freedom to make decisions will empower the individuals, the product, the company, and the customers they serve to thrive. Your Impact As a Backend Software Engineer, your job responsibilities will include: Build new and exciting components in an ever-growing and evolving market technology to provide scale and efficiency. Develop high-quality, production-ready code that millions of users of our cloud platform can use. Design, implement, and tune robust APIs and API framework-related features that perform and scale in a multi-tenant environment. Work in a Hybrid Engineering model and contribute to all phases of SDLC including design, implementation, code reviews, automation, and testing of the features. Build efficient components/algorithms on a microservice multi-tenant SaaS cloud environment Code review, mentoring junior engineers, and providing technical guidance to the team (depending on the seniority level) Required Skills Mastery of multiple programming languages and platforms 3 + years of software development experience Deep knowledge of object-oriented programming and other scripting languages: Java, Python, Scala C#, Go, Node.JS and C++. Strong SQL skills and experience and experience with relational and non-relational databases e.g. (Postgress/Trino/redshift/Mongo). Experience with developing SAAS products over public cloud infrastructure - AWS/Azure/GCP. Proven experience designing and developing distributed systems at scale. A deeper understanding of software development best practices and demonstrate leadership skills. Degree or equivalent relevant experience required. Experience will be evaluated based on the core competencies for the role (e.g. extracurricular leadership roles, military experience, volunteer roles, work experience, etc.) 
Preferred Skills
- Experience with Big Data/ML and S3.
- Hands-on experience with streaming technologies like Kafka.
- Experience with Elasticsearch.
- Experience with Terraform, Kubernetes, Docker.
- Experience working in a fast-paced and rapidly growing multinational organization.

Benefits & Perks
- Comprehensive benefits package including well-being reimbursement, generous parental leave, adoption assistance, fertility benefits, and more!
- World-class enablement and on-demand training with Trailhead.com.
- Exposure to executive thought leaders and regular 1:1 coaching with leadership.
- Volunteer opportunities and participation in our 1:1:1 model for giving back to the community.
For more details, visit https://www.salesforcebenefits.com/

Accommodations
If you require assistance due to a disability applying for open positions, please submit a request via the Accommodations Request Form.

Posting Statement
Salesforce is an equal opportunity employer and maintains a policy of non-discrimination with all employees and applicants for employment. What does that mean exactly? It means that at Salesforce, we believe in equality for all. And we believe we can lead the path to equality in part by creating a workplace that's inclusive and free from discrimination. Know your rights: workplace discrimination is illegal. Any employee or potential employee will be assessed on the basis of merit, competence, and qualifications – without regard to race, religion, color, national origin, sex, sexual orientation, gender expression or identity, transgender status, age, disability, veteran or marital status, political viewpoint, or other classifications protected by law. This policy applies to current and prospective employees, no matter where they are in their Salesforce employment journey. It also applies to recruiting, hiring, job assignment, compensation, promotion, benefits, training, assessment of job performance, discipline, termination, and everything in between. Recruiting, hiring, and promotion decisions at Salesforce are fair and based on merit. The same goes for compensation, benefits, promotions, transfers, reduction in workforce, recall, training, and education.
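The streaming experience this posting prefers can be illustrated with a minimal Kafka producer in Scala using the standard Java client. The broker address, topic name, key, and payload below are hypothetical, a sketch rather than anything from the posting:

```scala
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord}
import org.apache.kafka.common.serialization.StringSerializer

object EventPublisher {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092") // hypothetical broker
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
    props.put(ProducerConfig.ACKS_CONFIG, "all") // wait for full replication before acking

    val producer = new KafkaProducer[String, String](props)
    try {
      // Key by tenant so all of a tenant's records land on one partition,
      // preserving per-tenant ordering in a multi-tenant environment
      val record = new ProducerRecord[String, String]("user-events", "tenant-42", """{"action":"login"}""")
      producer.send(record).get() // block for the broker's ack in this simple sketch
    } finally {
      producer.close()
    }
  }
}
```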

Posted 2 weeks ago

Apply

7.0 - 11.0 years

0 Lacs

Maharashtra

On-site

As a Scala Developer at our company, you will play a crucial role in designing, building, and enhancing our clients' online platform to ensure optimal performance and reliability. Your responsibilities will include researching, proposing, and implementing cutting-edge technology solutions while adhering to industry best practices and standards. You will be accountable for the resilience and availability of various products and will collaborate closely with a diverse team to achieve collective goals.

To excel in this role, we are seeking a highly skilled Scala Developer with over 7 years of experience in crafting scalable and high-performance backend systems. Your expertise in functional programming, familiarity with contemporary data processing frameworks, and proficiency in working within cloud-native environments will be invaluable. You will be tasked with designing, creating, and managing backend services and APIs using Scala, optimizing existing codebases for enhanced performance, scalability, and reliability, and ensuring the development of clean, maintainable, and well-documented code.

Collaboration is key in our team, and you will work closely with product managers, frontend developers, and QA engineers to deliver exceptional results. Your role will also involve conducting code reviews, sharing knowledge, and mentoring junior developers to foster a culture of continuous improvement. Experience with technologies such as Akka, Play Framework, and Kafka, as well as integration with SQL/NoSQL databases and external APIs, will be essential in driving our projects forward.

Your hands-on experience with Scala and functional programming principles, coupled with your proficiency in RESTful APIs, microservices architecture, and API integration, will be critical in meeting the demands of the role. A solid grasp of concurrency, asynchronous programming, and stream processing, along with familiarity with SQL/NoSQL databases and tools like SBT or Maven, will further enhance your contributions to our team. Exposure to Git, Docker, and CI/CD pipelines, as well as comfort in Agile/Scrum environments, will be advantageous. Moreover, your familiarity with Apache Spark, Kafka, or other big data tools, along with experience in cloud platforms like AWS, GCP, or Azure, and an understanding of DevOps practices, will position you as a valuable asset in our organization. Proficiency in testing frameworks such as ScalaTest, Specs2, or Mockito will round out your skill set and enable you to deliver high-quality solutions effectively.

In return, we offer a stimulating and innovative work environment where you will have ample opportunities for learning and professional growth. Join us in shaping the future of our clients' online platform and making a tangible impact in the digital realm.
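As a taste of the Akka stack this role calls for, here is a minimal sketch of a REST endpoint built with Akka HTTP on a typed actor system. The route, port, and service name are hypothetical; a real service would compose many routes and wire in its own domain logic:

```scala
import akka.actor.typed.ActorSystem
import akka.actor.typed.scaladsl.Behaviors
import akka.http.scaladsl.Http
import akka.http.scaladsl.server.Directives._

object HealthApi {
  def main(args: Array[String]): Unit = {
    // A typed ActorSystem doubles as the Http() extension's system provider
    implicit val system: ActorSystem[Nothing] = ActorSystem(Behaviors.empty, "health-api")

    // A single hypothetical endpoint: GET /health returns a plain "OK"
    val route =
      path("health") {
        get {
          complete("OK")
        }
      }

    Http().newServerAt("0.0.0.0", 8080).bind(route)
    println("Server online at http://localhost:8080/health")
  }
}
```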

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Kochi, Kerala

On-site

Candidates ready to join immediately can share their details via email at nitin.patil@ust.com for quick processing.

With over 5 years of experience, the successful candidate will have the following roles and responsibilities:
- Designing, developing, and maintaining scalable data pipelines using Spark (PySpark or Spark with Scala).
- Constructing data ingestion and transformation frameworks for both structured and unstructured data sources.
- Collaborating with data analysts, data scientists, and business stakeholders to understand requirements and deliver reliable data solutions.
- Handling large volumes of data while ensuring quality, integrity, and consistency.
- Optimizing data workflows for performance, scalability, and cost efficiency on cloud platforms such as AWS, Azure, or GCP.
- Implementing data quality checks and automation for ETL/ELT pipelines.
- Monitoring and troubleshooting data issues in production environments and conducting root cause analysis.
- Documenting technical processes, system designs, and operational procedures.

Key Skills Required:
- Minimum 3 years of experience as a Data Engineer or in a similar role.
- Proficiency with PySpark or Spark using Scala.
- Strong grasp of SQL for data querying and transformation.
- Experience working with any cloud platform (AWS, Azure, or GCP).
- Sound understanding of data warehousing concepts and big data architecture.
- Familiarity with version control systems like Git.

Desired Skills:
- Exposure to data orchestration tools such as Apache Airflow, Databricks Workflows, or equivalent.
- Knowledge of Delta Lake, HDFS, or Kafka.
- Familiarity with containerization tools like Docker/Kubernetes.
- Experience with CI/CD practices and DevOps principles.
- Understanding of data governance, security, and compliance standards.
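A minimal sketch of the kind of ingestion-and-transformation pipeline this posting describes, written in Scala with Spark. The landing and curated paths, column names, and file formats are all hypothetical:

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions._

object IngestEvents {
  // A small, testable transformation: parse timestamps, derive a partition
  // column, drop malformed rows, and deduplicate on the event key
  def transform(raw: DataFrame): DataFrame =
    raw
      .withColumn("event_time", to_timestamp(col("event_time"), "yyyy-MM-dd HH:mm:ss"))
      .withColumn("event_date", to_date(col("event_time")))
      .filter(col("event_time").isNotNull && col("user_id").isNotNull)
      .dropDuplicates("event_id")

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("ingest-events").getOrCreate()

    // Hypothetical landing zone of raw CSV drops
    val raw = spark.read.option("header", "true").csv("s3a://landing/events/")
    val clean = transform(raw)

    // Partition by date so downstream queries prune efficiently
    clean.write
      .mode("append")
      .partitionBy("event_date")
      .parquet("s3a://curated/events/")

    spark.stop()
  }
}
```

Keeping the transformation in a pure DataFrame-to-DataFrame function makes it straightforward to unit-test against a local SparkSession before wiring it into an orchestrator such as Airflow.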

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Database Designer / Senior Data Engineer at VE3, you will be responsible for architecting and designing modern, scalable data platforms on AWS and/or Azure, ensuring best practices for security, cost optimization, and performance. You will develop detailed data models and document data dictionaries and lineage to support data solutions. Additionally, you will build and optimize ETL/ELT pipelines using languages such as Python, SQL, and Scala, and services like AWS Glue, Azure Data Factory, and open-source frameworks like Spark and Airflow.

Collaboration is key in this role, as you will work closely with data analysts, BI teams, and stakeholders to translate business requirements into data solutions and dashboards. You will also partner with DevOps/Cloud Ops to automate CI/CD for data code and infrastructure, ensuring governance, security, and compliance standards such as GDPR and ISO 27001 are met. Monitoring, alerting, and data quality frameworks will be implemented to maintain data integrity. As a mentor, you will guide junior engineers and stay updated on emerging big data and streaming technologies to enhance our toolset.

The ideal candidate should have a Bachelor's degree in Computer Science, Engineering, IT, or a similar field, with at least 3 years of hands-on experience in a Database Designer / Data Engineer role within a cloud environment. Technical skills required include expertise in SQL, proficiency in Python or Scala, and familiarity with cloud services like AWS (Glue, S3, Kinesis, RDS) or Azure (Data Factory, Data Lake Storage, SQL Database). Strong communication skills are essential, along with an analytical mindset to address performance bottlenecks and scaling challenges. A collaborative attitude in agile/scrum settings is highly valued.

Nice-to-have qualifications include certifications in AWS or Azure data analytics, exposure to data science workflows, experience with containerized workloads, and familiarity with DataOps practices and tools.

At VE3, we are committed to fostering a diverse and inclusive environment where every voice is heard and every idea can contribute to tomorrow's breakthrough.
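The data quality frameworks mentioned above can start as simple assertion helpers run after each load. Here is a minimal sketch in Scala with Spark, assuming a hypothetical curated customers table and illustrative thresholds:

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions._

object QualityChecks {
  // Fail the pipeline loudly if the null rate of a column exceeds a threshold
  def assertNullRate(df: DataFrame, column: String, maxRate: Double): Unit = {
    val total = df.count()
    val nulls = df.filter(col(column).isNull).count()
    val rate  = if (total == 0) 0.0 else nulls.toDouble / total
    require(rate <= maxRate, f"Null rate for $column is $rate%.3f, above the $maxRate%.3f threshold")
  }

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("quality-checks").getOrCreate()
    val customers = spark.read.parquet("s3a://curated/customers/") // hypothetical curated table

    assertNullRate(customers, "customer_id", maxRate = 0.0)  // keys must never be null
    assertNullRate(customers, "email", maxRate = 0.05)       // tolerate a small gap in contact data

    spark.stop()
  }
}
```

Checks like these slot naturally into an Airflow task between the load and publish steps, so a failed assertion halts the pipeline before bad data reaches dashboards.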

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

At PwC, the focus in data and analytics is on leveraging data to drive insights and make informed business decisions. Advanced analytics techniques are utilized to help clients optimize their operations and achieve strategic goals. In data analysis at PwC, the emphasis is on utilizing advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. Skills in data manipulation, visualization, and statistical modeling are leveraged to support clients in solving complex business problems.

PwC US - Acceleration Center is currently looking for a highly skilled and experienced GenAI Data Scientist to join the team at the Senior Associate level. As a GenAI Data Scientist, the critical role involves developing and implementing machine learning models and algorithms for GenAI projects. The ideal candidate should have a strong background in data science, with a focus on GenAI technologies, and possess a solid understanding of statistical analysis, machine learning, data visualization, and application programming. Candidates with 4+ years of hands-on experience are preferred for this position.

Responsibilities:
- Collaborate with cross-functional teams to understand business requirements and identify opportunities for applying GenAI technologies.
- Develop and implement machine learning models and algorithms for GenAI projects.
- Perform data cleaning, preprocessing, and feature engineering to prepare data for analysis.
- Collaborate with data engineers to ensure efficient data processing and integration into machine learning pipelines.
- Validate and evaluate model performance using appropriate metrics and techniques.
- Develop and deploy production-ready machine learning applications and solutions.
- Utilize object-oriented programming skills to build robust and scalable software components.
- Utilize Kubernetes for container orchestration and deployment.
- Design and build chatbots using GenAI technologies.
- Communicate findings and insights to stakeholders through data visualizations, reports, and presentations.
- Stay up to date with the latest advancements in GenAI technologies and recommend innovative solutions to enhance data science processes.

Requirements:
- 3-5 years of relevant technical/technology experience, with a focus on GenAI projects.
- Strong programming skills in languages such as Python, R, or Scala.
- Proficiency in machine learning libraries and frameworks such as TensorFlow, PyTorch, or scikit-learn.
- Experience with data preprocessing, feature engineering, and data wrangling techniques.
- Solid understanding of statistical analysis, hypothesis testing, and experimental design.
- Familiarity with cloud computing platforms such as AWS, Azure, or Google Cloud.
- Knowledge of data visualization tools and techniques.
- Strong problem-solving and analytical skills.
- Excellent communication and collaboration abilities.
- Ability to work in a fast-paced and dynamic environment.

Preferred Qualifications:
- Experience with object-oriented programming languages such as Java, C++, or C#.
- Experience with developing and deploying machine learning applications in production environments.
- Understanding of data privacy and compliance regulations.
- Relevant certifications in data science or GenAI technologies.

Nice-to-Have Skills:
- Experience with Azure AI Search, Azure Doc Intelligence, Azure OpenAI, AWS Textract, AWS OpenSearch, AWS Bedrock.
- Familiarity with LLM-backed agent frameworks such as AutoGen, LangChain, Semantic Kernel, etc.
- Experience in chatbot design and development.

Professional and Educational Background: Any graduate / BE / B.Tech / MCA / M.Sc / M.E / M.Tech / Master's Degree / MBA
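Since Scala is among the languages this posting accepts, here is a minimal sketch of a classification pipeline using Spark MLlib in Scala. The feature table, column names, label encoding, and model choice are all hypothetical, intended only to illustrate the model-building and evaluation steps listed above:

```scala
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.feature.{StringIndexer, VectorAssembler}
import org.apache.spark.sql.SparkSession

object ChurnModel {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("churn-model").getOrCreate()

    // Hypothetical feature table; "churned" is assumed to be a 0.0/1.0 double label
    val df = spark.read.parquet("s3a://features/churn/")

    // Encode a categorical column and assemble numeric features into a vector
    val indexer = new StringIndexer().setInputCol("plan").setOutputCol("plan_idx")
    val assembler = new VectorAssembler()
      .setInputCols(Array("plan_idx", "tenure_months", "monthly_spend"))
      .setOutputCol("features")
    val lr = new LogisticRegression().setLabelCol("churned").setFeaturesCol("features")

    // Hold out 20% of the data for a quick sanity check
    val Array(train, test) = df.randomSplit(Array(0.8, 0.2), seed = 42)
    val model = new Pipeline().setStages(Array(indexer, assembler, lr)).fit(train)

    model.transform(test).select("churned", "prediction").show(5)
    spark.stop()
  }
}
```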

Posted 2 weeks ago

Apply

7.0 - 11.0 years

18 - 30 Lacs

Bengaluru

Hybrid

We have an opening for the role of Big Data Developer with an MNC.

Mandatory Skills:
- CRM Web UI Framework, including Component Workbench
- Hands-on experience in BOL-GENIL programming
- Knowledge of the 1-Order Framework, including APIs
- Involvement in an SAP CRM EHP upgrade
- ABAP Objects, Workflows, BAPIs, BADIs, report programming

Experience: 7-11 years
Location: Bangalore (Whitefield)
Notice Period: 0-30 days
Work Mode: Hybrid (3 days work from office)

Posted 2 weeks ago

Apply
Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies