15.0 - 20.0 years
5 - 9 Lacs
Mumbai
Work from Office
Location: Mumbai

Role Overview: As a Big Data Engineer, you'll design and build robust data pipelines on Cloudera using Spark (Scala/PySpark) for ingestion, transformation, and processing of high-volume data from banking systems.

Key Responsibilities:
- Build scalable batch and real-time ETL pipelines using Spark and Hive
- Integrate structured and unstructured data sources
- Perform performance tuning and code optimization
- Support orchestration and job scheduling (NiFi, Airflow)

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Experience: 3-15 years
- Proficiency in PySpark/Scala with Hive/Impala
- Experience with data partitioning, bucketing, and optimization (see the sketch after this listing)
- Familiarity with Kafka, Iceberg, and NiFi is a must
- Knowledge of banking or financial datasets is a plus
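For context on the partitioning and bucketing expectations listed above, here is a minimal PySpark sketch of writing a partitioned, bucketed Hive table; all table, column, and path names are hypothetical and only illustrate the technique:

```python
# Illustrative only: table, column, and path names are assumptions, not taken from the posting.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("partitioned-bucketed-write")
    .enableHiveSupport()  # Hive metastore integration, since the role works with Hive/Impala
    .getOrCreate()
)

# Hypothetical source: daily transaction extracts landed as Parquet.
txns = spark.read.parquet("/data/raw/transactions/")

# Partition by business date and bucket by account id so that date-range scans prune
# partitions and account-level joins avoid full shuffles.
(
    txns.write
    .mode("overwrite")
    .partitionBy("txn_date")
    .bucketBy(64, "account_id")
    .sortBy("account_id")
    .format("parquet")
    .saveAsTable("curated.transactions")
)
```

Note that Spark only supports bucketed writes through saveAsTable, which is why the sketch writes to a metastore table rather than a bare path.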
Posted 1 week ago
3.0 - 6.0 years
14 - 18 Lacs
Bengaluru
Work from Office
As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the Azure Cloud Data Platform.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Strong and proven background in Information Technology, with working knowledge of .NET Core, C#, REST API, LINQ, Entity Framework, and XUnit; troubleshooting issues related to code performance
- Working knowledge of Angular 15 or later, TypeScript, the Jest framework, HTML 5 and CSS 3, and MS SQL databases, including troubleshooting DB performance issues
- Good understanding of CQRS, mediator, and repository patterns
- Good understanding of CI/CD pipelines and SonarQube, plus messaging and reverse proxies

Preferred technical and professional experience:
- Good understanding of AuthN and AuthZ techniques (Windows, basic, JWT)
- Good understanding of Git and its processes, such as pull requests, merges, pulls, and commits
- Methodology skills such as Agile, TDD, and UML
Posted 1 week ago
5.0 - 10.0 years
14 - 18 Lacs
Bengaluru
Work from Office
As a Data Engineer at IBM you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include:
- Implementing and validating predictive models, and creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques
- Designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements
- Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours
- Building teams and writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modeling results

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- We are seeking a skilled Azure Data Engineer with 5+ years of experience, including 3+ years of hands-on experience with ADF/Databricks
- The ideal candidate has Databricks, Data Lake, and Python programming skills
- Experience deploying to Databricks
- Familiarity with Azure Data Factory

Preferred technical and professional experience:
- Good communication skills
- 3+ years of experience with ADF/Databricks/Data Lake
- Ability to communicate results to technical and non-technical audiences
Posted 1 week ago
4.0 - 7.0 years
14 - 17 Lacs
Gurugram
Work from Office
A Data Engineer specializing in enterprise data platforms, experienced in building, managing, and optimizing data pipelines for large-scale environments, with expertise in big data technologies, distributed computing, data ingestion, and transformation frameworks. Proficient in Apache Spark, PySpark, Kafka, and Iceberg tables, and able to design and implement scalable, high-performance data processing solutions.

What you'll do: as a Data Engineer - Data Platform Services, responsibilities include:

Data Ingestion & Processing
- Designing and developing data pipelines to migrate workloads from IIAS to Cloudera Data Lake
- Implementing streaming and batch data ingestion frameworks using Kafka and Apache Spark (PySpark); see the sketch after this listing
- Working with IBM CDC and Universal Data Mover to manage data replication and movement

Big Data & Data Lakehouse Management
- Implementing Apache Iceberg tables for efficient data storage and retrieval
- Managing distributed data processing with Cloudera Data Platform (CDP)
- Ensuring data lineage, cataloging, and governance for compliance with bank and regulatory policies

Optimization & Performance Tuning
- Optimizing Spark and PySpark jobs for performance and scalability
- Implementing data partitioning, indexing, and caching to enhance query performance
- Monitoring and troubleshooting pipeline failures and performance bottlenecks

Security & Compliance
- Ensuring secure data access, encryption, and masking using Thales CipherTrust
- Implementing role-based access controls (RBAC) and data governance policies
- Supporting metadata management and data quality initiatives

Collaboration & Automation
- Working closely with Data Scientists, Analysts, and DevOps teams to integrate data solutions
- Automating data workflows using Airflow and implementing CI/CD pipelines with GitLab and Sonatype Nexus
- Supporting Denodo-based data virtualization for seamless data access

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- 4-7 years of experience in big data engineering, data integration, and distributed computing
- Strong skills in Apache Spark, PySpark, Kafka, SQL, and Cloudera Data Platform (CDP)
- Proficiency in Python or Scala for data processing
- Experience with data pipeline orchestration tools (Apache Airflow, Stonebranch UDM)
- Understanding of data security, encryption, and compliance frameworks

Preferred technical and professional experience:
- Experience in banking or financial services data platforms
- Exposure to Denodo for data virtualization and DGraph for graph-based insights
- Familiarity with cloud data platforms (AWS, Azure, GCP)
- Certifications in Cloudera Data Engineering, IBM Data Engineering, or AWS Data Analytics
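As context for the Kafka and Iceberg ingestion responsibilities above, a minimal PySpark Structured Streaming sketch that appends Kafka events into an Iceberg table; the broker address, topic, schema, and table names are assumptions, not details from the posting:

```python
# Illustrative sketch only: topic, schema, and table names are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("kafka-to-iceberg").getOrCreate()

# Hypothetical payload schema for the incoming events.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("account_id", StringType()),
    StructField("event_ts", TimestampType()),
])

# Read the raw Kafka stream and parse the JSON value column.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "core-banking-events")
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Append micro-batches into an Iceberg table registered in the cluster's catalog.
query = (
    events.writeStream
    .format("iceberg")
    .outputMode("append")
    .option("checkpointLocation", "/chk/core_banking_events")
    .toTable("lake.core_banking_events")
)
query.awaitTermination()
```

The checkpoint location is what makes restarts exactly-once at the sink, which matters for the governance and lineage goals described in the posting.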
Posted 1 week ago
6.0 - 11.0 years
14 - 17 Lacs
Pune
Work from Office
As an Application Developer, you will lead IBM into the future by translating system requirements into the design and development of customized systems in an agile environment. The success of IBM is in your hands as you transform vital business needs into code and drive innovation. Your work will power IBM and its clients globally, collaborating and integrating code into enterprise systems. You will have access to the latest education, tools, and technology, and a limitless career path with the world's technology leader. Come to IBM and make a global impact.

Responsibilities:
- Manage end-to-end feature development and resolve challenges faced in implementing it
- Learn new technologies and apply them in feature development within the time frame provided
- Manage debugging, root cause analysis, and fixing of issues reported on the Content Management back-end software system

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Overall more than 6 years of experience, with more than 4 years of strong hands-on experience in Python and Spark
- Strong technical ability to understand, design, write, and debug applications in Python and PySpark
- Strong problem-solving skills
- Good to have: hands-on experience with a cloud technology (AWS/GCP/Azure)

Preferred technical and professional experience:
- Good to have: hands-on experience with a cloud technology (AWS/GCP/Azure)
Posted 1 week ago
4.0 - 8.0 years
2 - 5 Lacs
Pune, Haveli
Work from Office
As an experienced member of our Core Banking Base Development / Professional Service Group, you will be responsible for effective microservice development in Scala and delivery of our NextGen transformation / professional services projects/programs.

What You Will Do:
- Adhere to the processes followed for development in the program
- Report status and proactively identify issues to the Tech Lead and management team
- Take personal ownership and accountability for delivering assigned tasks and deliverables within the established schedule
- Facilitate a strong and supportive team environment that enables the team, as well as individual team members, to overcome any political, bureaucratic, and/or resource barriers to participation
- Recommend and implement solutions; be totally hands-on and able to work independently

What You Will Need to Have:
- 4 to 8 years of recent hands-on experience in Scala and the Akka framework
- Technical skillset required:
  - Hands-on experience in Scala development, including the Akka framework
  - Good understanding of Akka Streams
  - Test-driven development
  - Awareness of message brokers
  - Hands-on experience in design and development of microservices
  - Good awareness of event-driven microservices architecture
  - gRPC protocol and Protocol Buffers
  - Hands-on experience with Docker containers
  - Hands-on experience with Kubernetes
  - Awareness of cloud-native applications
  - Jira, Confluence, Ansible, Terraform
  - Good knowledge of cloud platforms (preferably AWS) and their IaaS, PaaS, and SaaS solutions
  - Good knowledge of and hands-on experience with scripting languages like Batch and Bash; hands-on experience with Python would be a plus
  - Knowledge of integration and unit testing and behavior-driven development
  - Good problem-solving skills
  - Good communication skills

What Would Be Great to Have:
- Experience integrating with third-party applications
- Agile knowledge
- Good understanding of configuration management
- Financial industry and core banking integration experience
Posted 1 week ago
1.0 - 3.0 years
3 - 7 Lacs
Chennai
Hybrid
- Strong experience in Python
- Good experience in Databricks
- Experience working with the AWS/Azure cloud platforms
- Experience working with REST APIs and services, and messaging and event technologies
- Experience with ETL or data pipeline build tools
- Experience with streaming platforms such as Kafka
- Demonstrated experience working with large and complex data sets
- Ability to document data pipeline architecture and design
- Experience in Airflow is nice to have
- Experience building complex Delta Lake pipelines (see the sketch below)
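To ground the Delta Lake item above, a minimal Databricks-style PySpark sketch of an idempotent upsert into a Delta table; the paths, table names, and columns are hypothetical:

```python
# Illustrative sketch only: paths, table, and column names are assumptions.
# On Databricks the Delta Lake libraries are preinstalled; elsewhere install delta-spark.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-upsert").getOrCreate()

# Hypothetical incremental extract landed by an upstream ingestion job.
updates = spark.read.json("/mnt/raw/customers_incremental/")

target = DeltaTable.forName(spark, "silver.customers")

# Merge the increment into the Delta table so reruns stay idempotent.
(
    target.alias("t")
    .merge(updates.alias("u"), "t.customer_id = u.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```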
Posted 1 week ago
4.0 - 9.0 years
2 - 6 Lacs
Bengaluru
Work from Office
Roles and Responsibilities:
- 4+ years of experience as a data developer using Python
- Knowledge of Spark/PySpark is preferable but not mandatory
- Azure cloud experience preferred; alternate cloud experience is fine
- Preferred experience with the Azure platform, including Azure Data Lake, Databricks, and Data Factory
- Working knowledge of different file formats such as JSON, Parquet, and CSV
- Familiarity with data encryption and data masking
- Database experience in SQL Server is preferable; experience with NoSQL databases like MongoDB is preferred
- Team player, reliable, self-motivated, and self-disciplined
Posted 1 week ago
1.0 - 3.0 years
2 - 5 Lacs
Chennai
Work from Office
Mandatory Skills: AWS, Python, SQL, Spark, Airflow, Snowflake

Responsibilities:
- Create and manage cloud resources in AWS
- Ingest data from different data sources that expose data using different technologies, such as RDBMS, REST HTTP APIs, flat files, streams, and time series data from various proprietary systems
- Implement data ingestion and processing with the help of big data technologies
- Process and transform data using various technologies such as Spark and cloud services; understand your part of the business logic and implement it using the language supported by the base data platform
- Develop automated data quality checks to make sure the right data enters the platform and to verify the results of the calculations (see the Airflow sketch after this listing)
- Develop an infrastructure to collect, transform, combine, and publish/distribute customer data
- Define process improvement opportunities to optimize data collection, insights, and displays
- Ensure data and results are accessible, scalable, efficient, accurate, complete, and flexible
- Identify and interpret trends and patterns from complex data sets
- Construct a framework utilizing data visualization tools and techniques to present consolidated analytical and actionable results to relevant stakeholders
- Be a key participant in regular Scrum ceremonies with the agile teams
- Be proficient at developing queries, writing reports, and presenting findings
- Mentor junior members and bring best industry practices
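As a rough illustration of the Airflow orchestration and automated data quality checks mentioned above, a minimal DAG sketch written against the Airflow 2.x API; the DAG id, schedule, and task bodies are placeholders, not details from the posting:

```python
# Illustrative sketch only: DAG id, task callables, and schedule are assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest_from_source(**context):
    # Placeholder for pulling a daily extract from an upstream system (RDBMS, REST API, files).
    print("ingesting raw data")


def run_quality_checks(**context):
    # Placeholder for row-count / null-rate checks before data is published downstream.
    print("running data quality checks")


with DAG(
    dag_id="daily_customer_ingest",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest", python_callable=ingest_from_source)
    quality = PythonOperator(task_id="quality_checks", python_callable=run_quality_checks)

    ingest >> quality  # quality checks gate whatever publishes data to the platform
```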
Posted 1 week ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Minimum qualifications:
- Bachelor's degree in Computer Science, Engineering, Mathematics, or a related field, or equivalent practical experience.
- 4 years of experience in developing and troubleshooting data processing algorithms.
- Experience coding with one or more programming languages (e.g., Java, Python) and big data technologies such as Scala, Spark, and Hadoop frameworks.
- Experience with one public cloud provider, such as GCP.

Preferred qualifications:
- Experience architecting, developing software, or building internet-scale production-grade Big Data solutions in virtualized environments.
- Experience in Big Data, information retrieval, data mining, or Machine Learning.
- Experience with data warehouses, technical architectures, infrastructure components, Extract, Transform and Load (ETL) / Extract, Load and Transform (ELT) and reporting/analytic tools, environments, and data structures.
- Experience in building multi-tier applications with modern technologies such as NoSQL, MongoDB, SparkML, and TensorFlow.
- Experience with Infrastructure as Code and Continuous Integration/Continuous Deployment tools like Terraform, Ansible, and Jenkins.
- Understanding of one database type, with the ability to write complex SQL queries.

About The Job
The Google Cloud Platform team helps customers transform and build what's next for their business, all with technology built in the cloud. Our products are developed for security, reliability and scalability, running the full stack from infrastructure to applications to devices and hardware. Our teams are dedicated to helping our customers, including developers, small and large businesses, educational institutions and government agencies, see the benefits of our technology come to life. As part of an entrepreneurial team in this rapidly growing business, you will play a key role in understanding the needs of our customers and help shape how businesses of all sizes use technology to connect with customers, employees and partners.

As a Strategic Cloud Data Engineer, you will guide customers on how to ingest, store, process, analyze, and explore/visualize data on the Google Cloud Platform. You will work on data migrations and modernization projects, and with customers to design data processing systems, develop data pipelines optimized for scaling, and troubleshoot potential platform/product challenges. You will have an understanding of data governance and security controls. You will travel to customer sites to deploy solutions and deliver workshops to educate and empower customers. Additionally, you will work with Product Management and Product Engineering teams to build and constantly drive excellence in our products.

Google Cloud accelerates every organization's ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google's cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.

Responsibilities:
- Interact with stakeholders to translate complex customer requirements into recommendations for appropriate solution architectures and advisory services.
- Engage with technical leads and partners to lead high-velocity migration and modernisation to Google Cloud Platform (GCP).
- Design, migrate/build, and operationalise data storage and processing infrastructure using cloud-native products.
- Develop and implement data quality and governance procedures to ensure the accuracy and reliability of data.
- Take various project requirements and organize them into clear goals and objectives, and create a work breakdown structure to manage internal and external stakeholders.

Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.
Posted 1 week ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job description: Overall, more than 5 years of experience in data projects. Good knowledge of GCP, BigQuery, SQL, and Python; Dataflow skills. Has worked on implementation projects building data pipelines, transformation logic, and data models.

Job Title: GCP Data Engineer
Belongs to: Data Management Engineering
Education: Bachelor of Engineering in any discipline or equivalent

Desired Candidate Profile

Technology / Engineering Expertise:
- 4 years of experience in implementing data solutions using GCP BigQuery and SQL programming
- Proficient in dealing with the data access layer (RDBMS, NoSQL)
- Experience in implementing and deploying big data applications with GCP Big Data Services
- Good to have: SQL skills
- Able to deal with a diverse set of stakeholders
- Proficient in articulation, communication, and presentation
- High integrity, problem-solving skills, learning attitude, team player

Key Responsibilities:
- Implement data solutions using GCP; needs to be familiar with programming in SQL and Python
- Ensure clarity on NFRs and implement these requirements
- Work with the Client Technical Manager by understanding the customer's landscape and IT priorities
- Lead performance engineering and capacity planning exercises for databases

Technology / Engineering Expertise:
- 4 years of experience in implementing data pipelines for data analytics solutions
- Experience in solutions using Google Cloud Dataflow, Apache Beam, and Java programming
- Proficient in dealing with the data access layer (RDBMS, NoSQL)
- Experience in implementing and deploying big data applications with GCP Big Data Services
- Good to have: SQL skills
- Experience with different development methodologies (RUP, Scrum, XP)

Soft skills:
- Able to deal with a diverse set of stakeholders
- Proficient in articulation, communication, and presentation
- High integrity, problem-solving skills, learning attitude, team player

Mandatory Skills: GCP Storage, GCP BigQuery, GCP DataProc, GCP Cloud Composer, GCP DMS, Apache Airflow, Java, Python, Scala, GCP Datastream, Google Analytics Hub, GCP Workflows, GCP Dataform, GCP Datafusion, GCP Pub/Sub, ANSI-SQL, GCP Dataflow, GCP Cloud Pub/Sub, Big Data Hadoop Ecosystem
Posted 1 week ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About Tarento
Tarento is a fast-growing technology consulting company headquartered in Stockholm, with a strong presence in India and clients across the globe. We specialize in digital transformation, product engineering, and enterprise solutions, working across diverse industries including retail, manufacturing, and healthcare. Our teams combine Nordic values with Indian expertise to deliver innovative, scalable, and high-impact solutions. We're proud to be recognized as a Great Place to Work, a testament to our inclusive culture, strong leadership, and commitment to employee well-being and growth. At Tarento, you'll be part of a collaborative environment where ideas are valued, learning is continuous, and careers are built on passion and purpose.

Role Overview
An Azure Data Engineer specializing in Databricks is responsible for designing, building, and maintaining scalable data solutions on the Azure cloud platform, with a focus on leveraging Databricks and related big data technologies. The role involves close collaboration with data scientists, analysts, and software engineers to ensure efficient data processing, integration, and delivery for analytics and business intelligence needs.

Key Responsibilities
- Design, develop, and maintain robust and scalable data pipelines using Azure Databricks, Azure Data Factory, and other Azure services
- Build and optimize data architectures to support large-scale data processing and analytics
- Collaborate with cross-functional teams to gather requirements and deliver data solutions tailored to business needs
- Ensure data quality, integrity, and security across various data sources and pipelines
- Implement data governance, compliance, and best practices for data security (e.g., encryption, RBAC)
- Monitor, troubleshoot, and optimize data pipeline performance, ensuring reliability and scalability
- Document technical specifications, data pipeline processes, and architectural decisions
- Support and troubleshoot data workflows, ensuring consistent data delivery and availability for analytics and reporting
- Automate data tasks and deploy production-ready code using CI/CD practices
- Stay updated with the latest Azure and Databricks features, recommending improvements and adopting new tools as appropriate

Required Skills And Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field
- 5+ years of experience in data engineering, with hands-on expertise in Azure and Databricks environments
- Proficiency in Databricks, Apache Spark, and Spark SQL
- Strong programming skills in Python and/or Scala
- Advanced SQL skills and experience with relational and NoSQL databases
- Experience with ETL processes, data warehousing concepts, and big data technologies (e.g., Hadoop, Kafka)
- Familiarity with Azure services: Azure Data Lake Storage (ADLS), Azure Data Factory, Azure SQL Data Warehouse, Cosmos DB, Azure Stream Analytics, Azure Functions
- Understanding of data modeling, schema design, and data integration best practices
- Strong analytical, problem-solving, and troubleshooting abilities
- Experience with source code control systems (e.g., Git) and technical documentation tools
- Excellent communication and collaboration skills; ability to work both independently and as part of a team

Preferred Skills
- Experience with automation, unit testing, and CI/CD pipelines
- Certifications in Azure Data Engineering or Databricks are advantageous

Soft Skills
- Flexible, self-starter, and proactive in learning and adopting new technologies
- Ability to manage multiple priorities and work to tight deadlines
- Strong stakeholder management and teamwork capabilities
Posted 1 week ago
3.0 - 6.0 years
12 - 22 Lacs
Noida
Work from Office
About CloudKeeper
CloudKeeper is a cloud cost optimization partner that combines the power of group buying & commitments management, expert cloud consulting & support, and an enhanced visibility & analytics platform to reduce cloud cost & help businesses maximize the value from AWS, Microsoft Azure, & Google Cloud. A certified AWS Premier Partner, Azure Technology Consulting Partner, Google Cloud Partner, and FinOps Foundation Premier Member, CloudKeeper has helped 400+ global companies save an average of 20% on their cloud bills, modernize their cloud set-up, and maximize value, all while maintaining flexibility and avoiding any long-term commitments or cost. CloudKeeper hived off from TO THE NEW, a digital technology services company with 2500+ employees and an 8-time GPTW winner.

Position Overview: We are looking for an experienced and driven Data Engineer to join our team. The ideal candidate will have a strong foundation in big data technologies, particularly Spark, and a basic understanding of Scala to design and implement efficient data pipelines. As a Data Engineer at CloudKeeper, you will be responsible for building and maintaining robust data infrastructure, integrating large datasets, and ensuring seamless data flow for analytical and operational purposes.

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines and ETL processes to collect, process, and store data from various sources
- Work with Apache Spark to process large datasets in a distributed environment, ensuring optimal performance and scalability
- Develop and optimize Spark jobs and data transformations using Scala for large-scale data processing
- Collaborate with data analysts and other stakeholders to ensure data pipelines meet business and technical requirements
- Integrate data from different sources (databases, APIs, cloud storage, etc.) into a unified data platform
- Ensure data quality, consistency, and accuracy by building robust data validation and cleansing mechanisms
- Use cloud platforms (AWS, Azure, or GCP) to deploy and manage data processing and storage solutions
- Automate data workflows and tasks using appropriate tools and frameworks
- Monitor and troubleshoot data pipeline performance, optimizing for efficiency and cost-effectiveness
- Implement data security best practices, ensuring data privacy and compliance with industry standards

Required Qualifications:
- 4-6 years of experience as a Data Engineer or in an equivalent role
- Strong experience working with Apache Spark with Scala for distributed data processing and big data handling
- Basic knowledge of Python and its application in Spark for writing efficient data transformations and processing jobs
- Proficiency in SQL for querying and manipulating large datasets
- Experience with cloud data platforms, preferably AWS (e.g., S3, EC2, EMR, Redshift) or other cloud-based solutions
- Strong knowledge of data modeling, ETL processes, and data pipeline orchestration
- Familiarity with containerization (Docker) and cloud-native tools for deploying data solutions
- Knowledge of data warehousing concepts and experience with tools like AWS Redshift, Google BigQuery, or Snowflake is a plus
- Experience with version control systems such as Git
- Strong problem-solving abilities and a proactive approach to resolving technical challenges
- Excellent communication skills and the ability to work collaboratively within cross-functional teams
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we're helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Title And Summary
Software Engineer II - Full Stack Developer

Job Overview
The Applications Development Team is a dedicated collection of self-organizing, interdependent, co-located individuals representing different functional roles with all the necessary skills to turn Product Backlog items into a potentially shippable increment within the Sprint/Iteration. Team members may be developers, testers, analysts, or architects. The team is cross-functional, which means that between all its members they possess sufficient skills to do the work. There is no dictated leadership hierarchy within the team. The role is suitable for a highly skilled Technical Lead with Agile/Scrum experience, working with a team of very experienced developers on enterprise global application projects, and responsible for the creation of a software product as per the definition in the product backlog. Do you have what it takes to provide technical leadership for a Scrum team, including coaching and mentoring? Have you got the skills to be recognized as a senior developer in a Scrum team?

Role: Essential Responsibilities Of The Position
- Work closely with the Solution Architect in designing applications, based on TDD (Test Driven Development)
- Lead the Scrum team on new technology adoption and processes; coach and mentor other developers
- Undertake code reviews of the development team
- Work on POCs and bring knowledge to the team; provide advice and support to other team members
- Estimate the size of backlog items that they are responsible for delivering
- Translate backlog items into engineering design and logical units of work (tasks)
- Evaluate technical feasibility
- Write technical user stories for the backlog
- Implement sprint backlog items
- Write unit tests/functional tests/integration tests as per the definition of done for the Scrum team
- Write and verify code which adheres to the acceptance criteria
- Apply product development best practices as per industry standards
- Support UAT, resolving issues as per business priority
- Post-implementation support and production support
- Undertake regular "brown bag" presentations
- Ensure we move towards a common technical goal

All About You
- Web services & API standards (REST/OAuth/JSON)
- Programming & scripting languages (Java, C++, Scala, JS, Python, Shell)
- Application frameworks (Spring Boot, Node.js, Vert.x)
- Web application frameworks (AngularJS, Flask, Spring)
- Software architectures (micro-services, event-driven, peer-to-peer)
- Application security
- Asynchronous pub-sub and point-to-point messaging systems
- Analytical and problem-solving skills
- Ability to operate effectively independently
- Technical communication (written and oral)

Other Experience Desired
Postgres, pgAdmin, Spring Batch, Apache Kafka, IntelliJ

Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization and, therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must:
- Abide by Mastercard's security policies and practices;
- Ensure the confidentiality and integrity of the information being accessed;
- Report any suspected information security violation or breach, and
- Complete all periodic mandatory security trainings in accordance with Mastercard's guidelines.

R-250507
Posted 1 week ago
0 years
0 Lacs
Mumbai, Maharashtra, India
Remote
AI Engineer
Mumbai, Maharashtra, India
Job Type: Full Time
Workspace: Remote

About the Role

Job Title: AI Engineer
Location: Mumbai
Company: CreditPe

Job Description:
We are looking for an AI Engineer who will work on implementing AI models into production, integrating with existing systems and services, and creating APIs for external customers.

Responsibilities:
1. Implement AI models into production.
2. Integrate AI models with existing systems and services.
3. Create APIs for external customers.
4. Work with data scientists to improve the performance of AI models.

Requirements:
1. Proven experience as an AI Engineer or in a similar role.
2. Understanding of machine learning algorithms and libraries.
3. Proficient in Python and familiar with Scala, Java, or C++.
4. Familiar with machine learning frameworks.
5. Good communication skills.

How to Apply:
Interested candidates can send their resume to careers@creditpeclub.com.

About The Company

Introduction to CreditPe
CreditPe is a cutting-edge Software as a Service (SaaS) platform that specializes in Business-to-Business (B2B) lending for Indian businesses. Our mission is to empower businesses by providing them with easy, fast, and reliable financial services.

What We Do
At CreditPe, we understand the unique challenges faced by startups, Micro, Small, and Medium Enterprises (MSMEs), and larger enterprises in India. That's why we've designed our platform to cater specifically to their needs.

Lending for Startups
We believe in the potential of Indian startups and are committed to supporting them in their growth journey. Our lending solutions for startups are designed to provide them with the capital they need to scale their operations, invest in research and development, and drive innovation.

Lending for MSMEs
MSMEs are the backbone of the Indian economy, and at CreditPe, we're dedicated to helping them thrive. Our lending solutions for MSMEs are tailored to help these businesses manage their cash flow, expand their operations, and reach their full potential.

Lending for Enterprises
For larger enterprises, we offer customized lending solutions designed to meet their unique needs. Whether it's for expanding into new markets, investing in new technology, or funding large-scale projects, our enterprise lending solutions provide the financial support that businesses need to succeed.

Why Choose CreditPe?
With CreditPe, businesses can expect a seamless and efficient lending experience. Our platform uses advanced technology to simplify the lending process, making it faster and more convenient for businesses to access the funds they need. Plus, with our competitive interest rates and flexible repayment options, businesses can manage their finances with ease and confidence. Join us at CreditPe and experience a new standard in business lending.
Posted 1 week ago
0 years
9 - 10 Lacs
Bengaluru
On-site
Ready to shape the future of work? At Genpact, we don't just adapt to change - we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's AI Gigafactory, our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

Inviting applications for the role of Assistant Vice President, AI/ML Lead!

In this role, we are looking for candidates who have relevant years of experience in text mining. The Text Mining Scientist (TMS) is expected to play a pivotal bridging role between enterprise database teams and business/functional resources. At a broad level, the TMS will leverage his/her solutioning expertise to translate the customer's business need into a techno-analytic problem and appropriately work with database teams to bring large-scale text analytic solutions to fruition. The right candidate should have prior experience in developing text mining and NLP solutions using open-source tools.

Responsibilities:
- Develop transformative AI/ML solutions to address our clients' business requirements and challenges
- Project delivery: successful delivery of projects involving data pre-processing, model training and evaluation, and parameter tuning
- Manage stakeholder/customer expectations; project blueprinting and project documentation; creating the project plan
- Understand and research cutting-edge industrial and academic developments in AI/ML with NLP/NLU applications in diverse industries such as CPG, finance, etc.
- Conceptualize, design, build, and develop solution algorithms which demonstrate the minimum required functionality within tight timelines
- Interact with clients to collect, synthesize, and propose requirements and create an effective analytics/text mining roadmap
- Work with digital development teams to integrate and transform these algorithms into production-quality applications
- Do applied research on a wide array of text analytics and machine learning projects, file patents, and publish papers

Qualifications we seek in you!

Minimum Qualifications / Skills:
- MS in Computer Science, Information Systems, Computer Engineering, or Systems Engineering, with relevant experience in text mining / Natural Language Processing (NLP) tools, data science, big data, and algorithms
- Post-graduation in MBA and an undergraduate degree in any engineering discipline, preferably Computer Science, with relevant experience
- Full-cycle experience desirable in at least one large-scale text mining/NLP project, from creating a business use case, text analytics assessment/roadmap, technology and analytics solutioning, through implementation and change management; considerable experience in Hadoop, including development in the map-reduce framework

Technology:
- Open-source text mining paradigms such as NLTK, OpenNLP, OpenCalais, StanfordNLP, GATE, UIMA, Lucene, and cloud-based NLU tools such as DialogFlow and MS LUIS (see the sentiment-analysis sketch after this listing)
- Exposure to statistical toolkits such as R, Weka, S-Plus, Matlab, SAS Text Miner
- Strong core Java experience in large-scale product development and functional knowledge of RDBMSs
- Hands-on programming in the Hadoop ecosystem and concepts in distributed computing
- Very good Python/R programming skills; Java programming skills a plus

Methodology:
- Relevant years of solutioning and consulting experience in verticals such as BFSI and CPG, with hands-on delivery of text analytics on large structured and unstructured data
- A solid foundation in AI methodologies like ML, DL, NLP, neural networks, information retrieval and extraction, NLG, NLU
- Exposure to concepts in Natural Language Processing and statistics, especially in applications such as sentiment analysis, contextual NLP, dependency parsing, parsing, chunking, summarization, etc.
- Demonstrated ability to conduct look-ahead client research with a focus on supplementing and strengthening the client's analytics agenda with newer tools and techniques

Preferred Qualifications / Skills:

Technology:
- Expert-level understanding of NLP, NLU, and machine learning/deep learning methods
- OpenNLP, OpenCalais, StanfordNLP, GATE, UIMA, Lucene, NoSQL
- UI development paradigms that enable text mining insights visualization, e.g., Adobe Flex Builder, HTML5, CSS3
- Linux, Windows, GPU experience
- Spark and Scala for distributed computing
- Deep learning frameworks such as TensorFlow, Keras, Torch, Theano

Methodology:
- Social network modeling paradigms, tools, and techniques
- Text analytics using Natural Language Processing tools such as Support Vector Machines and social network analysis
- Previous experience with text analytics implementations using open-source packages and/or SAS Text Miner
- Ability to prioritize, a consultative mindset, and time management skills

Why join Genpact?
- Be a transformation leader: work at the cutting edge of AI, automation, and digital innovation
- Make an impact: drive change for global enterprises and solve business challenges that matter
- Accelerate your career: get hands-on experience, mentorship, and continuous learning opportunities
- Work with the best: join 140,000+ bold thinkers and problem-solvers who push boundaries every day
- Thrive in a values-driven culture: our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress

Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: up. Let's build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws.
Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Job: Assistant Vice President
Primary Location: India-Bangalore
Schedule: Full-time
Education Level: Master's / Equivalent
Job Posting: Jun 9, 2025, 2:47:59 AM
Unposting Date: Ongoing
Master Skills List: Digital
Job Category: Full Time
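To make the open-source NLP tooling listed above concrete, a minimal Python sketch of lexicon-based sentiment scoring with NLTK's VADER analyzer; the sample sentences are invented for illustration and are not from the posting:

```python
# Illustrative sketch only: the sample sentences are made up.
# Requires: pip install nltk
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download

analyzer = SentimentIntensityAnalyzer()

reviews = [
    "The onboarding process was quick and the support team was excellent.",
    "Charges were applied twice and nobody responded to my complaint.",
]

for text in reviews:
    scores = analyzer.polarity_scores(text)  # neg / neu / pos / compound scores
    label = "positive" if scores["compound"] >= 0 else "negative"
    print(f"{label:8s} {scores['compound']:+.2f}  {text}")
```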
Posted 1 week ago
7.0 years
5 - 7 Lacs
Bengaluru
On-site
Company Description
The Bosch Group is a leading global supplier of technology and services in the areas of Automotive Technology, Industrial Technology, Consumer Goods, Energy and Building Technology. In India, the Group operates through nine companies with a combined strength of over 30,000 associates, which includes around 14,000 research and development associates.

Bosch Automotive Electronics India Pvt. Ltd. (RBAI) is a 100% subsidiary of Robert Bosch GmbH. RBAI was established at the right time to cater to the demands of the future Indian market. Established in 2009, it started out with manufacturing Electronic Control Units, on average adding one new product every year: Antenna and Immobilizer in 2011, a wide range of BCMs since 2012, Electronic power steering control units from 2013, and Voltage regulator in 2014. Over the last 7 years of its existence, the company has grown over 44% CAGR, which is remarkable considering it was established during the peak of recession. The product portfolio of Bosch Automotive Electronics Pvt. Ltd. covers both Automotive and Non-Automotive Business, catering to local as well as global demands. The products from RBAI fulfil 94% of the local demand. Apart from this, 72% of our sales are towards exports covering most of the global market. We invite promising and dynamic professionals for a long-term and rewarding career with Bosch.

Job Description
Job Overview: As a Scala Developer in our team, you work with large-scale manufacturing data coming from our globally distributed plants. You will focus on building efficient, scalable, data-driven applications that, among other use cases, connect IoT devices, pre-process, standardize, or enrich data, feed ML models, or generate alerts for shopfloor operators. The data sets produced by these applications, whether data streams or data at rest, need to be highly available, reliable, consistent, and quality-assured so that they can serve as input to a wide range of other use cases and downstream applications. We run these applications on a Kubernetes-based edge data platform in our plants. The platform is currently in ramp-up phase, so apart from building applications, you will also contribute to scaling the platform, including topics such as automation and observability. Finally, you are expected to interact with customers and other technical teams, e.g. for requirements clarification and definition of data models.

Qualifications
- Bachelor's degree in Computer Science, Computer Engineering, a relevant technical field, or equivalent; Master's degree preferred
- 5 years of experience in software engineering and/or backend development

Additional Information
Key Competencies / Required Skills:
- Develop, deploy, and operate data processing applications running on Kubernetes, written in Scala (we leverage Kafka for messaging, KStreams and ZIO for data processing, PostgreSQL and S3 for storage)
- Contribute to the ramp-up of our edge data processing platform, including topics such as deployment automation, building CI/CD pipelines (we use GitHub Actions + ArgoCD), and evaluation of platform extensions
- Experience developing software in a JVM-based language; Scala preferred, but Java, Kotlin, or Clojure also accepted
- Experience with data-driven backend software development
- Experience with object-oriented and functional programming principles
- Deep understanding of distributed systems for data storage and processing (e.g. Kafka ecosystem, Flink, HDFS, S3)
- Experience with RDBMS (e.g. Postgres)
- (Optional) prior experience with functional stream processing libraries such as fs2, zio-streams, or Akka/Pekko Streams
- Excellent software engineering skills (i.e., data structures & algorithms, software design)
- Excellent problem-solving, investigative, and troubleshooting skills
- Experience with CI/CD tools such as Jenkins or GitHub Actions
- Comfortable with Linux and scripting languages for workflow automation
- Discuss requirements with stakeholders such as customers or up- and downstream development teams
- Derive design proposals including meaningful data models
- Engage in design discussions with team members, architects, and technical leadership
- Review code contributed by other team members
- Depending on experience, mentor junior team members

Soft Skills:
- Good communication skills
- Ability to coach and guide young data engineers
- Decent level of English as a business language
Posted 1 week ago
0 years
0 Lacs
Bengaluru
On-site
Req ID: 326913

NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now.

We are currently seeking a Data Engineer to join our team in Bangalore, Karnātaka (IN-KA), India (IN). I'm currently looking for a skilled Data Engineer to join our team! If you're passionate about building data pipelines, optimizing ETL processes, and working with cutting-edge technologies, this could be a great fit for you.

Tech Stack: Terraform on AWS, along with Spark and Scala
✅ Strong SQL & Python skills
✅ Experience with cloud platforms (AWS/Azure/GCP)
✅ Prior experience in the banking/finance domain is a plus!

About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com

NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here. If you'd like more information on your EEO rights under the law, please click here. For Pay Transparency information, please click here.
Posted 1 week ago
7.0 years
0 Lacs
Bengaluru
On-site
Company Description
The Bosch Group is a leading global supplier of technology and services in the areas of Automotive Technology, Industrial Technology, Consumer Goods, Energy and Building Technology. In India, the Group operates through nine companies with a combined strength of over 30,000 associates, which includes around 14,000 research and development associates.

Bosch Automotive Electronics India Pvt. Ltd. (RBAI) is a 100% subsidiary of Robert Bosch GmbH. RBAI was established at the right time to cater to the demands of the future Indian market. Established in 2009, it started out with manufacturing Electronic Control Units, on average adding one new product every year: Antenna and Immobilizer in 2011, a wide range of BCMs since 2012, Electronic power steering control units from 2013, and Voltage regulator in 2014. Over the last 7 years of its existence, the company has grown over 44% CAGR, which is remarkable considering it was established during the peak of recession. The product portfolio of Bosch Automotive Electronics Pvt. Ltd. covers both Automotive and Non-Automotive Business, catering to local as well as global demands. The products from RBAI fulfil 94% of the local demand. Apart from this, 72% of our sales are towards exports covering most of the global market. We invite promising and dynamic professionals for a long-term and rewarding career with Bosch.

Job Description
As a Data Engineer in Operations, you will work on the operational management, monitoring, and support of scalable data pipelines running in Azure Databricks, Hadoop, and Radium. You will ensure the reliability, performance, and availability of data workflows and maintain production environments. You will collaborate closely with data engineers, architects, and platform teams to implement best practices in data pipeline operations and incident management to ensure data availability and data completeness.

Primary responsibilities:
- Operational support and incident management for Azure Databricks, Hadoop, and Radium data pipelines
- Collaborating with data engineering and platform teams to define and enforce operational standards, SLAs, and best practices
- Designing and implementing monitoring, alerting, and logging solutions for Azure Databricks pipelines
- Coordinating with central teams to ensure compliance with organizational operational standards and security policies
- Developing and maintaining runbooks, SOPs, and troubleshooting guides for pipeline issues
- Managing the end-to-end lifecycle of data pipeline incidents, including root cause analysis and remediation
- Overseeing pipeline deployments, rollbacks, and change management using CI/CD tools such as Azure DevOps
- Ensuring data quality and validation checks are effectively monitored in production
- Working closely with platform and infrastructure teams to address pipeline and environment-related issues
- Providing technical feedback and mentoring junior operations engineers
- Conducting peer reviews of operational scripts and automation code
- Automating manual operational tasks using Scala and Python scripts
- Managing escalations and coordinating critical production issue resolution
- Participating in post-mortem reviews and continuous improvement initiatives for data pipeline operations

Qualifications
- Bachelor's degree in Computer Science, Computer Engineering, or a relevant technical field
- 3+ years' experience in data engineering, ETL tools, and working with large-scale data sets in operations
- Proven experience with cloud platforms, particularly Azure Databricks
- Minimum 3 years of hands-on experience working with distributed cluster environments (e.g., Spark clusters)
- Strong operational experience in managing and supporting data pipelines in production environments

Additional Information
Key Competencies:
- Experience in Azure Databricks operations or data pipeline support
- Understanding of Scala/Python programming for troubleshooting in Spark environments
- Hands-on experience with Delta Lake, Azure Data Lake Storage (ADLS), DBFS, and Azure Data Factory (ADF)
- Solid understanding of distributed data processing frameworks and streaming data operations
- Understanding and hands-on usage of Kafka as a message broker
- Experience with Azure SQL Database and cloud-based data services
- Strong skills in monitoring tools like Splunk, ELK, and Grafana, alerting frameworks, and incident management
- Experience working with CI/CD pipelines using Azure DevOps or equivalent
- Excellent problem-solving, investigative, and troubleshooting skills in large-scale data environments
- Experience defining operational SLAs and implementing proactive monitoring solutions
- Familiarity with data governance, security, and compliance best practices in cloud data platforms
- Strong communication skills and ability to work independently under pressure

Soft Skills:
- Good communication skills, extensive usage of MS Teams
- Experience in using Azure Boards and JIRA
- Decent level of English as a business language
Posted 1 week ago
4.0 years
10 - 17 Lacs
India
On-site
We are looking for an experienced Big Data Developer (immediate joiners only) with a strong background in PySpark, Python/Scala, Spark, SQL, and the Hadoop ecosystem. The ideal candidate should have over 4 years of experience and be ready to join immediately. This role requires hands-on expertise in big data technologies and the ability to design and implement robust data processing solutions.

Key Responsibilities:
- Design, develop, and optimize large-scale data processing pipelines using PySpark
- Work with various Apache tools and frameworks (such as Hadoop, Hive, HDFS, etc.) to ingest, transform, and manage large datasets
- Ensure high performance and reliability of ETL jobs in production
- Collaborate with Data Scientists, Analysts, and other stakeholders to understand data needs and deliver robust data solutions
- Implement data quality checks and data lineage tracking for transparency and auditability (see the sketch after this listing)
- Work on data ingestion, transformation, and integration from multiple structured and unstructured sources
- Leverage Apache NiFi for automated and repeatable data flow management (if applicable)
- Write clean, efficient, and maintainable code in Python and Java
- Contribute to architectural decisions, performance tuning, and scalability planning

Required Skills:
- 5-7 years of experience
- Strong hands-on experience with PySpark for distributed data processing
- Deep understanding of the Apache ecosystem (Hadoop, Hive, Spark, HDFS, etc.)
- Solid grasp of data warehousing, ETL principles, and data modeling
- Experience working with large-scale datasets and performance optimization
- Familiarity with SQL and NoSQL databases
- Proficiency in Python and basic to intermediate knowledge of Java
- Experience in using version control tools like Git and CI/CD pipelines

Nice-to-Have Skills:
- Working experience with Apache NiFi for data flow orchestration
- Experience in building real-time streaming data pipelines
- Knowledge of cloud platforms like AWS, Azure, or GCP
- Familiarity with containerization tools like Docker or orchestration tools like Kubernetes

Soft Skills:
- Strong analytical and problem-solving skills
- Excellent communication and collaboration abilities
- Self-driven, with the ability to work independently and as part of a team

Education: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field

Job Type: Full-time
Pay: ₹1,000,000.00 - ₹1,700,000.00 per year
Benefits: Health insurance
Schedule: Day shift
Supplemental Pay: Performance bonus, yearly bonus
Ability to commute/relocate: Basavanagudi, Bengaluru, Karnataka: reliably commute or planning to relocate before starting work (Preferred)
Application Question(s): Are you ready to join within 15 days? What is your current CTC?
Experience: Python: 4 years (Preferred); PySpark: 4 years (Required); Data warehouse: 4 years (Required)
Work Location: In person
Application Deadline: 12/06/2025
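As one way to read the data quality responsibility above, a minimal PySpark sketch of pre-publication checks that fail the job on bad input; the dataset path, columns, and thresholds are assumptions, not details from the posting:

```python
# Illustrative sketch only: dataset path, columns, and thresholds are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()

orders = spark.read.parquet("/data/staging/orders/")

total = orders.count()
null_ids = orders.filter(F.col("order_id").isNull()).count()
dupes = total - orders.dropDuplicates(["order_id"]).count()

# Fail the job loudly instead of letting bad data flow downstream.
if total == 0:
    raise ValueError("staging dataset is empty")
if null_ids > 0:
    raise ValueError(f"{null_ids} rows have a null order_id")
if dupes / total > 0.01:
    raise ValueError(f"duplicate ratio {dupes / total:.2%} exceeds the 1% threshold")

print(f"quality checks passed for {total} rows")
```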
Posted 1 week ago
5.0 years
5 - 7 Lacs
Bengaluru
On-site
Bengaluru Office, India Professional Services/Full time/Onsite As an ETL Developer for the Data and Analytics team, you will work within a Professional Services team to support our customer’s data platform on Guidewire Cloud. You will also support the development of new tooling and methodology to streamline our data processes. Job Description You will work with our customers, partners, and other Guidewire team members to deliver successful data migration and data integration programs utilizing our custom migration tools. You will utilize best practices for design, development and delivery of customer projects. You will knowledge with the wider Guidewire Data and Analytics team. One of our principles is to have fun while we deliver, so this role will need to keep the delivery process fun and engaging for the team in collaboration with the broader organization. Given the dynamic nature of the work in the Data and Analytics team, we are looking for decisive, highly-skilled technical problem solvers who can bring their array of experience working in previous data streaming related roles. You will cooperate closely with teams located around the world. Key Responsibilities: Deliver data access projects for customers on time and effectively Work with the data team to improve processes and methodology Create new tooling to streamline data processing when called upon or when the opportunity presents itself Systematic problem-solving approach, coupled with a sense of ownership and drive Ability to work independently in a fast-paced Agile environment Qualifications: Bachelor’s or Master’s Degree in Computer Science, or equivalent level of demonstrable professional competency, and 5 years + in delivery type role Familiarity with functional programming concepts. Experience with the Scala programming language. Experience with Apache Spark for big data processing. Familiarity with data processing and ETL (Extract, Transform, Load) and ELT (Extract, Load, and Transform) concepts. Experience working with relational and/or NoSQL databases Experience working with different cloud platforms (such as AWS, Azure, Snowflake, Google Cloud, etc.) Experience developing and deploying production REST APIs. Experience working with customer teams to understand business objectives and functional requirements. Effective leadership, interpersonal, and communication skills. Ability to work independently and within a team. Flexibility to do shift work as needed (aligning to US colleagues/customers). Nice to have: Insurance industry experience Experience with the Guidewire Data Platform Interested in this position? About Guidewire Guidewire is the platform P&C insurers trust to engage, innovate, and grow efficiently. We combine digital, core, analytics, and AI to deliver our platform as a cloud service. More than 540+ insurers in 40 countries, from new ventures to the largest and most complex in the world, run on Guidewire. As a partner to our customers, we continually evolve to enable their success. We are proud of our unparalleled implementation track record with 1600+ successful projects, supported by the largest R&D team and partner ecosystem in the industry. Our Marketplace provides hundreds of applications that accelerate integration, localization, and innovation. For more information, please visit www.guidewire.com and follow us on Twitter: @Guidewire_PandC. Guidewire Software, Inc. is proud to be an equal opportunity and affirmative action employer. 
We are committed to an inclusive workplace, and believe that a diversity of perspectives, abilities, and cultures is a key to our success. Qualified applicants will receive consideration without regard to race, color, ancestry, religion, sex, national origin, citizenship, marital status, age, sexual orientation, gender identity, gender expression, veteran status, or disability. All offers are contingent upon passing a criminal history and other background checks where it's applicable to the position.
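As a rough illustration of the functional, Spark-based ETL style this role calls for, here is a small Scala sketch; the claim schema, status values, and aggregation are hypothetical examples, not Guidewire Data Platform structures.

```scala
// Illustrative sketch only: the Claim schema, table contents, and status values are hypothetical.
import org.apache.spark.sql.{Dataset, SparkSession}

final case class Claim(claimId: String, policyId: String, amount: BigDecimal, status: String)
final case class ClaimSummary(policyId: String, openClaims: Long, totalAmount: BigDecimal)

object ClaimSummaryJob {
  // A pure, testable transformation: Dataset in, Dataset out (functional style).
  def summarize(claims: Dataset[Claim])(implicit spark: SparkSession): Dataset[ClaimSummary] = {
    import spark.implicits._
    claims
      .filter(_.status == "OPEN")            // typed, functional filter
      .groupByKey(_.policyId)                // group by a derived key
      .mapGroups { (policyId, group) =>
        val claimList = group.toList
        ClaimSummary(policyId, claimList.size.toLong, claimList.map(_.amount).sum)
      }
  }
}
```

Keeping the transformation a pure function of Dataset in, Dataset out makes it easy to unit-test independently of any ingest or load step, which suits both ETL and ELT flows.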
Posted 1 week ago
3.0 - 5.0 years
4 - 6 Lacs
Ahmedabad
On-site
About the Role: Grade Level (for internal use): 09
S&P Global Market Intelligence
The Role: Software Developer II (.NET Backend Developer) Grade (relevant for internal applicants only): 9 The Location: Ahmedabad, Gurgaon, Hyderabad
The Team: S&P Global Market Intelligence, a best-in-class sector-focused news and financial information provider, is looking for a Software Developer to join our Software Development team in our India offices. This is an opportunity to work on a self-managed team to maintain, update, and implement processes utilized by other teams. Coordinate with stakeholders to design innovative functionality in existing and future applications. Work across teams to enhance the flow of our data.
What's in it for you: This is the place to hone your existing skills while having the chance to be exposed to fresh and divergent technologies. Exposure to the latest, cutting-edge technologies across the full-stack ecosystem. Opportunity to grow personally and professionally. Exposure to working on AWS Cloud solutions will be an added advantage.
Responsibilities: Identify, prioritize, and execute tasks in an Agile software development environment. Develop solutions to support key business needs. Engineer components and common services based on standard development models, languages, and tools. Produce system design documents and participate actively in technical walkthroughs. Demonstrate a strong sense of ownership and responsibility with release goals; this includes understanding requirements, technical specifications, design, architecture, implementation, unit testing, builds/deployments, and code management. Build and maintain the environment for speed, accuracy, consistency, and uptime. Collaborate with team members across the globe. Interface with users, business analysts, quality assurance testers, and other teams as needed.
What We're Looking For: Basic Qualifications: Bachelor's/Master's degree in Computer Science, Information Systems, or equivalent. 3-5 years of experience. Solid experience with building processes; debugging, refactoring, and enhancing existing code, with an understanding of performance and scalability. Competency in C#, .NET, .NET Core. Experience with DevOps practices and modern CI/CD deployment models using Jenkins. Experience supporting production environments. Knowledge of T-SQL and MS SQL Server. Exposure to Python/Scala/AWS technologies is a plus. Exposure to React/Angular is a plus.
Preferred Qualifications: Exposure to DevOps practices and CI/CD pipelines such as Azure DevOps or GitHub Actions. Familiarity with automated unit testing is advantageous. Exposure to working on AWS Cloud solutions will be an added advantage.
About S&P Global Market Intelligence: At S&P Global Market Intelligence, a division of S&P Global, we understand the importance of accurate, deep, and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction. For more information, visit www.spglobal.com/marketintelligence.
What's In It For You? Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day.
We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values: Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. ----------------------------------------------------------- Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. 
If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ----------------------------------------------------------- 20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning) Job ID: 316163 Posted On: 2025-06-09 Location: Ahmedabad, Gujarat, India
Posted 1 week ago
0 years
0 Lacs
Thiruvananthapuram, Kerala, India
Remote
Brief Description: The Cloud Data Engineer will play a critical implementation role on the Data Engineering and Data Products team and be responsible for data pipeline solution design and development, troubleshooting, and optimization tuning on the next-generation data and analytics platform being developed with leading-edge big data technologies in a highly secure cloud infrastructure. The Cloud Data Engineer will serve as a liaison to platform user groups, ensuring successful implementation of capabilities on the new platform.
Data Engineer Responsibilities: Deliver end-to-end data and analytics capabilities, including data ingest, data transformation, data science, and data visualization, in collaboration with Data and Analytics stakeholder groups. Design and deploy databases and data pipelines to support analytics projects. Develop scalable and fault-tolerant workflows. Clearly document issues, solutions, findings, and recommendations to be shared internally and externally. Learn and apply tools and technologies proficiently, including: Languages: Python, PySpark, ANSI SQL, Python ML libraries; Frameworks/Platforms: Spark, Snowflake, Airflow, Hadoop, Kafka; Cloud Computing: AWS; Tools/Products: PyCharm, Jupyter, Tableau, Power BI. Optimize performance for queries and dashboards. Develop and deliver clear, compelling briefings to internal and external stakeholders on findings, recommendations, and solutions. Analyze client data and systems to determine whether requirements can be met. Test and validate data pipelines, transformations, datasets, reports, and dashboards built by the team. Develop and communicate solution architectures and present solutions to both business and technical stakeholders. Provide end-user support to other data engineers and analysts.
Candidate Requirements: Expert experience in the following (should have / good to have): SQL, Python, PySpark, Python ML libraries; other programming languages (R, Scala, SAS, Java, etc.) are a plus. Data and analytics technologies including SQL/NoSQL/graph databases, ETL, and BI. Knowledge of CI/CD and related tools such as GitLab, AWS CodeCommit, etc. AWS services including EMR, Glue, Athena, Batch, Lambda, CloudWatch, DynamoDB, EC2, CloudFormation, IAM, and EDS. Exposure to Snowflake and Airflow. Solid scripting skills (e.g., bash/shell scripts, Python). Proven work experience in the following: data streaming technologies; big data technologies including Hadoop, Spark, Hive, Teradata, etc.; Linux command-line operations; networking knowledge (OSI network layers, TCP/IP, virtualization). The candidate should be able to lead the team, communicate with the business, and gather and interpret business requirements. Experience with agile delivery methodologies using Jira or similar tools. Experience working with remote teams. AWS Solutions Architect / Developer / Data Analytics Specialty certifications; Professional certification is a plus. Bachelor's Degree in Computer Science or a relevant field; Master's Degree is a plus.
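To illustrate the kind of query and pipeline performance tuning this role mentions, here is a minimal Spark sketch (written in Scala for consistency with the other examples on this page) of a broadcast join; the S3 paths, join key, and size assumptions are hypothetical.

```scala
// Illustrative sketch only: paths, column names, and data sizes are hypothetical.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.broadcast

object EnrichEventsJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("enrich-events").getOrCreate()

    val events = spark.read.parquet("s3://example-bucket/events/")      // large fact data
    val users  = spark.read.parquet("s3://example-bucket/dim_users/")   // small dimension

    // Broadcasting the small dimension avoids shuffling the large side;
    // coalescing before the write keeps output file counts manageable.
    val enriched = events
      .join(broadcast(users), Seq("user_id"), "left")
      .coalesce(64)

    enriched.write.mode("append").parquet("s3://example-bucket/enriched_events/")
    spark.stop()
  }
}
```

Broadcasting the small dimension table avoids shuffling the large fact table, which is often the single biggest win when tuning join-heavy pipelines.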
Posted 1 week ago
2.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Description: Are you interested in applying your strong quantitative analysis and big data skills to world-changing problems? Are you interested in driving the development of methods, models, and systems for capacity planning, transportation, and the fulfillment network? If so, then this is the job for you. Our team is responsible for creating core analytics tech capabilities, platform development, and data engineering. We develop scalable analytics applications and research modeling to optimize operational processes. We standardize and optimize data sources and visualization efforts across geographies, and build up and maintain the online BI services and data mart. You will work with professional software development managers, data engineers, scientists, business intelligence engineers, and product managers using rigorous quantitative approaches to ensure high-quality data tech products for our customers around the world, including India, Australia, Brazil, Mexico, Singapore, and the Middle East. Amazon is growing rapidly, and because we are driven by faster delivery to customers, a more efficient supply chain network, and lower cost of operations, our main focus is the development of strategic models and automation tools fed by our massive amounts of available data. You will be responsible for building these models/tools that improve the economics of Amazon's worldwide fulfillment networks in emerging countries as Amazon increases the speed and decreases the cost to deliver products to customers. You will identify and evaluate opportunities to reduce variable costs by improving fulfillment center processes, transportation operations and scheduling, and the execution of operational plans. You will also improve the efficiency of capital investment by helping the fulfillment centers improve storage utilization and the effective use of automation. Finally, you will help create the metrics to quantify improvements to the fulfillment costs (e.g., transportation and labor costs) resulting from the application of these optimization models and tools.
Major Responsibilities Include: Translating business questions and concerns into specific analytical questions that can be answered with available data using BI tools, and producing the required data when it is not available. Applying statistical and machine learning methods to specific business problems and data. Creating global standard metrics across regions and performing benchmark analysis. Ensuring data quality throughout all stages of acquisition and processing, including such areas as data sourcing/collection, ground truth generation, normalization, transformation, cross-lingual alignment/mapping, etc. Communicating proposals and results in a clear manner backed by data and coupled with actionable conclusions to drive business decisions. Collaborating with colleagues from multidisciplinary science, engineering, and business backgrounds. Developing efficient data querying and modeling infrastructure. Managing your own process: prioritizing and executing high-impact projects, triaging external requests, and ensuring projects are delivered on time. Utilizing code (Python, R, Scala, etc.) for analyzing data and building statistical models.
Basic Qualifications: 2+ years of data scientist experience. 3+ years of experience with data querying languages (e.g., SQL), scripting languages (e.g., Python), or statistical/mathematical software (e.g., R, SAS, Matlab, etc.). 3+ years of experience with machine learning/statistical modeling, data analysis tools and techniques, and the parameters that affect their performance. Experience applying theoretical models in an applied environment.
Preferred Qualifications: Experience in Python, Perl, or another scripting language. Experience in an ML or data scientist role at a large technology company.
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner. Company - ATSPL - Telangana Job ID: A3003398
Posted 1 week ago
1.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Description: Amazon is a place where data drives most of our decision-making. The Analytics, Operations & Programs (AOP) team is looking for a dynamic data engineer who is innovative, a strong problem solver, and able to lead the implementation of the analytical data infrastructure that will guide decision-making. As a Data Engineer, you think like an entrepreneur, constantly innovating and driving positive change, but more importantly, you consistently deliver mind-boggling results. You're a leader who uses both quantitative and qualitative methods to get things done. And on top of it all, you're someone who wonders "What if?" and then seeks out the solution. This position offers exceptional opportunities to grow your technical and non-technical skills. You have the opportunity to really make a difference to our business by inventing, enhancing, and building world-class systems, delivering results, and working on exciting and challenging projects. As a Data Engineer, you are responsible for analyzing large amounts of business data, solving real-world problems, and developing metrics and business cases that will enable us to continually delight our customers worldwide. This is done by leveraging data from various platforms such as Jira, Portal, and Salesforce. You will work with a team of Product Managers, Software Engineers, and Business Intelligence Engineers to automate and scale the analysis, and to make the data more actionable to manage business at scale. You will own many large datasets and implement new data pipelines that feed into or from critical data systems at Amazon. You must be able to prioritize and work well in an environment with competing demands. Successful candidates will bring strong technical abilities combined with a passion for delivering results for customers, internal and external. This role requires a high degree of ownership and a drive to solve some of the most challenging data and analytics problems in retail. Candidates must have a demonstrated ability to manage large-scale data modeling projects, identify requirements and tools, and build data warehousing solutions that are explainable and scalable. In addition to the technical skills, a successful candidate will possess strong written and verbal communication skills and a high intellectual curiosity, with the ability to learn new concepts/frameworks and technology rapidly as changes arise.
Key job responsibilities: Design, implement, and support an analytical data infrastructure. Manage AWS resources including EC2, EMR, S3, Glue, Redshift, etc. Interface with other technology teams to extract, transform, and load data from a wide variety of data sources using SQL and AWS big data technologies. Explore and learn the latest AWS technologies to provide new capabilities and increase efficiency. Collaborate with Data Scientists and Business Intelligence Engineers (BIEs) to recognize and help adopt best practices in reporting and analysis. Help continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers. Maintain internal reporting platforms/tools, including troubleshooting and development. Interact with internal users to establish and clarify requirements in order to develop report specifications. Work with Engineering partners to help shape and implement the development of BI infrastructure, including data warehousing, reporting, and analytics platforms. Contribute to the development of the BI tools, skills, culture, and impact.
Write advanced SQL queries and Python code to develop solutions.
A day in the life: This role requires you to live at the intersection of data, software, and analytics. We leverage a comprehensive suite of AWS technologies, with key tools including S3, Redshift, DynamoDB, Lambda, APIs, and Glue. You will drive the development process from design to release: managing data ingestion from heterogeneous data sources, with automated data quality checks; creating scalable data models for effective data processing, storage, retrieval, and archiving; using scripting for automation and tool development that is scalable, reusable, and maintainable; providing infrastructure for self-serve analytics and science use cases; and using industry best practices in building CI/CD pipelines.
About the team: The AOP (Analytics Operations and Programs) team's mission is to standardize BI and analytics capabilities and reduce repeat analytics/reporting/BI workload for operations across the IN, AU, BR, MX, SG, AE, EG, and SA marketplaces. AOP is responsible for providing visibility into operations performance and implementing programs to improve network efficiency and reduce defects. The team has a diverse mix of strong engineers, analysts, and scientists who champion customer obsession. We enable operations to make data-driven decisions by developing near real-time dashboards, self-serve dive-deep capabilities, and advanced analytics capabilities. We identify and implement data-driven metric improvement programs in collaboration (co-owning) with Operations teams.
Basic Qualifications: 1+ years of data engineering experience. Experience with data modeling, warehousing, and building ETL pipelines. Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala). Experience with one or more scripting languages (e.g., Python, KornShell).
Preferred Qualifications: Experience with big data technologies such as Hadoop, Hive, Spark, and EMR. Experience with an ETL tool such as Informatica, ODI, SSIS, BODI, DataStage, etc.
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner. Company - ASSPL - Karnataka Job ID: A2904529
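The "automated data quality checks" mentioned above can be as simple as failing a batch when key columns are null or duplicated. A minimal Spark-with-Scala sketch follows; the dataset path, key column, and thresholds are hypothetical, not part of the posting.

```scala
// Illustrative sketch only: the dataset path, key column, and thresholds are hypothetical.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object IngestQualityCheck {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("ingest-quality-check").getOrCreate()

    val batch = spark.read.parquet("s3://example-bucket/ingest/orders/dt=2025-06-01/")

    // Simple automated checks: null rate on a key column and duplicate keys.
    val total      = batch.count()
    val nullKeys   = batch.filter(col("order_id").isNull).count()
    val duplicates = total - batch.dropDuplicates("order_id").count()

    val nullRate = if (total == 0) 1.0 else nullKeys.toDouble / total
    require(nullRate < 0.01, s"order_id null rate too high: $nullRate")
    require(duplicates == 0, s"found $duplicates duplicate order_id values")

    println(s"Quality check passed for $total rows")
    spark.stop()
  }
}
```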
Posted 1 week ago
Scala is a popular programming language that is widely used in India, especially in the tech industry. Job seekers looking for opportunities in Scala can find a variety of roles across different cities in the country. In this article, we will dive into the Scala job market in India and provide valuable insights for job seekers.
India's major tech hubs are known for their thriving tech ecosystems and have a high demand for Scala professionals.
The salary range for Scala professionals in India varies based on experience levels. Entry-level Scala developers can expect to earn around INR 6-8 lakhs per annum, while experienced professionals with 5+ years of experience can earn upwards of INR 15 lakhs per annum.
In the Scala job market, a typical career path may look like: Junior Developer → Scala Developer → Senior Developer → Tech Lead.
As professionals gain more experience and expertise in Scala, they can progress to higher roles with increased responsibilities.
In addition to Scala expertise, employers often look for candidates with the following skills: Java, Spark, Akka, Play Framework, and functional programming concepts.
Having a good understanding of these related skills can enhance a candidate's profile and increase their chances of landing a Scala job.
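As a quick refresher on the functional programming concepts interviewers commonly probe, here is a small, self-contained Scala sketch covering immutability, higher-order functions, Option, and pattern matching (the domain model is invented purely for illustration).

```scala
// A minimal sketch of common functional concepts: immutability, higher-order
// functions, Option, and pattern matching. The Order model is hypothetical.
object FunctionalBasics extends App {
  final case class Order(id: Int, amount: Double, coupon: Option[String])

  val orders = List(
    Order(1, 120.0, Some("WELCOME10")),
    Order(2, 80.0, None),
    Order(3, 200.0, Some("VIP20"))
  )

  // Higher-order functions over an immutable collection.
  val totalDiscounted = orders
    .filter(_.amount > 100)
    .map(o => o.coupon match {
      case Some(_) => o.amount * 0.9   // pattern match on Option
      case None    => o.amount
    })
    .sum

  println(f"Total for large orders after discounts: $totalDiscounted%.2f")
}
```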
Interview questions for Scala roles typically cover language fundamentals, functional programming concepts, collections, and experience with the related frameworks listed above.
As you explore Scala jobs in India, remember to showcase your expertise in Scala and related skills during interviews. Prepare well, stay confident, and you'll be on your way to a successful career in Scala. Good luck!