3.0 - 6.0 years
20 - 25 Lacs
Bengaluru
Hybrid
Join us as a Data Engineer II in Bengaluru! Build scalable data pipelines using Python, SQL, AWS, Airflow, and Kafka. Drive real-time & batch data systems across analytics, ML, and product teams. A hybrid work option is available. Required Candidate profile 3+ yrs in data engineering with strong Python, SQL, AWS, Airflow, Spark, Kafka, Debezium, Redshift, ETL & CDC experience. Must know data lakes, warehousing, and orchestration tools.
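For readers unfamiliar with the stack this posting names, here is a minimal, illustrative Airflow 2.x sketch of a daily batch pipeline of the extract-transform-load shape described above. The DAG id, task names, and callables are hypothetical placeholders, not details from the employer.

```python
# Illustrative only: a minimal daily batch DAG (Airflow 2.4+ "schedule" style).
# Table/bucket names and callables are hypothetical placeholders.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_orders(**context):
    # Pull the previous day's records from the source system (stub).
    print("extracting orders for", context["ds"])


def load_to_warehouse(**context):
    # Copy the transformed batch into the warehouse (stub).
    print("loading batch for", context["ds"])


with DAG(
    dag_id="orders_daily_batch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
    load = PythonOperator(task_id="load_to_warehouse", python_callable=load_to_warehouse)
    extract >> load
```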
Posted 1 week ago
4.0 - 6.0 years
10 - 20 Lacs
Noida
Hybrid
Designation: Senior Software Engineer/ Software Engineer - Data Engineering Location: Noida Experience: 4 -6 years Job Summary/ Your Role in a Nutshell: The ideal candidate would be a skilled Data Engineer proficient in Python, Scala, or Java with a strong background in Hadoop, Spark, SQL, and various data platforms and have expertise in optimizing the performance of data applications and contributing to rapid and agile development processes. What youll do: Review and understand business requirements ensuring that development tasks are completed within the timeline provided and that issues are fully tested with minimal defects Partner with a software development team to implement best practices and optimize the performance of Data applications to ensure that client needs are met at all times Collaborate across the company and interact with our customers to understand, translate, define, design their business challenges and concerns into innovative solutions Research on new Big Data technologies, assessing maturity and alignment of technology to business and technology strategy Work in a rapid and agile development process to enable increased speed to market while maintaining appropriate controls What you need: BE/B.Tech/MCA with At least 4+ years of experience in design and development using Data Engineering technology stack and programming languages Mandatory experience in following areas: Python/Scala/Java Hadoop, HDFS, MR Spark SQL, Dataframes, RDDs SQL Hive / Snowflake/SQL Server/Bigquery Elastic Search Preferred experience in 3 or more of the following areas: Spark Streaming, Spark ML Kafka/Flume Apache NiFi Apache Airflow/Oozie Cloud-based Data Platforms NoSQL Databases HBase/Cassandra/Neo4j/MongoDB Good knowledge of the current technology landscape and ability to visualize industry trends Working knowledge of Big Data Integration with Third-party or in-house built Metadata Management, Data Quality, and Master Data Management solutions Active community involvement through articles, blogs, or speaking engagements at conferences
Posted 1 week ago
0.0 - 5.0 years
0 - 108 Lacs
Kolkata
Work from Office
We are an AI company, led by Adithiyaa Tulshan, with 15 years of experience in AI. We are looking for associates who are hungry to adapt to the change brought by AI and deliver value for clients. To apply, fill in this form: https://forms.gle/BUcqTK3gBHARPcxv5 Benefits: Flexi working, Work from home, Overtime allowance, Annual bonus, Sales incentives, Performance bonus, Joining bonus, Retention bonus, Referral bonus, Career break/sabbatical
Posted 1 week ago
4.0 - 9.0 years
20 - 25 Lacs
Bengaluru
Work from Office
We help the world run better At SAP, we enable you to bring out your best Our company culture is focused on collaboration and a shared passion to help the world run better HowWe focus every day on building the foundation for tomorrow and creating a workplace that embraces differences, values flexibility, and is aligned to our purpose-driven and future-focused work We offer a highly collaborative, caring team environment with a strong focus on learning and development, recognition for your individual contributions, and a variety of benefit options for you to choose from, Job Title: Engineering Expert Business Data Cloud Data Product Runtime Team Job Description As an Engineering Expert, you will be instrumental in the expansion of our Foundation Services team in Bangalore Your profound expertise in distributed data processing, Spark optimization, and data transformation pipelines will drive scalable data solutions, reinforcing SAP Business Data Cloud's pivotal role in SAP's Data & AI strategy, Responsibilities Lead the design and execution of optimized data processing solutions, Spearhead Spark optimization strategies to maximize performance and scalability in data processing, Architect and refine pluggable data transformation pipelines for efficient processing of large datasets, Implement GitOps practices to advance CI/CD pipelines and operational efficiency, Apply advanced SQL skills to refine data transformations across vast datasets, Integrate AI & ML technologies into high-performance data solutions, staying informed on emerging trends, Utilize SAP HANA Spark to drive innovation in data engineering processes, Mentor junior engineers, fostering a culture of continuous improvement and technical excellence, Collaborate effectively with global stakeholders to achieve successful project outcomes, Qualifications Extensive experience in data engineering, distributed data processing, and expertise in SAP HANA or similar databases, Proficiency in Python (PySpark) is essential; knowledge of Scala and Java is advantageous, Advanced understanding of Spark optimization and scalable data processing techniques, Proven experience in architecting data transformation pipelines, Knowledge of Kubernetes, GitOps, and modern cloud stacks, Strong understanding of AI & ML technologies and industry trends, Effective communication skills within a global, multi-cultural environment, Proven track record of leadership in data processing and platform initiatives, Bring out your best SAP innovations help more than four hundred thousand customers worldwide work together more efficiently and use business insight more effectively Originally known for leadership in enterprise resource planning (ERP) software, SAP has evolved to become a market leader in end-to-end business application software and related services for database, analytics, intelligent technologies, and experience management As a cloud company with two hundred million users and more than one hundred thousand employees worldwide, we are purpose-driven and future-focused, with a highly collaborative team ethic and commitment to personal development Whether connecting global industries, people, or platforms, we help ensure every challenge gets the solution it deserves At SAP, you can bring out your best, We win with inclusion SAPs culture of inclusion, focus on health and well-being, and flexible working models help ensure that everyone regardless of background feels included and can run at their best At SAP, we believe we are made stronger by the unique 
capabilities and qualities that each person brings to our company, and we invest in our employees to inspire confidence and help everyone realize their full potential We ultimately believe in unleashing all talent and creating a better and more equitable world, SAP is proud to be an equal opportunity workplace and is an affirmative action employer We are committed to the values of Equal Employment Opportunity and provide accessibility accommodations to applicants with physical and/or mental disabilities If you are interested in applying for employment with SAP and are in need of accommodation or special assistance to navigate our website or to complete your application, please send an e-mail with your request to Recruiting Operations Team: Careers@sap, For SAP employees: Only permanent roles are eligible for the SAP Employee Referral Program, according to the eligibility rules set in the SAP Referral Policy Specific conditions may apply for roles in Vocational Training, EOE AA M/F/Vet/Disability Qualified applicants will receive consideration for employment without regard to their age, race, religion, national origin, ethnicity, age, gender (including pregnancy, childbirth, et al), sexual orientation, gender identity or expression, protected veteran status, or disability, Successful candidates might be required to undergo a background verification with an external vendor, Requisition ID: 426931 | Work Area: Software-Design and Development | Expected Travel: 0 10% | Career Status: Professional | Employment Type: Regular Full Time | Additional Locations: ,
Posted 1 week ago
8.0 - 11.0 years
12 - 16 Lacs
Bengaluru
Work from Office
Job Description We are seeking an experienced and hands-on Data Engineering Lead to lead the design, development, and optimization of the data infrastructure and pipelines. In this role, you will build robust, scalable, and secure data solutions and data products while working with a team. The ideal candidate combines deep hands-on technical knowledge with strong leadership skills and a passion for building high-performance data flows. Key Responsibilities Lead the architecture, development, and maintenance of data pipelines, data lakes, and data warehouse solutions. Collaborate with stakeholders to understand data needs and translate business requirements into technical solutions. Drive best practices in data modelling, ETL/ELT development, and data quality assurance. Ensure high availability, reliability, and performance of data systems. Own and enforce data governance, security, and compliance standards across data platforms. Collaborate with DevOps and platform teams to support CI/CD pipelines and infrastructure automation for data solutions. Evaluate and recommend new tools, technologies, and frameworks that improve data delivery and analytics capabilities. Monitor system performance and usage metrics, and proactively resolve data-related issues. Requirements Experience: 8+ years of experience in data engineering. Proven experience testing large-scale data platforms and cloud-based data systems. Hands-on experience with big data and distributed systems (e.g., Spark, Hadoop, Kafka). Proven track record of building and scaling data pipelines in cloud environments. Expertise in SQL, Python, and one or more data orchestration tools (e.g., Airflow, dbt). Building data flows in Snowflake Skills Strong understanding of data warehousing concepts (e.g., dimensional modelling, star/snowflake schema). Proficiency with modern data platforms. Familiarity with containerization and orchestration tools (Docker, Kubernetes). Excellent problem-solving, analytical, and communication skills. Ability to manage competing priorities and drive cross-functional initiatives. Preferred Qualifications Experience in batch and real-time streaming data platforms. Experience with DevOps practices and CI/CD pipelines for data systems. Familiarity with data governance, lineage, and metadata management tools
Posted 1 week ago
2.0 - 5.0 years
4 - 7 Lacs
Bengaluru
Work from Office
We are seeking a talented and detail-oriented Data Developer to design, develop, and maintain data solutions that support analytics, reporting, and business decision-making. You will play a key role in building and optimizing data pipelines, integrating disparate data sources, and ensuring high-quality data delivery across systems. The ideal candidate has strong technical skills, a passion for data, and experience working in modern data environments. Key Responsibilities Develop, optimize, and maintain data pipelines and ETL/ELT processes to ingest, transform, and load data from various sources. Build and maintain data models, data warehouses, and data marts to support business intelligence and analytics needs. Collaborate with data analysts, engineers, and business users to understand data requirements and deliver fit-for-purpose solutions. Write efficient SQL queries and scripts for data manipulation, cleansing, and analysis. Ensure data quality, consistency, and integrity across platforms and applications. Support data integration efforts between internal systems and third-party platforms via APIs or file transfers. Participate in performance tuning and troubleshooting of data workflows and queries. Document data solutions, processes, and data definitions to support data governance and maintain transparency. Stay updated with industry trends and recommend improvements to the existing data architecture and practices. Requirements Experience: 3+ years of experience in ETL development, data engineering, or a similar role. Hands-on experience with ETL tools and frameworks (e.g., Apache NiFi, Snowflake, etc.). Strong SQL skills and familiarity with scripting languages such as Python or Shell. Experience working with relational databases (e.g., SQL Server, PostgreSQL, MySQL) and data warehouses. Familiarity with version control systems (e.g., Git) and CI/CD pipelines. Skills Solid understanding of data modelling concepts (e.g., star/snowflake schema, normalization). Good knowledge of cloud platforms and their data services. Strong analytical thinking and problem-solving skills. Effective communication skills and the ability to work collaboratively in a team environment. Preferred Qualifications Experience with big data tools and platforms (e.g., Spark, Hadoop, Databricks). Familiarity with data orchestration tools like Apache Airflow. Understanding of data governance, security, and compliance principles. Background in agile software development and DevOps practices
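As a small illustration of the "data quality, consistency, and integrity" responsibility above, here is a hypothetical pandas-based check; the column names and rules are assumptions for the example only, not the employer's actual checks.

```python
# Illustrative sketch of lightweight batch data-quality checks using pandas.
# Column names ("order_id", "customer_id") are hypothetical placeholders.
import pandas as pd


def run_quality_checks(df: pd.DataFrame) -> dict:
    """Return simple quality metrics for a loaded batch."""
    checks = {
        "row_count": len(df),
        "duplicate_keys": int(df.duplicated(subset=["order_id"]).sum()),
        "null_customer_ids": int(df["customer_id"].isna().sum()),
    }
    checks["passed"] = checks["duplicate_keys"] == 0 and checks["null_customer_ids"] == 0
    return checks


if __name__ == "__main__":
    sample = pd.DataFrame({"order_id": [1, 2, 2], "customer_id": [10, None, 12]})
    print(run_quality_checks(sample))
```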
Posted 1 week ago
5.0 - 10.0 years
14 - 19 Lacs
Hyderabad
Work from Office
About Neudesic Passion for technology drives us, but its innovation that defines us From design to development and support to management, Neudesic offers decades of experience, proven frameworks, and a disciplined approach to quickly deliver reliable, quality solutions that help our customers go to market faster, What sets us apart from the rest is an amazing collection of people who live and lead with our core values We believe that everyone should be Passionate about what they do, Disciplined to the core, Innovative by nature, committed to a Team and conduct themselves with Integrity If these attributes mean something to you we'd like to hear from you, We are currently looking for Power BI Developer to become a member of Neudesics Data & AI team, Experience : 5+yrs and above only Technical Skills 5+ years with PowerBI, DAX Power BI Developer/ Reporting Event hubs + Spark Stream Working with XML and file-based data processing Experience with Microsoft Azure data analytics tools, such as Azure Data Factory, Design and Developing Data strategy/ hub and spoke / data lake zones/ standards & patterns Strong Data Engineering and Data Platform, Implementation experience on Azure, Data strategy/hub and spoke / data lake zones/standards and patterns, etc Business Skills Analytic Problem-Solving: Approaching high-level challenges with a clear eye on what is important; employing the right approach/methods to make the maximum use of time and human resources, Effective Communication: Detailing your techniques and discoveries to technical and non-technical audiences in a language they can understand, Intellectual Curiosity: Exploring new territories and finding creative and unusual ways to solve problems, Data Analysis Knowledge: Understanding how data is collected, analyzed, and utilized, Be aware of phishing scams involving fraudulent career recruiting and fictitious job postings; visit our Phishing Scams page to learn more, Neudesic is an Equal Opportunity Employer Neudesic provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by local laws, Neudesic is an IBM subsidiary which has been acquired by IBM and will be integrated into the IBM organization Neudesic will be the hiring entity By proceeding with this application, you understand that Neudesic will share your personal information with other IBM companies involved in your recruitment process, wherever these are located More Information on how IBM protects your personal information, including the safeguards in case of cross-border data transfer, are available here: https: / / ibm , / us-en / privacylnk=flg-priv-usen
Posted 1 week ago
2.0 - 5.0 years
15 - 20 Lacs
Gurugram
Work from Office
About us Bain & Company is a global management consulting that helps the worlds most ambitious change makers define the future Across 65 offices in 40 countries, we work alongside our clients as one team with a shared ambition to achieve extraordinary results, outperform the competition and redefine industries Since our founding in 1973, we have measured our success by the success of our clients, and we proudly maintain the highest level of client advocacy in the industry, In 2004, the firm established its presence in the Indian market by opening the Bain Capability Center (BCC) in New Delhi The BCC is now known as BCN (Bain Capability Network) with its nodes across various geographies BCN is an integral and largest unit of (ECD) Expert Client Delivery ECD plays a critical role as it adds value to Bain's case teams globally by supporting them with analytics and research solutioning across all industries, specific domains for corporate cases, client development, private equity diligence or Bain intellectual property The BCN comprises of Consulting Services, Knowledge Services and Shared Services, Who you will work with Bain & Company is the leading consulting partner to the private equity industry and its stakeholders PEG (Private Equity Group) in Bain provides comprehensive advisory services to investors across the entire investment lifecycle PEG expertise spans from pre-investment strategy and due diligence to post-acquisition value creation and portfolio management, PIT is a specialized division dedicated to supporting the private equity sector and its stakeholders A key area of focus is harnessing Generative AI to develop innovative solutions that streamline due diligence, automate processes, and enable data-driven decision-making By integrating expertise in due diligence, operational improvement, and cutting-edge technologies like Generative AI, PIT empowers private equity clients to achieve superior returns and maintain a competitive edge in the market, What youll do This is an opportunity to be a part of BCN Data business expanding science capability area This position will be part of the PIT (PEG Innovation Team) Product Support: The team will primarily support the Beta production of AI applications for PEG (Private Equity Group) as part of Bains DD2030 (Due Diligence 2030) initiative, Generative AI Focus: A significant portion of the work is expected to relate to Generative AI applications, pushing the boundaries of innovation in the private equity space, Broader Automation: The role may also include contributing to broader PIT automation initiatives aimed at streamlining processes and enhancing efficiency across various investment lifecycle stages, The person in this role will need to: Translate business objectives into data and analytics solutions and, translate results into business insights using appropriate data engineering, analytics, visualization & Gen AI applications Leverage Gen AI skills to design and create repeatable analytical solutions to improve data quality Design, build, and deploy machine learning models using Scikit-Learn for various predictive analytics tasks, Implement and fine-tune NLP models with Hugging Face to address complex language processing challenges Collaborate with engineering team members on design requirements to turn PoC methods into repeatable data pipelines; work with Practice team to develop repeatable and scalable products Assist with creation and documentation of standard operating procedures for repeated data processes, as well as 
knowledge base of data methods Keep abreast of new developments in AI/ML technologies and best practices in data science, particularly in LLMs and generative AI, About you A Bachelors or Masters degree in in Computer Science, Artificial Intelligence, Applied Mathematics, Econometrics, Statistics, Physics, Market Research, or related field is preferred 3-5 years of experience with data science & data engineering with Hands-on experience with AI/GenAI tools such as Scikit-Learn, LangChain, and Hugging Face Experience designing and developing RESTful and GraphQL APIs to facilitate data access and integration, Proficiency with data wrangling in either R or Python is required Proficiency in SQL is required, and familiarity with NoSQL data stores is a plus Familiarity with MLOps practices for model lifecycle management Experience with Git and modern software development workflow is a plus Experience with containerization such as Docker/Kubernetes is a plus Agile way of working and tools (Jira, Confluence, Miro) Strong interpersonal and communication skills are a must, Experience with Micrsoft Office suite (Word, Excel, PowerPoint, Teams) is preferred Ability to explain and discuss Gen AI and Data Engineering technicalities to a business audience, What makes us a great place to work We are proud to be consistently recognized as one of the world's best places to work, a champion of diversity and a model of social responsibility We are currently ranked the #1 consulting firm on Glassdoors Best Places to Work list, and we have maintained a spot in the top four on Glassdoor's list for the last 12 years We believe that diversity, inclusion and collaboration is key to building extraordinary teams We hire people with exceptional talents, abilities and potential, then create an environment where you can become the best version of yourself and thrive both professionally and personally We are publicly recognized by external parties such as Fortune, Vault, Mogul, Working Mother, Glassdoor and the Human Rights Campaign for being a great place to work for diversity and inclusion, women, LGBTQ and parents,
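As a rough illustration of the Scikit-Learn and Hugging Face toolkits this role names, the sketch below trains a toy classifier and runs a pretrained sentiment pipeline. The dataset and the default pipeline model are illustrative assumptions, not Bain's actual methods or data.

```python
# Illustrative sketch only: a Scikit-Learn classifier plus a Hugging Face pipeline.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from transformers import pipeline

# Classical ML with Scikit-Learn on a toy dataset.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)
print("holdout accuracy:", clf.score(X_test, y_test))

# NLP with a pretrained Hugging Face pipeline (downloads a default model on first run).
sentiment = pipeline("sentiment-analysis")
print(sentiment("The due-diligence findings look promising."))
```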
Posted 1 week ago
1.0 - 5.0 years
15 - 20 Lacs
Gurugram
Work from Office
Position Summary The Senior Analyst, Data & Marketing Analytics, will be a key contributor in building the foundation of Bain's marketing analytics ecosystem. This role offers a meaningful opportunity for an experienced analyst to take on broader responsibilities and contribute to strategic outcomes. You will play a hands-on role in designing data infrastructure, delivering insights, and enabling scalable reporting, all while working closely with marketing, digital, and technology stakeholders. From building data pipelines and dashboards to running agile projects and leading high-stakes discussions, this is an opportunity to shape how analytics powers strategic marketing at Bain. You will thrive in an agile, fast-paced environment and collaborate closely with stakeholders across the marketing and analytics ecosystem. Responsibilities: Data Analytics & Insight Generation (30%) Analyze marketing, digital, and campaign data to uncover patterns and deliver actionable insights. Support performance measurement, experimentation, and strategic decision-making across the marketing funnel. Translate business questions into structured analyses and data-driven narratives. Data Infrastructure & Engineering (30%) Design and maintain scalable data pipelines and workflows using SQL, Python, and Databricks. Build and evolve a marketing data lake, integrating APIs and data from multiple platforms and tools. Work across cloud environments (Azure, AWS) to support analytics-ready data at scale. Project & Delivery Ownership (25%) Serve as project lead or scrum owner across analytics initiatives, planning sprints, managing delivery, and driving alignment. Use tools like JIRA to manage work in an agile environment and ensure timely execution. Collaborate with cross-functional teams to align priorities and execute on roadmap initiatives. Visualization & Platform Enablement (15%) Build high-impact dashboards and data products using Tableau, with a focus on usability, scalability, and performance. Enable stakeholder self-service through clean data architecture and visualization best practices. Experiment with emerging tools and capabilities, including GenAI for assisted analytics. Experience 5+ years of experience in data analytics, digital analytics, or data engineering, ideally in a marketing or commercial context. Hands-on experience with SQL, Python, and tools such as Databricks, Azure, or AWS. Proven track record of building and managing data lakes, ETL pipelines, and API integrations. Strong proficiency in Tableau; experience with Tableau Prep is a plus. Familiarity with Google Analytics (GA4), GTM, and social media analytics platforms. Experience working in agile teams, with comfort using JIRA for sprint planning and delivery. Exposure to predictive analytics, modeling, and GenAI applications is a plus. Strong communication and storytelling skills: able to lead high-stakes meetings and deliver clear insights to senior stakeholders. Excellent organizational and project management skills; confident in managing competing priorities. High attention to detail, ownership mindset, and a collaborative, delivery-focused approach.
Posted 1 week ago
2.0 - 5.0 years
4 - 7 Lacs
Chennai
Work from Office
The ideal candidate will have a strong background in data engineering and excellent problem-solving skills. Roles and Responsibility Design and develop large-scale data pipelines and architectures to support business intelligence and analytics. Collaborate with cross-functional teams to identify and prioritize project requirements. Develop and maintain complex data models and databases to ensure data integrity and consistency. Implement data quality checks and validation processes to ensure accuracy and reliability. Optimize data processing workflows to improve performance and efficiency. Troubleshoot and resolve technical issues related to data engineering projects. Job Requirements Strong understanding of data engineering principles and practices. Experience with data modeling, database design, and development. Proficiency in programming languages such as Python or Java. Excellent problem-solving skills and attention to detail. Ability to work collaboratively in a team environment. Strong communication and interpersonal skills. Familiarity with industry-standard tools and technologies used in data engineering. Educational qualification: Any Graduate.
Posted 1 week ago
7.0 - 12.0 years
8 - 13 Lacs
Chennai
Work from Office
Overview We are looking for a highly skilled Lead Engineer to spearhead our data and application migration projects. The ideal candidate will have in-depth knowledge of cloud migration strategies, especially with AWS, and hands-on experience in large-scale migration initiatives. This role requires strong leadership abilities, technical expertise, and a keen understanding of both the source and target platforms. Responsibilities Lead end-to-end migration projects, including planning, design, testing, and implementation. Collaborate with stakeholders to define migration requirements and goals. Perform assessments of existing environments to identify the scope and complexity of migration tasks. Design and architect scalable migration strategies, ensuring minimal downtime and business continuity. Oversee the migration of on-premises applications, databases, and data warehouses to cloud infrastructure. Ensure the security, performance, and reliability of migrated workloads. Provide technical leadership and guidance to the migration team, ensuring adherence to best practices. Troubleshoot and resolve any technical challenges related to the migration process. Collaborate with cross-functional teams, including infrastructure, development, and security. Document migration procedures and lessons learned for future reference.
Posted 1 week ago
5.0 - 10.0 years
9 - 13 Lacs
Gurugram
Work from Office
Date 18 Jun 2025 Location: Gurgaon, HR, IN Company Alstom At Alstom, we understand transport networks and what moves people. From high-speed trains, metros, monorails, and trams, to turnkey systems, services, infrastructure, signalling, and digital mobility, we offer our diverse customers the broadest portfolio in the industry. Every day, 80,000 colleagues lead the way to greener and smarter mobility worldwide, connecting cities as we reduce carbon and replace cars. Your future role Take on a new challenge and apply your microservices development expertise in a cutting-edge field. You'll work alongside innovative and collaborative teammates. You'll play a pivotal role in shaping the future of our digital architecture by designing and implementing scalable microservices solutions. Day-to-day, you'll work closely with development teams, system architects, and stakeholders across the business (e.g., engineering, product management), troubleshoot and optimize system performance, and much more. You'll specifically take care of migrating legacy services to a container-based microservices architecture, but also contribute to the continuous improvement of development processes and best practices. We'll look to you for: Collaborating with development teams to migrate legacy services to a container-based microservices architecture Designing, developing, and refactoring microservices leveraging Kubernetes, Istio, and ingresses Utilizing message queues such as RabbitMQ or Kafka for integration between services and IoT devices Designing and developing well-structured, performant APIs and databases for microservices Proposing and implementing software and system architectures and best practices Staying up-to-date with new technologies and contributing to the continuous improvement of architecture and development processes Supporting application performance tuning, troubleshooting, and system monitoring tools Contributing to program plans, timelines, and estimates while effectively communicating with stakeholders All about you We value passion and attitude over experience. That's why we don't expect you to have every single skill. Instead, we've listed some that we think will help you succeed and grow in this role: Bachelor's or Master's degree in Computer Science, Information Systems, or a related engineering field 6 to 9 years of experience in IT and/or digital companies developing microservices and migrating legacy services Outstanding technical leadership with hands-on experience in developing high-performing, scalable microservices Excellent understanding of Python and any REST API framework like Django, Flask, FastAPI, Spring Boot, or .NET Expertise in designing, analyzing, and maintaining large-scale distributed systems Deep understanding of Agile methodologies, CI/CD, testing, and code quality standards Proficiency in containerization technologies such as Kubernetes, Istio, and ingress Strong experience with message queues such as RabbitMQ or Kafka Knowledge of databases, including SQL and NoSQL, such as Elasticsearch and PostgreSQL Experience with Azure cloud provisioning and deployment Familiarity with cloud technologies, service models, and deployment models Experience working with data engineering or data science teams is a plus Demonstrated teamwork and collaboration in a professional setting Passion for staying current with new technologies and making recommendations for adoption Things you'll enjoy Join us on a life-long transformative journey: the rail industry is here to stay, so you can grow and develop new skills and experiences throughout your career.
You'll also: Enjoy stability, challenges, and a long-term career free from boring daily routines Work with cutting-edge security standards for rail signalling Collaborate with transverse teams and supportive colleagues Contribute to innovative projects that make a difference Utilise our flexible and inclusive working environment Steer your career in whatever direction you choose across functions and countries Benefit from our investment in your development through award-winning learning programs Progress towards leadership or technical expert roles Benefit from a fair and dynamic reward package that recognises your performance and potential, plus comprehensive and competitive social coverage (life, medical, pension) You don't need to be a train enthusiast to thrive with us. We guarantee that when you step onto one of our trains with your friends or family, you'll be proud. If you're up for the challenge, we'd love to hear from you! Important to note As a global business, we're an equal-opportunity employer that celebrates diversity across the 63 countries we operate in. We're committed to creating an inclusive workplace for everyone.
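To ground the Python REST frameworks mentioned above (Django, Flask, FastAPI), here is a minimal, hypothetical FastAPI service sketch; the endpoints and payload model are placeholders rather than Alstom's actual design.

```python
# Minimal illustrative FastAPI microservice. Run with: uvicorn service:app --reload
# Endpoint paths and the Reading model are hypothetical placeholders.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="telemetry-service")


class Reading(BaseModel):
    device_id: str
    value: float


@app.get("/health")
def health() -> dict:
    # Liveness probe of the kind Kubernetes would call.
    return {"status": "ok"}


@app.post("/readings")
def ingest(reading: Reading) -> dict:
    # In a real service this might publish to RabbitMQ/Kafka; here it just echoes.
    return {"accepted": True, "device_id": reading.device_id}
```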
Posted 1 week ago
5.0 - 10.0 years
7 - 12 Lacs
Bengaluru
Work from Office
In the IBM Chief Information Office, you will be part of a dynamic team driving the future of AI and data science in large-scale enterprise transformations. We offer a collaborative environment where your technical expertise will be valued, and your professional development will be supported. Join us to work on challenging projects, leverage the latest technologies, and make a tangible impact on leading organisations. As a Data Scientist within IBM's Chief Information Office, you will support AI-driven projects across the enterprise. You will apply your technical skills in AI, machine learning, and data analytics to assist in implementing data-driven solutions that align with business goals. This role involves working with team members to translate data insights into actionable recommendations. Key Responsibilities: Technical Execution and Leadership: Develop and deploy AI models and data analytics solutions. Support the implementation and optimisation of AI-driven strategies per business stakeholder requirements. Help refine data-driven methodologies for transformation projects. Data Science and AI: Design and implement machine learning solutions and statistical models, from problem formulation through deployment, to analyse complex datasets and generate actionable insights. Learn and utilise cloud platforms to ensure the scalability of AI solutions. Leverage reusable assets and apply IBM standards for data science and development. Project Support: Lead and contribute to various stages of AI and data science projects, from data exploration to model development. Monitor project timelines and help resolve technical challenges. Design and implement measurement frameworks to benchmark AI solutions, quantifying business impact through KPIs. Collaboration: Ensure alignment to stakeholders' strategic direction and tactical needs. Work with data engineers, software developers, and other team members to integrate AI solutions into existing systems. Contribute technical expertise to cross-functional teams. Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Bachelor's or Master's in Computer Science, Data Science, Statistics, or a related field is required; an advanced degree is strongly preferred Experience: 5+ years of experience in data science, AI, or analytics with a focus on implementing data-driven solutions Experience with data cleaning, data analysis, A/B testing, and data visualization Experience with AI technologies through coursework or projects Technical Skills: Proficiency in SQL and Python for performing data analysis and developing machine learning models Knowledge of common machine learning algorithms and frameworks: linear regression, decision trees, random forests, gradient boosting (e.g., XGBoost, LightGBM), neural networks, and deep learning frameworks such as TensorFlow and PyTorch Experience with cloud-based platforms and data processing frameworks Understanding of large language models (LLMs) Familiarity with IBM's watsonx product suite Familiarity with object-oriented programming Analytical Skills: Strong problem-solving abilities and eagerness to learn
Posted 1 week ago
6.0 - 7.0 years
14 - 17 Lacs
Hyderabad
Work from Office
As Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using Spark Framework with Python or Scala on Hadoop and Azure Cloud Data Platform Responsibilities: Experienced in building data pipelines to ingest, process, and transform data from files, streams and databases. Process the data with Spark, Python, PySpark and Hive, HBase or other NoSQL databases on Azure Cloud Data Platform or HDFS Experienced in developing efficient software code for multiple use cases leveraging Spark Framework / using Python or Scala and Big Data technologies for various use cases built on the platform Experience in developing streaming pipelines Experience working with Hadoop / Azure ecosystem components to implement scalable solutions to meet the ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka, any cloud computing etc Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Total 6 - 7+ years of experience in Data Management (DW, DL, Data Platform, Lakehouse) and Data Engineering skills Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark / Python or Scala; Minimum 3 years of experience on Cloud Data Platforms on Azure; Experience in DataBricks / Azure HDInsight / Azure Data Factory, Synapse, SQL Server DB Good to excellent SQL skills Preferred technical and professional experience Certification in Azure and Data Bricks or Cloudera Spark Certified developers
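A minimal PySpark sketch of the ingest, process, and transform work this posting describes is shown below; file paths and column names are hypothetical placeholders, not details from the employer.

```python
# Illustrative PySpark batch job: ingest raw files, cleanse, write curated Parquet.
# Paths and columns ("event_id", "event_ts", "event_type") are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_events_batch").getOrCreate()

# Ingest raw CSV files (could equally be ADLS / HDFS paths).
raw = spark.read.option("header", True).csv("/data/raw/events/")

# Basic cleansing and enrichment.
cleaned = (
    raw.dropDuplicates(["event_id"])
       .withColumn("event_date", F.to_date("event_ts"))
       .filter(F.col("event_type").isNotNull())
)

# Write partitioned Parquet for downstream consumers.
cleaned.write.mode("overwrite").partitionBy("event_date").parquet("/data/curated/events/")
spark.stop()
```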
Posted 1 week ago
3.0 - 6.0 years
14 - 18 Lacs
Mysuru
Work from Office
As Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using Spark Framework with Python or Scala on Hadoop and Azure Cloud Data Platform Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Strong and proven background in Information Technology & working knowledge of .NET Core, C#, REST API, LINQ, Entity Framework, XUnit. Troubleshooting issues related to code performance. Working knowledge of Angular 15 or later, Typescript, Jest Framework, HTML 5 and CSS 3 & MS SQL Databases, troubleshooting issues related to DB performance Good understanding of CQRS, mediator, repository pattern. Good understanding of CI/CD pipelines and SonarQube & messaging and reverse proxy Preferred technical and professional experience Good understanding of AuthN and AuthZ techniques like (windows, basic, JWT). Good understanding of GIT and its processes like pull request, merge, pull, commit Methodology skills like AGILE, TDD, UML
Posted 1 week ago
4.0 - 9.0 years
12 - 16 Lacs
Kochi
Work from Office
As Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using Spark Framework with Python or Scala on Hadoop and Azure Cloud Data Platform Responsibilities: Experienced in building data pipelines to ingest, process, and transform data from files, streams and databases. Process the data with Spark, Python, PySpark and Hive, HBase or other NoSQL databases on Azure Cloud Data Platform or HDFS Experienced in developing efficient software code for multiple use cases leveraging Spark Framework / using Python or Scala and Big Data technologies for various use cases built on the platform Experience in developing streaming pipelines Experience working with Hadoop / Azure ecosystem components to implement scalable solutions to meet the ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka, any cloud computing etc Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark / Python or Scala; Minimum 3 years of experience on Cloud Data Platforms on Azure; Experience in DataBricks / Azure HDInsight / Azure Data Factory, Synapse, SQL Server DB Good to excellent SQL skills Exposure to streaming solutions and message brokers like Kafka technologies Preferred technical and professional experience Certification in Azure and Data Bricks or Cloudera Spark Certified developers
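For the streaming pipelines and Kafka exposure called out above, a small Spark Structured Streaming sketch follows; the broker address, topic, and paths are assumptions, and the job needs the spark-sql-kafka connector on the classpath.

```python
# Illustrative Spark Structured Streaming job reading from Kafka and landing Parquet.
# Broker, topic, and paths are hypothetical; requires the spark-sql-kafka package.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events_stream").getOrCreate()

stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka values arrive as bytes; cast to string before parsing downstream.
decoded = stream.select(F.col("value").cast("string").alias("payload"))

query = (
    decoded.writeStream.format("parquet")
    .option("path", "/data/streaming/events/")
    .option("checkpointLocation", "/data/checkpoints/events/")
    .start()
)
query.awaitTermination()
```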
Posted 1 week ago
2.0 - 5.0 years
6 - 10 Lacs
Pune
Work from Office
As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities like creating pipelines/workflows for Source to Target and implementing solutions that tackle the clients' needs. Your primary responsibilities include: Design, build, optimize and support new and existing data models and ETL processes based on our clients' business requirements. Build, deploy and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization. Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to. Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Design, develop, and maintain Ab Initio graphs for extracting, transforming, and loading (ETL) data from diverse sources to various target systems. Implement data quality and validation processes within Ab Initio. Data Modeling and Analysis: Collaborate with data architects and business analysts to understand data requirements and translate them into effective ETL processes. Analyze and model data to ensure optimal ETL design and performance. Ab Initio Components: Utilize Ab Initio components such as Transform Functions, Rollup, Join, Normalize, and others to build scalable and efficient data integration solutions. Implement best practices for reusable Ab Initio components Preferred technical and professional experience Optimize Ab Initio graphs for performance, ensuring efficient data processing and minimal resource utilization. Conduct performance tuning and troubleshooting as needed. Collaboration: Work closely with cross-functional teams, including data analysts, database administrators, and quality assurance, to ensure seamless integration of ETL processes. Participate in design reviews and provide technical expertise to enhance overall solution quality Documentation
Posted 1 week ago
2.0 - 5.0 years
14 - 17 Lacs
Mumbai
Work from Office
As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support/guidance to project teams on complex coding, issue resolution and execution. Your primary responsibilities include: Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements. Strive for continuous improvements by testing the built solution and working under an agile framework. Discover and implement the latest technology trends to maximize and build creative solutions Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Experience with Apache Spark (PySpark): in-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing. Big Data Technologies: familiarity with Hadoop, HDFS, Kafka, and other big data tools. Data Engineering Skills: strong understanding of ETL pipelines, data modeling, and data warehousing concepts. Strong proficiency in Python: expertise in Python programming with a focus on data processing and manipulation. Data Processing Frameworks: knowledge of data processing libraries such as Pandas, NumPy. SQL Proficiency: experience writing optimized SQL queries for large-scale data analysis and transformation. Cloud Platforms: experience working with cloud platforms like AWS, Azure, or GCP, including using cloud storage systems Preferred technical and professional experience Define, drive, and implement an architecture strategy and standards for end-to-end monitoring. Partner with the rest of the technology teams including application development, enterprise architecture, testing services, network engineering. Good to have detection and prevention tools for Company products and Platform and customer-facing
Posted 1 week ago
5.0 - 10.0 years
14 - 17 Lacs
Navi Mumbai
Work from Office
As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities like creating pipelines/workflows for Source to Target and implementing solutions that tackle the clients' needs. Your primary responsibilities include: Design, build, optimize and support new and existing data models and ETL processes based on our clients' business requirements. Build, deploy and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization. Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Must have 5+ years' experience in Big Data - Hadoop, Spark, Scala, Python, HBase, Hive. Good to have: AWS - S3, Athena, DynamoDB, Lambda, Jenkins, GIT. Developed Python and PySpark programs for data analysis. Good working experience with Python to develop a custom framework for generating rules (just like a rules engine). Developed Python code to gather data from HBase and design the solution to implement using PySpark. Apache Spark DataFrames/RDDs were used to apply business transformations, and Hive Context objects were utilized to perform read/write operations Preferred technical and professional experience Understanding of DevOps. Experience in building scalable end-to-end data ingestion and processing solutions Experience with object-oriented and/or functional programming languages, such as Python, Java and Scala
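The posting mentions applying business transformations with Spark DataFrames/RDDs and Hive Context objects; below is an illustrative sketch using the modern SparkSession equivalent of that pattern, with hypothetical table and column names.

```python
# Illustrative Spark + Hive pattern (SparkSession with Hive support rather than the
# legacy HiveContext). Table names and columns are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder.appName("orders_transform")
    .enableHiveSupport()
    .getOrCreate()
)

# Read a source table registered in the Hive metastore.
orders = spark.table("staging.orders")

# Apply business transformations with DataFrame operations.
summary = (
    orders.filter(F.col("status") == "COMPLETE")
          .groupBy("customer_id")
          .agg(F.sum("amount").alias("total_spend"))
)

# Persist the result back to Hive for downstream consumers.
summary.write.mode("overwrite").saveAsTable("analytics.customer_spend")
spark.stop()
```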
Posted 1 week ago
3.0 - 6.0 years
14 - 18 Lacs
Bengaluru
Work from Office
As Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using Spark Framework with Python or Scala on Hadoop and Azure Cloud Data Platform Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Strong and proven background in Information Technology & working knowledge of .NET Core, C#, REST API, LINQ, Entity Framework, XUnit. Troubleshooting issues related to code performance. Working knowledge of Angular 15 or later, Typescript, Jest Framework, HTML 5 and CSS 3 & MS SQL Databases, troubleshooting issues related to DB performance Good understanding of CQRS, mediator, repository pattern. Good understanding of CI/CD pipelines and SonarQube & messaging and reverse proxy Preferred technical and professional experience Good understanding of AuthN and AuthZ techniques like (windows, basic, JWT). Good understanding of GIT and its processes like pull request, merge, pull, commit Methodology skills like AGILE, TDD, UML
Posted 1 week ago
2.0 - 5.0 years
14 - 17 Lacs
Hyderabad
Work from Office
As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support/guidance to project teams on complex coding, issue resolution and execution. Your primary responsibilities include: Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements. Strive for continuous improvements by testing the built solution and working under an agile framework. Discover and implement the latest technology trends to maximize and build creative solutions Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Experience with Apache Spark (PySpark): in-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing. Big Data Technologies: familiarity with Hadoop, HDFS, Kafka, and other big data tools. Data Engineering Skills: strong understanding of ETL pipelines, data modeling, and data warehousing concepts. Strong proficiency in Python: expertise in Python programming with a focus on data processing and manipulation. Data Processing Frameworks: knowledge of data processing libraries such as Pandas, NumPy. SQL Proficiency: experience writing optimized SQL queries for large-scale data analysis and transformation. Cloud Platforms: experience working with cloud platforms like AWS, Azure, or GCP, including using cloud storage systems Preferred technical and professional experience Define, drive, and implement an architecture strategy and standards for end-to-end monitoring. Partner with the rest of the technology teams including application development, enterprise architecture, testing services, network engineering. Good to have detection and prevention tools for Company products and Platform and customer-facing
Posted 1 week ago
4.0 - 9.0 years
6 - 10 Lacs
Pune
Work from Office
As a Data Engineer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets. In this role, your responsibilities may include: Implementing and validating predictive models as well as creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques Designing and implementing various enterprise search applications such as Elasticsearch and Splunk for client requirements Work in an Agile, collaborative environment, partnering with other scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours. Build teams or write programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modeling results Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise 4+ years of experience in data modeling, data architecture. Proficiency in data modeling tools such as Erwin, IBM InfoSphere Data Architect, and database management systems Familiarity with different data models like relational, dimensional and NoSQL databases. Understanding of business processes and how data supports business decision making. Strong understanding of database design principles, data warehousing concepts, and data governance practices Preferred technical and professional experience Excellent analytical and problem-solving skills with a keen attention to detail. Ability to work collaboratively in a team environment and manage multiple projects simultaneously. Knowledge of programming languages such as SQL
Posted 1 week ago
5.0 - 10.0 years
14 - 17 Lacs
Mumbai
Work from Office
As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities like creating pipelines/workflows for Source to Target and implementing solutions that tackle the clients' needs. Your primary responsibilities include: Design, build, optimize and support new and existing data models and ETL processes based on our clients' business requirements. Build, deploy and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization. Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to. Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Must have 5+ years' experience in Big Data - Hadoop, Spark, Scala, Python, HBase, Hive. Good to have: AWS - S3, Athena, DynamoDB, Lambda, Jenkins, GIT. Developed Python and PySpark programs for data analysis. Good working experience with Python to develop a custom framework for generating rules (just like a rules engine). Developed Python code to gather data from HBase and design the solution to implement using PySpark. Apache Spark DataFrames/RDDs were used to apply business transformations, and Hive Context objects were utilized to perform read/write operations. Preferred technical and professional experience Understanding of DevOps. Experience in building scalable end-to-end data ingestion and processing solutions Experience with object-oriented and/or functional programming languages, such as Python, Java and Scala
Posted 1 week ago
2.0 - 6.0 years
12 - 16 Lacs
Bengaluru
Work from Office
Design, develop, and manage our data infrastructure on AWS, with a focus on data warehousing solutions. Write efficient, complex SQL queries for data extraction, transformation, and loading. Utilize DBT for data modeling and transformation. Use Python for data engineering tasks, demonstrating strong work experience in this area. Implement scheduling tools like Airflow, Control-M, or shell scripting to automate data processes and workflows. Participate in an Agile environment, adapting quickly to changing priorities and requirements Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Mandatory Skills: Candidate should have worked on traditional data warehousing with any database (Oracle or DB2 or SQL Server) (Redshift optional) Candidate should have strong SQL skills and the ability to write complex queries using analytical functions. Prior working experience on the AWS platform Python programming experience for data engineering. Experience in PySpark/Spark Working knowledge of the data pipeline tool Airflow The below skills are nice to have: Experience with DBT, Exposure to working in an Agile environment. Proven ability to troubleshoot and resolve production issues under a DevOps model A track record of continuously identifying opportunities to improve the performance and quality of your ecosystem. Experience monitoring performance and ensuring Preferred technical and professional experience Knowledge of DBT for data modeling and transformation is a plus. Experience with PySpark or Spark is highly desirable
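As a sketch of the Airflow plus DBT orchestration this role describes, the hypothetical DAG below runs a warehouse load step followed by a dbt run; the DAG id, commands, and directories are placeholders, not the employer's setup.

```python
# Illustrative Airflow DAG (2.4+ "schedule" style): load staging, then run dbt models.
# Commands and paths are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="warehouse_nightly",
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",
    catchup=False,
) as dag:
    load_staging = BashOperator(
        task_id="load_staging",
        bash_command="python /opt/jobs/load_staging.py",
    )
    dbt_run = BashOperator(
        task_id="dbt_transformations",
        bash_command="dbt run --project-dir /opt/dbt --profiles-dir /opt/dbt",
    )
    load_staging >> dbt_run
```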
Posted 1 week ago
4.0 - 9.0 years
12 - 16 Lacs
Kochi
Work from Office
As Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using Spark Framework with Python or Scala on Hadoop and AWS Cloud Data Platform Responsibilities: Experienced in building data pipelines to ingest, process, and transform data from files, streams and databases. Process the data with Spark, Python, PySpark, Scala, and Hive, HBase or other NoSQL databases on Cloud Data Platforms (AWS) or HDFS Experienced in developing efficient software code for multiple use cases leveraging Spark Framework / using Python or Scala and Big Data technologies for various use cases built on the platform Experience in developing streaming pipelines Experience working with Hadoop / AWS ecosystem components to implement scalable solutions to meet the ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka, any cloud computing etc Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark / Python or Scala; Minimum 3 years of experience on Cloud Data Platforms on AWS; Experience in AWS EMR / AWS Glue / DataBricks, AWS RedShift, DynamoDB Good to excellent SQL skills Exposure to streaming solutions and message brokers like Kafka technologies Preferred technical and professional experience Certification in AWS and Data Bricks or Cloudera Spark Certified developers
Posted 1 week ago