3.0 - 5.0 years
5 - 7 Lacs
Pune
Work from Office
Key Responsibilities: Design, develop, and maintain scalable big data solutions Build and optimize data pipelines using tools like Spark, Kafka, and Hive Develop ETL processes to ingest and transform large volumes of data from multiple sources Collaborate with data scientists, analysts, and business stakeholders to support data needs Implement data quality, monitoring, and governance frameworks Optimize data storage and query performance on distributed systems Work with cloud-based platforms like AWS, Azure, or GCP for big data workloads Ensure data security, compliance, and privacy standards are met
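The responsibilities above describe a classic extract-transform-load loop. As a purely illustrative sketch (not part of the listing), the same shape can be shown with standard-library Python and SQLite standing in for Spark/Kafka/Hive; every record, table, and field name here is invented:

```python
import sqlite3

# Extract: toy records as they might arrive from an upstream source
# (in the role described, this would be Kafka topics or Hive tables).
raw_events = [
    {"user": "u1", "amount": "125.50", "country": "IN"},
    {"user": "u2", "amount": "80.00", "country": "US"},
    {"user": "u1", "amount": "40.25", "country": "IN"},
]

# Transform: cast string amounts to floats and aggregate spend per user.
totals = {}
for e in raw_events:
    totals[e["user"]] = totals.get(e["user"], 0.0) + float(e["amount"])

# Load: write the aggregate into a queryable store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user_spend (user TEXT PRIMARY KEY, total REAL)")
conn.executemany("INSERT INTO user_spend VALUES (?, ?)", totals.items())

rows = dict(conn.execute("SELECT user, total FROM user_spend ORDER BY user"))
print(rows)  # {'u1': 165.75, 'u2': 80.0}
```

A production pipeline adds schema enforcement, partitioning, and monitoring around exactly this skeleton.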
Posted 1 month ago
2.0 - 5.0 years
6 - 10 Lacs
Hyderabad
Work from Office
Req ID: 329860. NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Python API Developer to join our team in Hyderabad, Telangana (IN-TG), India (IN), or Bangalore. The right candidate will have expert-level experience in Python, the Hadoop big data environment, model testing, model deployment, Google Cloud Platform, Vertex AI, and developing repeatable, predictable Python-based frameworks. The role requires the candidate to be the technical lead and agile developer for key model operationalization tasks. Responsibilities: 1. Design API and batch applications 2. Develop ingestion pipelines from RDBMS to data lakes 3. AI/ML model testing and model deployment 4. Release coordination to enable safe and automated production deployments 5. Create application architecture and design documentation 6. Develop applications using agile methodologies 7. Act as agile SME for the team 8. Perform unit testing 9. Support application deployment to production 10. Interact with various teams to conduct daily work
Posted 1 month ago
3.0 - 6.0 years
9 - 13 Lacs
Bengaluru
Work from Office
Req ID: 330827. NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a GCP Data Engineer to join our team in Bengaluru, Karnataka (IN-KA), India (IN). Sr Developer - Data Engineering. Mandatory Skills: Data Engineer, Databricks, Spark, Python, SQL, Azure/GCP. JD: Proficiency in SQL and Python. Experience with ETL tools. Knowledge of cloud platforms (e.g., AWS, Azure, Google Cloud). Understanding of data warehousing concepts. Understanding of big data technologies (e.g., Spark, BigQuery). Knowledge of GCP ecosystem tooling. General understanding of running ELT/ETL jobs. BigQuery ETL/ELT using GoogleSQL/Python. Dataflow pipelines. Airflow orchestration.
Posted 1 month ago
3.0 - 5.0 years
25 - 30 Lacs
Bengaluru
Work from Office
Job Title: Data Engineer. Job Description: Hello, we're IG Group. We're a global, FTSE 250-listed company made up of a collection of progressive fintech brands in the world of online trading and investing. The best part? We've snapped up many awards for our top-class platforms, forward-thinking products, and incredible employee experiences. Who we're looking for: You're curious about things like the client experience, the rapid developments in tech, and the complex world of fintech regulation. You're also a confident, creative thinker with a knack for innovating. We know that you know every problem has a solution. Here, you can try new ideas, and lead the way in creating inspiring experiences for our clients and everyone around you. About the team: We are looking for a Data Engineer for our team in our Bangalore office. The role, as well as the projects you will participate in, is crucial for the entire IG. Data Engineering is responsible for collecting data from various sources and generating insights for our business stakeholders. As a Data Engineer you will be responsible for the delivery of our projects and participate in the whole project life cycle (development and delivery), applying Agile best practices and ensuring good engineering quality. You will work with other technical team members to build ingestion pipelines and a shared company-wide data platform in GCP, as well as supporting and evolving our wide range of services in the cloud. You will own the development and support of our applications, which also includes our out-of-hours support rota.
The skills you'll need: You will be someone who can demonstrate: Good understanding of the IT development life cycle with a focus on quality and continuous delivery and integration. 3-5 years of experience in Python, data processing (pandas/PySpark), and SQL. Good experience with cloud (GCP). Good communication skills, being able to communicate technical concepts to a non-technical audience. Proven experience working in Agile environments. Experience working on data-related projects, from data ingestion to analytics and reporting. Good understanding of big data and distributed compute frameworks such as Spark, for both batch and streaming workloads. Familiarity with Kafka and different data formats: Avro/Parquet/ORC/JSON. It would be great if you have experience with: GitLab. Containerisation (Nomad or Kubernetes). How you'll grow: When you join IG Group, we want you to have more than a job - we want you to have a career. And you can. If you spot an opportunity, we want you to chase it. Stretch yourself, challenge your self-beliefs and go for the things you dream of. With internal and external learning opportunities and the tools to help you skyrocket to success, we'll support you all the way. And these opportunities truly are endless because we have some bold targets. We plan to expand our global presence, increase revenue growth, and ultimately deliver the world's best trading experience. We'd love to have you along for the ride. The perks: It really is more than a job. We'll recognise your talent and make sure that you can still have a life - at work, and outside of it. Networks, committees, awards, sports and social clubs, mentorships, volunteering opportunities, extra time off - the list goes on. Matched giving for your fundraising activity. Flexible working hours and work-from-home opportunities. Performance-related bonuses. Insurance and medical plans. Career-focused technical and leadership training. Contribution to gym memberships and more. A day off on your birthday.
Two days of volunteering leave per year. Where you'll work: We follow a hybrid working model; we reckon it's the best of both worlds. This model also feeds into our secret ingredients for innovation: diversity, flexibility, and close connection. Plus, you'll be welcomed into a diverse and inclusive workforce with a lot of creative energy. Ask our employees what their favourite thing is about working at IG, and you'll hear an echo of our culture! That's because you can come to work as your authentic self. The things that make you, you - like your ethnicity, sexual orientation, faith, age, gender identity/expression or physical capacity - can bring a fresh perspective or new skill to our business. That's why we welcome people from various walks of life, and anyone who wants to help us realise our vision and strategy. So, if you're keen to connect with our values, and lead the charge on innovation, you know what to do. Apply! Number of openings: 0
Posted 1 month ago
3.0 - 5.0 years
5 - 9 Lacs
New Delhi, Ahmedabad, Bengaluru
Work from Office
We are seeking a skilled Big Data Developer with 3+ years of experience to develop, maintain, and optimize large-scale data pipelines using frameworks like Spark, PySpark, and Airflow. The role involves working with SQL, Impala, Hive, and PL/SQL for advanced data transformations and analytics, designing scalable data storage systems, and integrating structured and unstructured data using tools like Sqoop. The ideal candidate will collaborate with cross-functional teams to implement data warehousing strategies and leverage BI tools for insights. Proficiency in Python programming, workflow orchestration with Airflow, and Unix/Linux environments is essential. Location: Remote - Delhi/NCR, Bangalore/Bengaluru, Hyderabad/Secunderabad, Chennai, Pune, Kolkata, Ahmedabad, Mumbai
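The Hive/Impala analytics work this listing mentions typically centres on windowed SQL. As an illustrative sketch only (not from the listing), the same analytic shape can be run on SQLite from standard-library Python; the table, columns, and data are all invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, product TEXT, revenue REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [
        ("north", "a", 100.0),
        ("north", "b", 300.0),
        ("south", "a", 250.0),
        ("south", "b", 150.0),
    ],
)

# Window function: rank products by revenue within each region --
# the same analytic-query shape HiveQL and Impala SQL support.
query = """
SELECT region, product, revenue,
       RANK() OVER (PARTITION BY region ORDER BY revenue DESC) AS rnk
FROM sales
ORDER BY region, rnk
"""
top = [(r, p) for r, p, _, rnk in conn.execute(query) if rnk == 1]
print(top)  # [('north', 'b'), ('south', 'a')]
```

On a real cluster the engine differs but the PARTITION BY / ORDER BY window logic carries over unchanged.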
Posted 1 month ago
13.0 - 17.0 years
32 - 35 Lacs
Noida, Gurugram
Work from Office
Google Cloud Platform: GCS, DataProc, BigQuery, Dataflow. Programming languages: Java, plus scripting languages like Python, Shell Script, SQL. 5+ years of experience in IT application delivery with proven experience in agile development methodologies. 1 to 2 years of experience in Google Cloud Platform (GCS, DataProc, BigQuery, Composer, data processing like Dataflow).
Posted 1 month ago
3.0 - 5.0 years
10 - 15 Lacs
Bengaluru
Work from Office
Google Cloud Platform: GCS, DataProc, BigQuery, Dataflow. Programming languages: Java, plus scripting languages like Python, Shell Script, SQL. 5+ years of experience in IT application delivery with proven experience in agile development methodologies. 1 to 2 years of experience in Google Cloud Platform (GCS, DataProc, BigQuery, Composer, data processing like Dataflow).
Posted 1 month ago
4.0 - 7.0 years
15 - 17 Lacs
Hyderabad, Bengaluru
Work from Office
Design, develop, and implement data solutions using AWS Data Stack components such as Glue and Redshift. Write and optimize advanced SQL queries for data extraction, transformation, and analysis. Develop data processing workflows and ETL processes using Python and PySpark.
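As a hedged illustration of the transform step described above (PySpark itself needs a Spark runtime, so plain Python stands in), the filter-map-aggregate shape a PySpark job would express as df.filter(...).select(...).agg(...) looks like this; all records are invented:

```python
from functools import reduce

# Invented order records, as they might be extracted from a warehouse table.
orders = [
    {"id": 1, "status": "shipped", "qty": 2, "unit_price": 10.0},
    {"id": 2, "status": "cancelled", "qty": 1, "unit_price": 99.0},
    {"id": 3, "status": "shipped", "qty": 5, "unit_price": 4.0},
]

# Filter -> map -> reduce: the same dataflow a distributed engine
# parallelises across partitions.
shipped = filter(lambda o: o["status"] == "shipped", orders)
amounts = map(lambda o: o["qty"] * o["unit_price"], shipped)
total = reduce(lambda acc, x: acc + x, amounts, 0.0)
print(total)  # 40.0
```

The point of the functional shape is that each stage is side-effect-free, which is what lets Spark distribute it.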
Posted 1 month ago
3.0 - 8.0 years
5 - 10 Lacs
Bengaluru
Work from Office
Position Summary... What you'll do... Our Team: The International Data Science team at Walmart Global Tech is focused on using the latest research in machine learning, statistics and optimization to solve business problems in assortment, pricing, sourcing, customer, supply chain, and replenishment areas for multiple countries within Walmart's global operations. We mine data, distill insights, extract information, build analytical models, deploy machine learning algorithms, and use the latest algorithms and technology to empower business decision-making. We work with engineers to build reference architectures and machine learning pipelines in a big data ecosystem to productize our solutions. Advanced analytical algorithms driven by our team will help Walmart optimize business operations and business practices, and change the way our customers shop. The data science community at Walmart Global Tech is active in most of the Hack events, utilizing the petabytes of data at our disposal to build some of the coolest ideas. All the work we do at Walmart Labs will eventually benefit our operations and our associates, helping Customers Save Money to Live Better. Your Opportunity: As a Senior Data Scientist for Walmart Global Tech, you'll have the opportunity to: Drive data-derived insights across the wide range of retail divisions by developing advanced statistical models, machine learning algorithms and computational algorithms based on business initiatives. Direct the gathering of data, assessing data validity and synthesizing data into large analytics datasets to support project goals. Utilize big data analytics and advanced data science techniques to identify trends, patterns, and discrepancies in data.
Determine additional data needed to support insights. Build and train statistical models and machine learning algorithms for replication in future projects. Communicate recommendations to business partners and influence future plans based on insights. What You Will Do: Play a key role in solving complex problems pivotal to Walmart's business and drive actionable insights from petabytes of data. Utilize a product mindset to build, scale and deploy holistic data science products after successful prototyping. Demonstrate an incremental solution approach, with an agile and flexible ability to overcome practical problems. Lead small teams and participate in data science project teams by serving as the technical lead for the project. Partner with senior team members to assess customer needs and define business questions. Clearly articulate and present recommendations to business partners, and influence future plans based on insights. Partner and engage with associates in other regions to deliver the best services to customers around the globe. Work with a customer-centric mindset to deliver high-quality, business-driven analytic solutions. Mentor peers and analysts across the division in analytical best practices. Drive innovation in approach, method, practices, process, outcome, delivery, or any component of end-to-end problem solving. Promote and support company policies, procedures, mission, values, and standards of ethics and integrity. Our Ideal Candidate: You are a technically strong and high-performing individual with excellent communication skills, a proven analytical skill set and strong customer focus. You stay updated with the latest research and technology ideas, and have a passion for utilizing innovative ways to solve problems. You are a good storyteller, able to simply articulate the intricacies of your models and explain your results clearly to stakeholders.
You have industry knowledge of the retail space, with a keen interest in keeping up to date on the latest happenings in this space. What You Will Bring: Bachelor's with > 7 years of relevant experience, OR Master's with > 5 years of relevant experience, OR PhD in Computer Science/Statistics/Mathematics with > 3 years of relevant experience. Experience in analyzing complex problems and translating them into data science algorithms. Experience in machine learning, supervised and unsupervised: NLP, classification, data/text mining, multi-modal supervised and unsupervised models, neural networks, deep learning algorithms. Experience in statistical learning: predictive and prescriptive analytics, web analytics, parametric and non-parametric models, regression, time series, dynamic/causal models, statistical learning, guided decisions, topic modeling. Experience with big data analytics - identifying trends, patterns, and outliers in large volumes of data. Embedding generation from training materials; storage and retrieval from vector databases; set-up and provisioning of managed LLM gateways; development of retrieval-augmented-generation-based LLM agents; model selection; iterative prompt engineering and finetuning based on accuracy and user feedback; monitoring and governance. Lead role mentoring multiple junior analysts on approach and results.
Strong experience in Python, PySpark, Google Cloud Platform, Vertex AI, Kubeflow, and model deployment. Strong experience with big data platforms - Hadoop (Hive, MapReduce, HQL, Scala). Additional Preferred Qualifications: Domain knowledge of one or more divisions in retail. Published papers or given talks in leading academic and research journals. Published papers or given talks in data science forums. Hold data-science-related patents. Experience with big data platforms - Hadoop (Hive, Pig, MapReduce, HQL)/Spark. Experience in deep learning, having worked in TensorFlow and Torch. Experience with GPU/CUDA for computational efficiency. About Walmart Global Tech: Flexible, hybrid work. Benefits. Belonging. Minimum Qualifications: Option 1 - Bachelor's degree in Statistics, Economics, Analytics, Mathematics, Computer Science, Information Technology, or a related field and 3 years' experience in an analytics-related field. Option 2 - Master's degree in Statistics, Economics, Analytics, Mathematics, Computer Science, Information Technology, or a related field and 1 year's experience in an analytics-related field. Option 3 - 5 years' experience in an analytics or related field. Preferred Qualifications...
Posted 1 month ago
7.0 - 12.0 years
9 - 14 Lacs
Bengaluru
Work from Office
What you'll do: As a Data Scientist for Walmart, you'll have the opportunity to: Drive data-derived insights across the wide range of retail divisions by developing advanced statistical models, machine learning algorithms and computational algorithms based on business initiatives. Direct the gathering of data, assessing data validity and synthesizing data into large analytics datasets to support project goals. Utilize big data analytics and advanced data science techniques to identify trends, patterns, and discrepancies in data. Determine additional data needed to support insights. Build and train statistical models and machine learning algorithms for replication in future projects. Communicate recommendations to business partners and influence future plans based on insights. What you'll bring: Very good knowledge of the foundations of machine learning and statistics. Hands-on experience in building and maintaining GenAI-powered solutions in production. Experience in analyzing complex problems and translating them into data science algorithms. Experience in machine learning, supervised and unsupervised, and deep learning. Hands-on experience in computer vision and NLP. Experience with big data analytics - identifying trends, patterns, and outliers in large volumes of data. Strong experience in Python with excellent knowledge of data structures. Strong experience with big data platforms - Hadoop (Hive, Pig, MapReduce, HQL, Scala, Spark). Hands-on experience with Git. Experience with SQL, relational databases, and data warehouses. Qualifications: Bachelor's with > 7 years of experience, or Master's degree with > 5 years of experience. Educational qualifications should preferably be in Computer Science/Mathematics/Statistics or a related area. Experience should be relevant to the role. Good to have: Experience in the ecommerce domain. Experience in R and Julia. Demonstrated success on data science platforms like Kaggle.
Posted 1 month ago
7.0 - 12.0 years
9 - 14 Lacs
Gurugram
Work from Office
The Amazon Payment Experience Platform team at Amazon India Development Center, Gurgaon is looking for an SDM to build the next generation of payments platform and product from the ground up. This is a rare opportunity to be part of a team that will be responsible for building a successful, sustainable and strategic business for Amazon, from the ground up! You will get the opportunity to manage Tier-1 platforms like Reminders, SMS Parsing, and Bills and Recharge AutoPay systems. This team will work on a diverse technology stack, from SOA and UI frameworks to big data and ML algorithms. The candidate will work to shape the product and will be actively involved in defining key product features that impact the business. You will be responsible for setting and holding a high software quality bar in a highly technical team of software engineers. 7+ years of engineering experience. 3+ years of engineering team management experience. 8+ years of experience leading the definition and development of multi-tier web services. Knowledge of engineering practices and patterns for the full software/hardware/networks development life cycle, including coding standards, code reviews, source control management, build processes, testing, certification, and livesite operations. Experience partnering with product or program management teams. Experience communicating with users, other technical teams, and senior leadership to collect requirements, describe software product features, technical designs, and product strategy. Experience in recruiting, hiring, mentoring/coaching and managing teams of software engineers to improve their skills and make them more effective product software engineers.
Posted 1 month ago
3.0 - 8.0 years
8 - 18 Lacs
Bengaluru
Work from Office
Sustainext is looking for a Data Analyst with 3+ years of experience in the SaaS space, strong communication skills, and a passion for ESG to join our growing team. The ideal candidate will have hands-on experience working with data management, building ETL pipelines, and creating insights that drive informed decision-making around sustainability. If you are passionate about sustainability, have a strong background in SaaS data analysis, and want to make a tangible impact on the future of ESG, we encourage you to apply! Position Title: Data Analyst. Reporting To: Head Sustainability & Product Design. Location: Bangalore. Role & responsibilities: 1. Data Integration & Management: Collect, organize, and analyze large sets of ESG data from various sources using ETL (Extract, Transform, Load) pipelines. Design, build, and maintain efficient ETL processes to ensure smooth and accurate data flow. Integrate data from various sources, including APIs and internal systems, to support sustainability reporting. 2. Data Visualization and Dashboard Development: Create insightful, clear visualizations using tools like Power BI, Tableau, Apache Superset, Zoho Analytics, or similar to support internal and client-facing reports. Collaborate with stakeholders to gather requirements and ensure dashboards meet user needs. Implement best practices in data visualization to enhance the clarity and impact of reports. 3. Metrics and Data Analysis: Work closely with the ESG research, product development, and client-facing teams to understand their data needs. Conduct detailed data analysis to identify trends, patterns, and insights related to ESG performance. Provide actionable recommendations based on data analysis to support decision-making. Prepare and present analysis findings to internal teams and stakeholders. 4. Quality Assurance and Testing: Prepare comprehensive data reports for internal stakeholders and clients, ensuring accuracy, relevance, and clarity.
Perform thorough testing and validation of dashboards and reports to ensure accuracy and functionality. Continuously improve data quality and reporting processes through regular reviews and updates. Proactively suggest improvements to existing processes, tools, and methodologies. Required Technical Skills and Competencies: 1. Data Analysis and Statistics: Proficiency in statistical analysis methods such as descriptive statistics, hypothesis testing, and regression analysis. Experience using statistical software packages such as R, Python (pandas, NumPy), or SPSS. 2. Data Visualization: Proficiency in data visualization tools such as Tableau, Power BI, or matplotlib for creating insightful and interactive visualizations. Ability to design clear and effective visualizations that communicate complex data patterns and insights. 3. SQL and Database Management: Strong SQL skills for querying and manipulating data from relational databases. Experience working with database management systems (e.g., MySQL, PostgreSQL, SQL Server). 4. Excel and Spreadsheet Analysis: Advanced proficiency in Microsoft Excel for data manipulation, analysis, and reporting. Ability to use Excel functions, pivot tables, and macros to automate repetitive tasks. 5. Domain Knowledge in ESG: Understanding of environmental sustainability, social responsibility, and corporate governance principles. Familiarity with ESG reporting frameworks, sustainability metrics, and industry-specific challenges. 6. Critical Thinking and Problem-Solving: Strong analytical and problem-solving skills, with the ability to identify trends, patterns, and outliers in data. Ability to think critically and draw actionable insights from complex datasets. Desired Candidate Profile: Bachelor's degree in any field. Minimum 3 years of experience in a Data Analyst role. Demonstrated experience working with SaaS platforms. Proficiency with data visualization tools such as Power BI, Tableau, Apache Superset, Zoho Analytics, or similar.
Strong experience building ETL pipelines and managing data workflows. Knowledge of SQL and experience working with large datasets. Experience with API integration and data automation. Experience working with cloud-based data platforms such as AWS, Google Cloud, or Azure. Experience with scripting languages like Python or R for data manipulation. Excellent communication skills to collaborate with non-technical stakeholders and effectively present ESG data insights. Strong problem-solving skills and the ability to turn complex data into actionable insights. High level of accuracy and attention to detail when dealing with large datasets and sustainability metrics. Genuine interest in ESG and sustainability trends, with a desire to contribute to a company focused on driving positive environmental and social impact.
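The outlier work this role alludes to ("identify trends, patterns, and outliers in data") can be illustrated with a standard-library sketch; the metric series and the 2-sigma threshold are invented for the example and are not from the listing:

```python
import statistics

# Invented ESG-style metric series (e.g., monthly energy use in kWh).
energy_kwh = [120, 135, 128, 410, 131, 126]

mean = statistics.mean(energy_kwh)
stdev = statistics.pstdev(energy_kwh)  # population standard deviation

# Flag readings more than 2 standard deviations from the mean --
# the kind of anomaly screen run before a sustainability report goes out.
outliers = [x for x in energy_kwh if abs(x - mean) > 2 * stdev]
print(outliers)  # [410]
```

In practice an analyst would follow the flag with a root-cause check (meter error vs. genuine consumption spike) before correcting or reporting the figure.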
Posted 1 month ago
3.0 - 8.0 years
5 - 10 Lacs
Bengaluru
Work from Office
The Amazon Currency Converter team is responsible for developing the platform and applications used to introduce new and innovative payment methods to customers. We help Amazon expand globally by providing a platform for FX (foreign exchange) and enabling payments in multiple currencies. The technology we build and operate varies widely, ranging from large-scale distributed engineering incorporating the latest from machine learning in the big data space to customer- and mobile-friendly user experiences. We are an agile team, moving quickly in collaboration with our business to bring new features to millions of Amazon customers while having fun and filing new inventions along the way. If you can think big and want to join a fast-moving team breaking new ground at Amazon, we would like to speak with you! Collaborate with experienced cross-disciplinary Amazonians to develop, design, and bring to market innovative devices and services. Design and build innovative technologies in a large distributed computing environment and help lead fundamental changes in the industry. Create solutions to run predictions on distributed systems with exposure to technologies at incredible scale and speed. Build distributed storage, index, and query systems that are scalable, fault-tolerant, low cost, and easy to manage/use. 3+ years of non-internship professional software development experience. 2+ years of non-internship design or architecture (design patterns, reliability and scaling) experience with new and existing systems. Experience programming with at least one software programming language. 3+ years of full software development life cycle experience, including coding standards, code reviews, source control management, build processes, testing, and operations. Bachelor's degree in computer science or equivalent.
Posted 1 month ago
5.0 - 10.0 years
12 - 15 Lacs
Gurugram, Ahmedabad
Work from Office
We are seeking a highly skilled GCP Data Engineer with experience in designing and developing data ingestion frameworks, real-time processing solutions, and data transformation frameworks using open-source tools. The role involves operationalizing open-source data-analytic tools for enterprise use, ensuring adherence to data governance policies, and performing root-cause analysis on data-related issues. The ideal candidate should have a strong understanding of cloud platforms, especially GCP, with hands-on expertise in tools such as Kafka, Apache Spark, Python, Hadoop, and Hive. Experience with data governance and DevOps practices, along with GCP certifications, is preferred.
Posted 1 month ago
5.0 - 9.0 years
12 - 16 Lacs
Mumbai, New Delhi, Bengaluru
Work from Office
We are seeking a skilled ETL Data Tester to join our dynamic team on a 6-month contract. The ideal candidate will focus on implementing ETL processes, creating comprehensive test suites using Python, and validating data quality through advanced SQL queries. The role involves collaborating with data scientists, engineers, and software teams to develop and monitor data tools, frameworks, and infrastructure changes. Proficiency in HiveQL, Spark SQL, and big data concepts is essential. The candidate should also have experience with data testing tools like DBT, iCEDQ, and QuerySurge, along with expertise in Linux/Unix and messaging systems such as Kafka or RabbitMQ. Strong analytical and debugging skills are required, with a focus on continuous automation and integration of data from multiple sources. Location: Chennai, Ahmedabad, Kolkata, Pune, Hyderabad, Remote
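Validating data quality through SQL, as this role describes, usually reduces to comparing source and target under the load rules. A minimal sketch with standard-library Python and SQLite, where the tables, the null-handling rule, and the data are all invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src (id INTEGER, email TEXT);
    INSERT INTO src VALUES (1, 'a@x.com'), (2, 'b@x.com'), (3, NULL);
    CREATE TABLE tgt (id INTEGER, email TEXT);
    INSERT INTO tgt VALUES (1, 'a@x.com'), (2, 'b@x.com');
""")

def count(table, where="1=1"):
    # Helper: COUNT(*) with an optional predicate (identifiers trusted here).
    return conn.execute(f"SELECT COUNT(*) FROM {table} WHERE {where}").fetchone()[0]

# Typical ETL-test checks: completeness versus the source, and the
# null-handling rule (rows with a NULL email are dropped by the load).
row_count_ok = count("tgt") == count("src", "email IS NOT NULL")
no_nulls_in_target = count("tgt", "email IS NULL") == 0
print(row_count_ok, no_nulls_in_target)  # True True
```

Tools like iCEDQ and QuerySurge automate exactly these source-to-target reconciliations at scale; a Python test suite wraps the same checks in assertions.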
Posted 1 month ago
5.0 - 6.0 years
8 - 14 Lacs
Hyderabad
Work from Office
- Architect and optimize distributed data processing pipelines leveraging PySpark for high-throughput, low-latency workloads. - Utilize the Apache big data stack (Hadoop, Hive, HDFS) to orchestrate ingestion, transformation, and governance of massive datasets. - Engineer fault-tolerant, production-grade ETL frameworks ensuring seamless scalability and system resilience. - Interface cross-functionally with Data Scientists and domain experts to translate analytical needs into performant data solutions. - Enforce rigorous data quality controls and lineage mechanisms to uphold auditability and regulatory compliance. - Contribute to core architectural design, implement clean and modular Python/Java code, and drive performance benchmarking at scale. Required Skills : - 5-7 years of experience. - Strong hands-on experience with PySpark for distributed data processing. - Deep understanding of Apache ecosystem (Hadoop, Hive, Spark, HDFS, etc.) - Solid grasp of data warehousing, ETL principles, and data modeling. - Experience working with large-scale datasets and performance optimization. - Familiarity with SQL and NoSQL databases. - Proficiency in Python and basic to intermediate knowledge of Java. - Experience in using version control tools like Git and CI/CD pipelines. Nice-to-Have Skills : - Working experience with Apache NiFi for data flow orchestration. - Experience in building real-time streaming data pipelines. - Knowledge of cloud platforms like AWS, Azure, or GCP. - Familiarity with containerization tools like Docker or orchestration tools like Kubernetes. 
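Fault tolerance in ETL frameworks like those described above often starts with retryable, idempotent stages. A minimal, library-free sketch (the flaky stage is contrived to fail twice; real code would catch narrower exception types and add backoff):

```python
import time

def run_with_retries(stage, attempts=3, delay=0.0):
    """Run a pipeline stage, retrying on failure -- a basic building
    block of fault-tolerant ETL frameworks."""
    last_err = None
    for _ in range(attempts):
        try:
            return stage()
        except Exception as err:  # production code: catch specific errors
            last_err = err
            time.sleep(delay)
    raise last_err

calls = {"n": 0}

def flaky_stage():
    # Simulates a transient failure (e.g., a dropped connection)
    # that clears on the third attempt.
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "loaded"

result = run_with_retries(flaky_stage)
print(result, calls["n"])  # loaded 3
```

Retries are only safe when the stage is idempotent, which is why production frameworks pair them with checkpointing or write-then-swap output patterns.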
If you are interested in the above roles and responsibilities, please share your updated resume along with the following details: Name as per Aadhaar card, Mobile Number, Alternative Mobile, Mail ID, Alternative Mail ID, Date of Birth, Total Experience, Relevant Experience, Current CTC, Expected CTC, Notice Period (LWD), Updated Resume, Holding Offer (if any), Interview Availability, PF/UAN Number, Any Career/Education Gap.
Posted 1 month ago
10.0 - 15.0 years
35 - 40 Lacs
Noida
Work from Office
Description: We are seeking a seasoned Manager – Data Engineering with strong experience in Databricks or the Apache data stack to lead complex data platform implementations. You will be responsible for leading high-impact data engineering engagements for global clients, delivering scalable solutions, and driving digital transformation. Requirements: Required Skills & Experience: • 12–18 years of total experience in data engineering, including 3–5 years in a leadership/managerial role. • Hands-on experience in Databricks OR core Apache stack – Spark, Kafka, Hive, Airflow, NiFi, etc. • Expertise in one or more cloud platforms: AWS, Azure, or GCP – ideally with Databricks on cloud. • Strong programming skills in Python, Scala, and SQL. • Experience in building scalable data architectures, delta lakehouses, and distributed data processing. • Familiarity with modern data governance, cataloging, and data observability tools. • Proven experience managing delivery in an onshore-offshore or hybrid model. • Strong communication, stakeholder management, and team mentoring capabilities. Job Responsibilities: Key Responsibilities: • Lead the architecture, development, and deployment of modern data platforms using Databricks, Apache Spark, Kafka, Delta Lake, and other big data tools. • Design and implement data pipelines (batch and real-time), data lakehouses, and large-scale ETL frameworks. • Own delivery accountability for data engineering programs across BFSI, telecom, healthcare, or manufacturing clients. • Collaborate with global stakeholders, product owners, architects, and business teams to understand requirements and deliver data-driven outcomes. • Ensure best practices in DevOps, CI/CD, infrastructure-as-code, data security, and governance. • Manage and mentor a team of 10–25 engineers, conducting performance reviews, capability building, and coaching. 
• Support presales activities, including solutioning, technical proposals, and client workshops.
What We Offer:
Exciting Projects: We focus on industries like high-tech, communication, media, healthcare, retail, and telecom. Our customer list is full of fantastic global brands and leaders who love what we build for them.
Collaborative Environment: You can expand your skills by collaborating with a diverse team of highly talented people in an open, laid-back environment, or even abroad in one of our global centers or client facilities!
Work-Life Balance: GlobalLogic prioritizes work-life balance, which is why we offer flexible work schedules, opportunities to work from home, and paid time off and holidays.
Professional Development: Our dedicated Learning & Development team regularly organizes communication skills training (GL Vantage, Toast Master), stress management programs, professional certifications, and technical and soft-skill trainings.
Excellent Benefits: We provide our employees with competitive salaries, family medical insurance, Group Term Life Insurance, Group Personal Accident Insurance, NPS (National Pension Scheme), periodic health awareness programs, extended maternity leave, annual performance bonuses, and referral bonuses.
Fun Perks: We want you to love where you work, which is why we host sports events and cultural activities, offer food at subsidized rates, and throw corporate parties. Our vibrant offices also include dedicated GL Zones, rooftop decks, and a GL Club where you can have coffee or tea with your colleagues over a game, plus discounts for popular stores and restaurants!
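The batch ETL work this role describes can be illustrated with a minimal extract-transform-load sketch. This is illustrative only: the sample data and the `sales` table are hypothetical, and sqlite3 stands in for the Spark/Databricks warehouse a real pipeline would target.

```python
import csv
import io
import sqlite3

# Extract: read raw records from a CSV source (hypothetical sample data).
raw_csv = io.StringIO("id,amount\n1,100\n2,-5\n3,250\n")
rows = list(csv.DictReader(raw_csv))

# Transform: drop invalid records (non-positive amounts) and cast types.
clean = [(int(r["id"]), int(r["amount"])) for r in rows if int(r["amount"]) > 0]

# Load: write the cleaned batch into a warehouse table (sqlite3 as stand-in).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)", clean)

total = conn.execute("SELECT SUM(amount) FROM sales").fetchone()[0]
print(total)  # 350: the invalid row (id=2) was dropped in the transform step
```

The same extract/transform/load separation scales up directly: in production each stage becomes a Spark job or Delta Lake write rather than an in-memory sqlite call.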
Posted 1 month ago
5.0 - 8.0 years
7 - 12 Lacs
Chennai
Work from Office
Role Purpose: The purpose of this role is to provide solutions and bridge the gap between technology and business know-how to deliver any client solution.
Requirements:
- Experience of full-stack automation in the banking domain.
- Excellent in automation testing concepts, principles, and modules.
- Excellent in Core Java concepts and implementation.
- Proficient in SQL, with experience working with multiple databases: Oracle, PostgreSQL, and Hive.
- Experience automating middleware systems involving messages and file systems.
- Automation testing experience using Cucumber and BDD.
- Exposure to big data platforms and technologies.
- Strong problem-solving skills and logical thinking.
- Self-starter with an appetite to explore and learn new technologies.
- Good communication skills.
- Banking domain knowledge.
Skills:
- Programming Languages: Core Java; basic UNIX scripting for scheduling tasks
- Database: basic SQL queries
- Automation: Selenium/BDD/Cucumber
- Configuration Management: Git/Bitbucket or equivalent
Deliver (No. / Performance Parameter / Measure):
1. Contribution to customer projects – Quality, SLA, ETA, no. of tickets resolved, problems solved, # of change requests implemented, zero customer escalations, CSAT
2. Automation – Process optimization, reduction in process/steps, reduction in no. of tickets raised
3. Skill upgradation – # of trainings & certifications completed, # of papers and articles written in a quarter
Mandatory Skills: Selenium. Experience: 5-8 Years.
Posted 1 month ago
3.0 - 6.0 years
8 - 17 Lacs
Chennai, Bengaluru, Mumbai (All Areas)
Work from Office
Role & responsibilities
Duration: 6 months + extendable
Job Locations: Any Protiviti
Preferred/mandatory Skills:
Role Focus: Support UAT and data validation during migration from legacy Cloudera to a modern on-prem big data platform.
Core Tasks: Execute UAT, validate data pipelines (Hive, Impala, Spark, CDSW), perform quality checks, write SQL queries, and document test cases.
Must-Have Skills: 3+ years in big data UAT/QA, strong SQL, experience with Cloudera tools, data validation, and platform migration exposure.
Nice to Have: PySpark, Jupyter, data governance knowledge, telecom or large-enterprise experience.
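The migration-validation task above typically reduces to reconciling a source table against its migrated copy. A minimal sketch, assuming a hypothetical `customers` table, with in-memory sqlite3 standing in for the legacy Cloudera cluster and the new platform (real checks would run via Hive/Impala SQL):

```python
import sqlite3

# Two in-memory databases stand in for the legacy cluster and the new platform.
legacy = sqlite3.connect(":memory:")
target = sqlite3.connect(":memory:")

for db in (legacy, target):
    db.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
    db.executemany("INSERT INTO customers VALUES (?, ?)",
                   [(1, "asha"), (2, "ravi"), (3, "meera")])

def profile(db):
    """Row count plus an order-independent aggregate acts as a cheap checksum."""
    count = db.execute("SELECT COUNT(*) FROM customers").fetchone()[0]
    id_sum = db.execute("SELECT SUM(id) FROM customers").fetchone()[0]
    return count, id_sum

# The migrated table reconciles when both profiles match.
match = profile(legacy) == profile(target)
print(match)  # True
```

Row-count and aggregate checks are only the first tier; column-level comparisons and sampled row diffs usually follow once the cheap checks pass.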
Posted 1 month ago
5.0 - 8.0 years
9 - 14 Lacs
Hyderabad
Work from Office
Role Purpose: The purpose of the role is to support process delivery by ensuring daily performance of the Production Specialists, resolving technical escalations, and developing technical capability within the Production Specialists.
Do:
Oversee and support the process by reviewing daily transactions on performance parameters:
- Review the performance dashboard and the scores for the team
- Support the team in improving performance parameters by providing technical support and process guidance
- Record, track, and document all queries received, problem-solving steps taken, and total successful and unsuccessful resolutions
- Ensure standard processes and procedures are followed to resolve all client queries
- Resolve client queries as per the SLAs defined in the contract
- Develop an understanding of the process/product for the team members to facilitate better client interaction and troubleshooting
- Document and analyze call logs to spot the most frequent trends and prevent future problems
- Identify red flags and escalate serious client issues to the Team Leader in cases of untimely resolution
- Ensure all product information and disclosures are given to clients before and after the call/email requests
- Avoid legal challenges by monitoring compliance with service agreements
Handle technical escalations through effective diagnosis and troubleshooting of client queries:
- Manage and resolve technical roadblocks/escalations as per SLA and quality requirements
- If unable to resolve an issue, escalate it to TA & SES in a timely manner
- Provide product support and resolution to clients by performing question diagnosis and guiding users through step-by-step solutions
- Troubleshoot all client queries in a user-friendly, courteous, and professional manner
- Offer alternative solutions to clients (where appropriate) with the objective of retaining customers' and clients' business
- Organize ideas and effectively communicate oral messages appropriate to listeners and situations
- Follow up and make scheduled callbacks to customers to record feedback and ensure compliance with contract SLAs
Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client:
- Mentor and guide Production Specialists on improving technical knowledge
- Collate trainings to be conducted as triage to bridge the skill gaps identified through interviews with the Production Specialists
- Develop and conduct trainings (triages) within products for Production Specialists as per target
- Inform the client about the triages being conducted
- Undertake product trainings to stay current with product features, changes, and updates
- Enroll in product-specific and any other trainings per client requirements/recommendations
- Identify and document the most common problems and recommend appropriate resolutions to the team
- Update job knowledge by participating in self-learning opportunities and maintaining personal networks
Deliver (No. / Performance Parameter / Measure):
1. Process – No. of cases resolved per day, compliance to process and quality standards, meeting process-level SLAs, Pulse score, customer feedback, NSAT/ESAT
2. Team Management – Productivity, efficiency, absenteeism
3. Capability Development – Triages completed, Technical Test performance
Mandatory Skills: DataBricks - Data Engineering. Experience: 5-8 Years.
Posted 1 month ago
5.0 - 8.0 years
7 - 12 Lacs
Bengaluru
Work from Office
Role Purpose: The purpose of this role is to provide solutions and bridge the gap between technology and business know-how to deliver any client solution.
Requirements:
- Experience of full-stack automation in the banking domain.
- Excellent in automation testing concepts, principles, and modules.
- Excellent in Core Java concepts and implementation.
- Proficient in SQL, with experience working with multiple databases: Oracle, PostgreSQL, and Hive.
- Experience automating middleware systems involving messages and file systems.
- Automation testing experience using Cucumber and BDD.
- Exposure to big data platforms and technologies.
- Strong problem-solving skills and logical thinking.
- Self-starter with an appetite to explore and learn new technologies.
- Good communication skills.
- Banking domain knowledge.
Skills:
- Programming Languages: Core Java; basic UNIX scripting for scheduling tasks
- Database: basic SQL queries
- Automation: Selenium/BDD/Cucumber
- Configuration Management: Git/Bitbucket or equivalent
Deliver (No. / Performance Parameter / Measure):
1. Contribution to customer projects – Quality, SLA, ETA, no. of tickets resolved, problems solved, # of change requests implemented, zero customer escalations, CSAT
2. Automation – Process optimization, reduction in process/steps, reduction in no. of tickets raised
3. Skill upgradation – # of trainings & certifications completed, # of papers and articles written in a quarter
Mandatory Skills: Selenium. Experience: 5-8 Years.
Posted 1 month ago
3.0 - 5.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Role Purpose: The purpose of this role is to prepare test cases and perform testing of the product/platform/solution to be deployed at a client end, and ensure it meets 100% quality assurance parameters.
Requirements:
- Experience of full-stack automation in the banking domain.
- Excellent in automation testing concepts, principles, and modules.
- Excellent in Core Java concepts and implementation.
- Proficient in SQL, with experience working with multiple databases: Oracle, PostgreSQL, and Hive.
- Experience automating middleware systems involving messages and file systems.
- Automation testing experience using Cucumber and BDD.
- Exposure to big data platforms and technologies.
- Strong problem-solving skills and logical thinking.
- Self-starter with an appetite to explore and learn new technologies.
- Good communication skills.
- Must have banking domain knowledge.
Skills:
- Programming Languages: Core Java; basic UNIX scripting for scheduling tasks
- Database: basic SQL queries
- Automation: Selenium/BDD/Cucumber
- Configuration Management: Git/Bitbucket or equivalent
Deliver (No. / Performance Parameter / Measure):
1. Understanding the test requirements and test case design of the product – Ensure error-free testing solutions, minimum process exceptions, 100% SLA compliance, # of automation done using VB, macros
2. Execute test cases and reporting – Testing efficiency & quality, on-time delivery, troubleshoot queries within TAT, CSAT score
Mandatory Skills: Selenium. Experience: 3-5 Years.
Posted 1 month ago
6.0 - 11.0 years
35 - 50 Lacs
Bengaluru
Hybrid
Position: MLOps Engineer / Lead
Experience: 5 to 15 years
Location: Bangalore
Job description
Key Responsibilities:
- Data Architecture Design: Lead the design and implementation of scalable data architectures that support both data processing and machine learning workflows.
- ETL Development: Develop and manage ETL (Extract, Transform, Load) processes to ensure efficient data ingestion, cleaning, and transformation for machine learning applications.
- Model Development: Collaborate with data scientists to design, build, and deploy machine learning models that address business needs and improve operational efficiency.
- Performance Monitoring: Monitor the performance of data pipelines and machine learning models in production, making necessary adjustments to optimize performance and accuracy.
- Team Leadership: Mentor and guide a team of data engineers and machine learning engineers, fostering a collaborative environment focused on continuous improvement.
- Cross-Functional Collaboration: Work closely with product managers, software engineers, UI/UX, and other stakeholders to align data initiatives with business objectives.
Required Skills:
- Technical Proficiency: Strong programming skills in languages such as Python; experience with big data technologies is essential.
- Machine Learning Knowledge: Understanding of machine learning algorithms, frameworks, and best practices for model development and deployment.
- Data Management Expertise: Proficiency in database management systems (SQL and NoSQL), data modeling, and ETL tools.
- Cloud Computing Experience: Familiarity with cloud platforms (AWS, Google Cloud, Azure) for deploying data solutions and machine learning models.
- Leadership Skills: Excellent communication and interpersonal skills to lead teams effectively and collaborate across departments.
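The performance-monitoring responsibility above can be sketched with a minimal rolling-accuracy monitor for a deployed model. This is a simplified stand-in: the window size and alert threshold are hypothetical, and production monitoring would feed from a pipeline rather than a local loop.

```python
from collections import deque

def make_monitor(window=100, threshold=0.9):
    """Track rolling accuracy of a deployed model; flag when it degrades."""
    outcomes = deque(maxlen=window)  # keeps only the last `window` outcomes

    def record(correct):
        outcomes.append(1 if correct else 0)
        accuracy = sum(outcomes) / len(outcomes)
        return accuracy, accuracy < threshold  # (rolling accuracy, alert?)

    return record

# Hypothetical stream of prediction outcomes from a model in production.
record = make_monitor(window=4, threshold=0.75)
for correct in (True, True, False, True):
    accuracy, alert = record(correct)
print(accuracy, alert)  # 0.75 False: 3 of the last 4 predictions were correct
```

The closure-over-deque design keeps the monitor stateless from the caller's perspective; in practice the same logic would sit behind a metrics dashboard with alerting.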
Posted 1 month ago
6.0 - 11.0 years
8 - 17 Lacs
Hyderabad, Bengaluru
Work from Office
Working Hours: 2 PM to 11 PM IST
Mid-Level ML Engineers / Data Scientist Role (4-5 years of experience):
- Experience processing, filtering, and presenting large quantities (100K to millions of rows) of data using Pandas and PySpark
- Experience with statistical analysis, data modeling, machine learning, optimization, regression modeling and forecasting, time series analysis, data mining, and demand modeling
- Experience applying various machine learning techniques and understanding the key parameters that affect their performance
- Experience with predictive analytics (e.g., forecasting, time series, neural networks) and prescriptive analytics (e.g., stochastic optimization, bandits, reinforcement learning)
- Experience with Python and Python packages like NumPy and Pandas, and deep learning frameworks like TensorFlow, PyTorch, and Keras
- Experience in the big data ecosystem with frameworks like Spark and PySpark, and unstructured DBs like Elasticsearch and MongoDB
- Proficiency with Tableau or other web-based interfaces to create graphic-rich, customizable plots, charts, data maps, etc.
- Able to write SQL scripts for analysis and reporting (Redshift, SQL, MySQL)
- Previous experience in an ML, data scientist, or optimization engineer role with a large technology company
- Experience in an operational environment developing, fast-prototyping, piloting, and launching analytic products
- Ability to develop experimental and analytic plans for data modeling processes, use of strong baselines, and ability to accurately determine cause-and-effect relations
- Experience in creating data-driven visualizations to describe an end-to-end system
- Excellent written and verbal communication skills; the role requires effective communication with colleagues from computer science, operations research, and business backgrounds
- Bachelor's or Master's in Artificial Intelligence, Computer Science, Statistics, Applied Math, or a related field
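The forecasting and demand-modeling skills listed above can be illustrated with the simplest time-series baseline, a moving-average forecast. This is a hedged stdlib sketch: the demand figures are hypothetical, and real work at this scale would run in Pandas or PySpark rather than plain lists.

```python
def moving_average_forecast(series, window=3):
    """Forecast the next value as the mean of the last `window` observations."""
    if len(series) < window:
        raise ValueError("series shorter than window")
    return sum(series[-window:]) / window

demand = [120, 130, 125, 140, 135]  # hypothetical monthly demand figures
forecast = moving_average_forecast(demand, window=3)
print(forecast)  # mean of the last three observations: 125, 140, 135
```

Baselines like this matter because the role calls for "use of strong baselines": a sophisticated model only earns its complexity if it beats the moving average on held-out data.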
Posted 1 month ago
4.0 - 9.0 years
0 - 1 Lacs
Bengaluru
Hybrid
Hiring Data Engineers – Bangalore / Hybrid. FTE / C to H. Immediate joiners required; notice period within 30 days. Interested candidates can share their CV.
Posted 1 month ago