4.0 - 8.0 years
11 - 16 Lacs
Mumbai, Chennai, Bengaluru
Work from Office
Your Role: Should have worked extensively on Metadata, Rules, and Member Lists in HFM. VB Scripting knowledge is mandatory. Understand and communicate the consequences of changes made. Should have worked on Monthly/Quarterly/Yearly Validations. Should have worked on ICP accounts, Journals, and Intercompany Reports. Should have worked on Data Forms and Data Grids. Should be able to work on FDMEE Mappings and be fluent in FDMEE. Should have worked on Financial Reporting Studio. Your Profile: Performing UAT with business on the CRs. Should be able to resolve business users' HFM queries (if any). Agile process knowledge will be an added advantage. What you'll love about working here: You can shape your career with us. We offer a range of career paths and internal opportunities within the Capgemini group. You will also get personalized career guidance from our leaders. You will get comprehensive wellness benefits including health checks, telemedicine, insurance with top-ups, elder care, partner coverage, or new-parent support via flexible work. You will have the opportunity to learn on one of the industry's largest digital learning platforms, with access to 250,000+ courses and numerous certifications. We're committed to ensuring that people of all backgrounds feel encouraged and have a sense of belonging at Capgemini. You are valued for who you are, and you can bring your original self to work. Every Monday, kick off the week with a musical performance by our in-house band, The Rubber Band. Also get to participate in internal sports events, yoga challenges, or marathons. At Capgemini, you can work on cutting-edge projects in tech and engineering with industry leaders or create solutions to overcome societal and environmental challenges. About Capgemini. Location: Bengaluru, Chennai, Mumbai, Pune
Posted 1 month ago
0 years
0 Lacs
India
On-site
Job Description - Key Skills Strong Core Java (8+) & OOP fundamentals Apache Spark (RDD/DataFrame API, Spark SQL, performance tuning) Advanced SQL for large‑scale data manipulation and analytics Version control (Git), CI/CD pipelines, Unix/Linux scripting Good understanding of data‑modelling and ETL best practices Nice to have: experience with Hadoop/Hive, Kafka, cloud data services (AWS EMR, Databricks, GCP Dataproc, or Azure HDInsight), containerization (Docker/K8s). Responsibilities Design & Development: Build scalable data‑processing jobs in Java and Spark; refactor legacy MapReduce/SQL workloads into efficient Spark applications. Data Engineering: Develop robust ETL/ELT pipelines, implement data quality checks, and maintain metadata/catalogue integration. Performance Optimisation: Profile jobs, tune Spark configurations, and optimise SQL queries to meet SLAs on throughput and latency. Code Quality: Write clean, modular, and test‑covered code; enforce coding standards through peer reviews and automation in CI/CD. Collaboration: Liaise with architects, analysts, and cloud engineers to integrate data solutions with microservices, dashboards, and ML workflows. Monitoring & Support: Implement logging/alerting, troubleshoot production issues, and participate in on‑call rotations (as needed). Continuous Improvement: Evaluate new frameworks, contribute to PoCs, and share best practices to uplift team capability.
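For orientation, a minimal PySpark sketch of the kind of Spark SQL/DataFrame tuning work this role describes (the posting itself emphasizes Java); the S3 paths, table layout, and column names are illustrative assumptions, not part of the listing.

```python
# Minimal sketch: broadcast-join a small dimension onto a large fact table,
# aggregate with the DataFrame API, and write a partitioned Parquet output.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("orders-enrichment")
         .config("spark.sql.shuffle.partitions", "200")  # tuned for the data volume
         .getOrCreate())

orders = spark.read.parquet("s3://example-bucket/orders/")        # large fact table
countries = spark.read.parquet("s3://example-bucket/countries/")  # small dimension

# Broadcast the small side to avoid a shuffle of the large fact table.
enriched = orders.join(F.broadcast(countries), on="country_code", how="left")

daily_revenue = (enriched
                 .groupBy("country_name", F.to_date("order_ts").alias("order_date"))
                 .agg(F.sum("amount").alias("revenue")))

(daily_revenue.write
              .mode("overwrite")
              .partitionBy("order_date")
              .parquet("s3://example-bucket/marts/daily_revenue/"))
```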
Posted 1 month ago
5.0 - 10.0 years
15 - 30 Lacs
Bengaluru
Work from Office
About the Team When 5% of Indian households shop with us, it's important to build resilient systems to manage millions of orders every day. We've done this with zero downtime! Sounds impossible? Well, that's the kind of engineering muscle that has helped Meesho become the e-commerce giant that it is today. We value speed over perfection and see failures as opportunities to become better. We've taken steps to inculcate a strong Founder's Mindset across our engineering teams, making us grow and move fast. We place special emphasis on the continuous growth of each team member - and we do this with regular 1-1s and open communication. As a Database Engineer II, you will be part of self-starters who thrive on teamwork and constructive feedback. We know how to party as hard as we work! If we aren't building unparalleled tech solutions, you can find us debating the plot points of our favorite books and games or even gossiping over chai. So, if a day filled with building impactful solutions with a fun team sounds appealing to you, join us. About the Role As a Database Engineer II, you'll establish and implement the best NoSQL database engineering practices proactively. You'll have opportunities to work on different NoSQL technologies on a large scale. You'll also work closely with other engineering teams and establish seamless collaborations within the organization. Being proficient in emerging technologies and the ability to work successfully with a team are key to success in this role. What you will do Manage, maintain, and monitor a multitude of Relational/NoSQL database clusters, ensuring obligations to SLAs. Manage both in-house and SaaS solutions in the public cloud (or 3rd party). Diagnose, mitigate, and communicate database-related issues to relevant stakeholders. Design and implement best practices for planning, provisioning, tuning, upgrading, and decommissioning of database clusters. Understand the cost optimization aspects of such tools/software and implement cost control mechanisms along with continuous improvement. Advise and support product, engineering, and operations teams. Maintain general backup/recovery/DR of data solutions. Work with the engineering and operations teams to automate new approaches for scalability, reliability, and performance. Perform R&D on new features and innovative solutions. Participate in on-call rotations. What you will need 5+ years' experience in provisioning and managing Relational/NoSQL databases. Proficiency in two or more of: MySQL, PostgreSQL, Bigtable, Elasticsearch, MongoDB, Redis, ScyllaDB. Proficiency in the Python programming language. Experience with deployment orchestration, automation, and security configuration management (Jenkins, Terraform, Ansible). Hands-on experience with Amazon Web Services (AWS)/Google Cloud Platform (GCP). Comfortable working in Linux/Unix environments. Knowledge of the TCP/IP stack, load balancers, and networking. Proven ability to drive projects to completion. A degree in computer science, software engineering, information technology, or related fields will be an advantage.
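As a rough illustration of the Python-based database monitoring this role calls for, the sketch below pulls a replication summary from a Redis node with redis-py; the host name is an assumption for the example, not Meesho's actual setup.

```python
# Minimal sketch: summarize replication state of a Redis node for alerting.
import redis

def replication_summary(host: str, port: int = 6379) -> dict:
    r = redis.Redis(host=host, port=port, socket_timeout=2)
    info = r.info("replication")  # dict of replication fields from INFO
    return {
        "role": info.get("role"),
        "connected_replicas": info.get("connected_slaves", 0),
        "master_repl_offset": info.get("master_repl_offset"),
    }

if __name__ == "__main__":
    # Alerting thresholds and paging would hang off values like these.
    print(replication_summary("redis.internal.example"))
```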
Posted 1 month ago
5.0 - 10.0 years
22 - 27 Lacs
Pune, Bengaluru
Work from Office
Build ETL jobs using Fivetran and dbt for our internal projects and for customers that use various platforms like Azure, Salesforce, and AWS technologies. Build out data lineage artifacts to ensure all current and future systems are properly documented. Required Candidate Profile: Experience with strong proficiency in SQL query/development skills. Develop ETL routines that manipulate and transfer large volumes of data and perform quality checks. Experience in the healthcare industry with PHI/PII.
Posted 1 month ago
2.0 - 6.0 years
6 - 10 Lacs
Nagpur
Work from Office
Primine Software Private Limited is looking for a Big Data Engineer to join our dynamic team and embark on a rewarding career journey. Develop and maintain big data solutions. Collaborate with data teams and stakeholders. Conduct data analysis and processing. Ensure compliance with big data standards and best practices. Prepare and maintain big data documentation. Stay updated with big data trends and technologies.
Posted 1 month ago
6.0 - 9.0 years
5 - 9 Lacs
Hyderabad
Work from Office
We are looking for a highly skilled Data Engineer with 6 to 9 years of experience to join our team at BlackBaud, located in [location to be specified]. The ideal candidate will have a strong background in data engineering and excellent problem-solving skills. Roles and Responsibility Design, develop, and implement data pipelines and architectures to support business intelligence and analytics. Collaborate with cross-functional teams to identify and prioritize project requirements. Develop and maintain large-scale data systems, ensuring scalability, reliability, and performance. Troubleshoot and resolve complex technical issues related to data engineering projects. Participate in code reviews and contribute to the improvement of the overall code quality. Stay up-to-date with industry trends and emerging technologies in data engineering. Job Requirements Strong understanding of data modeling, database design, and data warehousing concepts. Experience with big data technologies such as Hadoop, Spark, and NoSQL databases. Excellent programming skills in languages like Java, Python, or Scala. Strong analytical and problem-solving skills, with attention to detail and ability to work under pressure. Good communication and collaboration skills, with the ability to work effectively in a team environment. Ability to adapt to changing priorities and deadlines in a fast-paced IT Services & Consulting environment.
Posted 1 month ago
4.0 years
0 Lacs
Hyderābād
On-site
About the Role We are seeking a strong and passionate data engineer with experience in large-scale system implementation, with a focus on complex data pipelines. The candidate should be able to design and drive large projects from inception to production. The right person will work with cross-functional business and technology partners to gather requirements and translate them into a data engineering roadmap. Must be a great communicator, standout teammate, and a technology powerhouse. What the Candidate Will Need / Bonus Points - What the Candidate Will Do - Collaborate with engineering/product/analyst teams across tech sites to collectively accomplish OKRs to take Uber forward. Enrich data layers to effectively deal with the next generation of products that result from Uber's big bold bets. Design and build data pipelines to schedule and orchestrate a variety of tasks such as extract, cleanse, transform, enrich, and load data as per the business needs. - Basic Qualifications - 4+ years of total technical software engineering experience in one or more of the following areas: programming and scripting languages (e.g., Python, SQL, Java/Scala); big data frameworks (e.g., Spark, Flink, MR, Presto), data modeling, and writing ETLs; designing end-to-end data solutions and architecture. - Preferred Qualifications - Strong SQL skills. Strong in Data Warehousing and Data Modelling concepts. Hands-on experience in the Hadoop tech stack: HDFS, Hive, Oozie, Airflow, MapReduce, Spark. Programming languages: Python, Java, Scala, etc. Experience in building ETL data pipelines. Performance troubleshooting and tuning.
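A bare-bones Airflow DAG sketch of the extract/cleanse/transform/load orchestration described above, assuming Airflow 2.x; the DAG id, schedule, and task bodies are hypothetical placeholders rather than anything from the posting.

```python
# Minimal sketch: four-step ELT DAG with a linear dependency chain.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**_):    print("pull raw events from the source system")
def cleanse(**_):    print("drop duplicates, fix types, quarantine bad rows")
def transform(**_):  print("join, enrich, and aggregate into the target model")
def load(**_):       print("write the result to the warehouse / data lake")

with DAG(
    dag_id="example_daily_events",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_cleanse = PythonOperator(task_id="cleanse", python_callable=cleanse)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_cleanse >> t_transform >> t_load
```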
Posted 1 month ago
0 years
0 Lacs
India
On-site
Key Responsibilities: Lead and manage a backend/distributed systems team and third-party resources Build and optimize Java, MapReduce, Hive, and Spark jobs Work with a wide range of Hadoop tools: HDFS, Pig, Hive, HBase, Sqoop, Flume Implement and manage real-time stream processing using Spark Streaming, Storm Develop dimensional data models and perform advanced SQL tuning Analyze source data integrity and ensure accurate data ingestion Build dashboards and BI solutions using best practices Collaborate with internal teams and vendors to prioritize and deliver data initiatives Deploy, monitor, and audit big data models and workflows Technical Skills: Strong hands-on with Hadoop (Cloudera), Hive, Pig, Impala, Spark Programming in Java, Python, Scala, and scripting (Linux, Ruby, PHP) Experience with NoSQL and SQL databases: Cassandra, Postgres Familiarity with cloud services (Azure preferred) Exposure to machine learning/data science tools is a plus
Posted 1 month ago
5.0 - 10.0 years
22 - 27 Lacs
Chennai, Mumbai (All Areas)
Work from Office
Build ETL jobs using Fivetran and dbt for our internal projects and for customers that use various platforms like Azure, Salesforce, and AWS technologies. Build out data lineage artifacts to ensure all current and future systems are properly documented. Required Candidate Profile: Experience with strong proficiency in SQL query/development skills. Develop ETL routines that manipulate and transfer large volumes of data and perform quality checks. Experience in the healthcare industry with PHI/PII.
Posted 1 month ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About The Role We are seeking a strong and passionate data engineer with experience in large-scale system implementation, with a focus on complex data pipelines. The candidate should be able to design and drive large projects from inception to production. The right person will work with cross-functional business and technology partners to gather requirements and translate them into a data engineering roadmap. Must be a great communicator, standout teammate, and a technology powerhouse. What The Candidate Will Need / Bonus Points ---- What the Candidate Will Do ---- Collaborate with engineering/product/analyst teams across tech sites to collectively accomplish OKRs to take Uber forward. Enrich data layers to effectively deal with the next generation of products that result from Uber's big bold bets. Design and build data pipelines to schedule and orchestrate a variety of tasks such as extract, cleanse, transform, enrich, and load data as per the business needs. Basic Qualifications 4+ years of total technical software engineering experience in one or more of the following areas: programming and scripting languages (e.g., Python, SQL, Java/Scala); big data frameworks (e.g., Spark, Flink, MR, Presto), data modeling, and writing ETLs; designing end-to-end data solutions and architecture. Preferred Qualifications Strong SQL skills. Strong in Data Warehousing and Data Modelling concepts. Hands-on experience in the Hadoop tech stack: HDFS, Hive, Oozie, Airflow, MapReduce, Spark. Programming languages: Python, Java, Scala, etc. Experience in building ETL data pipelines. Performance troubleshooting and tuning.
Posted 1 month ago
4.0 - 8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description We are looking for a Big Data Engineer who will work on the collecting, storing, processing, and analyzing of huge sets of data. The primary focus will be on choosing optimal solutions to use for these purposes, then maintaining, implementing, and monitoring them. You will also be responsible for integrating them with the architecture used across the company. Total Experience: 4 - 8 Years Responsibilities Selecting and integrating any Big Data tools and frameworks required to provide requested capabilities Implementing data wrangling, scraping, and cleaning using Java or Python Strong experience with data structures. Extensive work on API integration. Monitoring performance and advising any necessary infrastructure changes Defining data retention policies Skills And Qualifications Proficient understanding of distributed computing principles Proficient in Java or Python and some machine learning Proficiency with Hadoop v2, MapReduce, HDFS, PySpark, Spark Experience with building stream-processing systems, using solutions such as Storm or Spark-Streaming Good knowledge of Big Data querying tools, such as Pig, Hive, and Impala Experience with Spark Experience with integration of data from multiple data sources Experience with NoSQL databases, such as HBase, Cassandra, MongoDB Knowledge of various ETL techniques and frameworks, such as Flume Experience with various messaging systems, such as Kafka or RabbitMQ Experience with Big Data ML toolkits, such as Mahout, SparkML, or H2O Good understanding of Lambda Architecture, along with its advantages and drawbacks Experience with Cloudera/MapR/Hortonworks Education: Bachelor's degree/University degree or equivalent experience This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required. ------------------------------------------------------ Job Family Group: Technology ------------------------------------------------------ Job Family: Applications Development ------------------------------------------------------ Time Type: Full time ------------------------------------------------------ Most Relevant Skills Please see the requirements listed above. ------------------------------------------------------ Other Relevant Skills For complementary skills, please see above and/or contact the recruiter. ------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
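A small pandas sketch of the data wrangling and cleaning step mentioned above; the file names, column names, and rules are invented for the example, not part of the posting.

```python
# Minimal sketch: dedupe, coerce types, drop unparseable rows, sanity-check values.
import pandas as pd

raw = pd.read_csv("raw_transactions.csv")

cleaned = (
    raw.drop_duplicates(subset=["txn_id"])  # dedupe on the business key
       .assign(
           txn_ts=lambda df: pd.to_datetime(df["txn_ts"], errors="coerce"),
           amount=lambda df: pd.to_numeric(df["amount"], errors="coerce"),
       )
       .dropna(subset=["txn_id", "txn_ts", "amount"])  # discard unparseable rows
       .query("amount >= 0")                           # basic sanity check
)

cleaned.to_parquet("clean_transactions.parquet", index=False)
print(f"kept {len(cleaned)} of {len(raw)} rows")
```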
Posted 1 month ago
5.0 - 10.0 years
4 - 8 Lacs
Noida
Work from Office
We are looking for a skilled Data Software Engineer with 5 to 12 years of experience in Big Data and related technologies. The ideal candidate will have expertise in distributed computing principles, Apache Spark, and hands-on programming with Python. Roles and Responsibility Design and implement Big Data solutions using Apache Spark and other relevant technologies. Develop and maintain large-scale data processing systems, including stream-processing systems. Collaborate with cross-functional teams to integrate data from multiple sources, such as RDBMS, ERP, and files. Optimize performance of Spark jobs and troubleshoot issues. Lead a team efficiently and contribute to the development of Big Data solutions. Experience with native cloud data services, such as AWS or Azure Databricks. Job Requirements Expert-level understanding of distributed computing principles and Apache Spark. Hands-on programming experience with Python and proficiency with Hadoop v2, MapReduce, HDFS, and Sqoop. Experience with building stream-processing systems using technologies like Apache Storm or Spark-Streaming. Good understanding of Big Data querying tools, such as Hive and Impala. Knowledge of ETL techniques and frameworks, along with experience with NoSQL databases like HBase, Cassandra, and MongoDB. Ability to work in an Agile environment and lead a team efficiently. Strong understanding of SQL queries, joins, stored procedures, and relational schemas. Experience with integrating data from multiple sources, including RDBMS (SQL Server, Oracle), ERP, and files.
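A compact Spark Structured Streaming sketch of the stream-processing work described above; the Kafka source, topic, and event schema are assumptions for illustration and are not specified in the posting.

```python
# Minimal sketch: read JSON order events from Kafka, aggregate per city, print to console.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("orders-stream").getOrCreate()

schema = StructType([
    StructField("order_id", StringType()),
    StructField("city", StringType()),
    StructField("amount", DoubleType()),
])

events = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker1:9092")
          .option("subscribe", "orders")
          .load()
          .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

per_city = events.groupBy("city").agg(F.sum("amount").alias("total_amount"))

query = (per_city.writeStream
         .outputMode("complete")   # full aggregate table on every trigger
         .format("console")
         .option("truncate", "false")
         .start())
query.awaitTermination()
```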
Posted 1 month ago
9.0 - 14.0 years
3 - 7 Lacs
Noida
Work from Office
We are looking for a skilled Data Engineer with 9 to 15 years of experience in the field. The ideal candidate will have expertise in designing and developing data pipelines using Confluent Kafka, ksqlDB, and Apache Flink. Roles and Responsibility Design and develop data pipelines for real-time and batch data ingestion and processing using Confluent Kafka, ksqlDB, and Apache Flink. Build and configure Kafka Connectors to ingest data from various sources, including databases, APIs, and message queues. Develop Flink applications for complex event processing, stream enrichment, and real-time analytics. Optimize ksqlDB queries for real-time data transformations, aggregations, and filtering. Implement data quality checks and monitoring to ensure data accuracy and reliability throughout the pipeline. Monitor and troubleshoot data pipeline performance, identifying bottlenecks and implementing optimizations. Job Requirements Bachelor's degree or higher from a reputed university. 8 to 10 years of total experience, with a majority related to ETL/ELT big data and Kafka. Proficiency in developing Flink applications for stream processing and real-time analytics. Strong understanding of data streaming concepts and architectures. Extensive experience with Confluent Kafka, including Kafka Brokers, Producers, Consumers, and Schema
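A minimal Python consumer sketch using the confluent-kafka client, illustrating ingestion with a simple in-line data-quality gate of the kind described above; the broker address, topic, and field names are hypothetical.

```python
# Minimal sketch: consume JSON payments from Kafka and reject records without a key field.
import json
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "broker1:9092",
    "group.id": "quality-check",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["payments"])

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None:
            continue
        if msg.error():
            print("consumer error:", msg.error())
            continue
        record = json.loads(msg.value())
        # Simple data-quality rule: drop records missing the business key.
        if not record.get("payment_id"):
            print("dropping record without payment_id:", record)
            continue
        print("accepted:", record["payment_id"])
finally:
    consumer.close()
```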
Posted 1 month ago
5.0 - 7.0 years
4 - 8 Lacs
Hyderabad
Work from Office
We are looking for a skilled Hadoop Administrator with 5 to 7 years of experience in Hadoop engineering, working on Python, Ansible, and DevOps methodologies. The ideal candidate will have extensive experience in CDP/HDP cluster and server builds, including control nodes, worker nodes, edge nodes, and data copy from cluster to cluster. Roles and Responsibility Design and implement scalable and efficient data processing systems using Hadoop technologies. Develop and maintain automation scripts using Python, Ansible, and other DevOps tools. Collaborate with cross-functional teams to identify and prioritize project requirements. Troubleshoot and resolve complex technical issues related to Hadoop clusters. Ensure high-quality standards for data processing and security. Participate in code reviews and contribute to the improvement of the overall codebase. Job Requirements Strong understanding of the Hadoop ecosystem, including HDFS, MapReduce, and YARN. Experience with the Linux operating system and scripting languages such as Bash or Python. Proficient in shell scripting and YAML configuration files. Good technical design, problem-solving, and debugging skills. Understanding of CI/CD concepts and familiarity with GitHub, Jenkins, and Ansible. Hands-on development of solutions using industry-leading cloud technologies. Working knowledge of GitOps and DevSecOps. Agile proficient and knowledgeable in other agile methodologies, ideally certified. Strong communication and networking skills. Ability to work autonomously and take accountability to execute and deliver on goals. Strong commitment to high-quality standards. Good communication skills and a sense of ownership to work as an individual contributor.
Posted 1 month ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Introduction A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio, including Software and Red Hat. Curiosity and a constant quest for knowledge serve as the foundation to success in IBM Consulting. In your role, you'll be encouraged to challenge the norm, investigate ideas outside of your role, and come up with creative solutions resulting in groundbreaking impact for a wide network of clients. Our culture of evolution and empathy centers on long-term career growth and development opportunities in an environment that embraces your unique skills and experience. Your Role And Responsibilities As an Associate Software Developer at IBM, you'll work with clients to co-create solutions to major real-world challenges by using best practice technologies, tools, techniques, and products to translate system requirements into the design and development of customized systems. Preferred Education Master's Degree Required Technical And Professional Expertise Core Java, Spring Boot, Java2/EE, Microservices - Hadoop Ecosystem (HBase, Hive, MapReduce, HDFS, Pig, Sqoop, etc.) - Spark. Good to have: Python. Preferred Technical And Professional Experience None
Posted 1 month ago
6.0 years
0 Lacs
India
On-site
Company Description DevoTrend IT is a global technology solutions provider leading the digitalization of private and public sectors. We deliver end-to-end digital transformation solutions and services, from ideation to deployment. Our offerings include IT & Software Consultancy Services, Resources Outsourcing Services, and Digital Transformation Consultancy, all aimed at driving innovative and productive experiences for our customers. With expertise in cloud, analytics, mobility, and various CRM/ERP platforms, we provide impactful and maintainable software solutions. Role Description This is a full-time hybrid role for a Snowflake Data Engineer; the locations are Pune, Mumbai, Chennai, and Bangalore. The Snowflake Data Engineer will be responsible for designing, implementing, and managing data warehousing solutions on the Snowflake platform. Day-to-day tasks will include data modeling, building and managing ETL processes, and performing data analytics. The role requires close collaboration with cross-functional teams to ensure data integrity and optimal performance of the data infrastructure. Qualifications Build ETL (extract, transform, and load) jobs using Fivetran and dbt for our internal projects and for customers that use various platforms like Azure, Salesforce, and AWS technologies • Monitoring active ETL jobs in production • Build out data lineage artifacts to ensure all current and future systems are properly documented • Assist with the build-out of design/mapping documentation to ensure development is clear and testable for QA and UAT purposes • Assess current and future data transformation needs to recommend, develop, and train new data integration tool technologies • Discover efficiencies with shared data processes and batch schedules to help ensure no redundancy and smooth operations • Assist the Data Quality Analyst to implement checks and balances across all jobs to ensure data quality throughout the entire environment for current and future batch jobs • Hands-on experience in developing and implementing large-scale data warehouses, Business Intelligence and MDM solutions, including Data Lakes/Data Vaults. SUPERVISORY RESPONSIBILITIES: • This job has no supervisory responsibilities. QUALIFICATIONS: • Bachelor's Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 6+ years' experience in business analytics, data science, software development, data modeling or data engineering work • 3-5 years' experience with strong proficiency in SQL query/development skills • Develop ETL routines that manipulate and transfer large volumes of data and perform quality checks • Hands-on experience with ETL tools (e.g., Informatica, Talend, dbt, Azure Data Factory) • Experience working in the healthcare industry with PHI/PII • Creative, lateral, and critical thinker • Excellent communicator • Well-developed interpersonal skills • Good at prioritizing tasks and time management • Ability to describe, create, and implement new solutions • Experience with related or complementary open source software platforms and languages (e.g., Java, Linux, Apache, Perl/Python/PHP, Chef) • Knowledge / hands-on experience with BI tools and reporting software (e.g., Cognos, Power BI, Tableau) • Big Data stack (e.g., Snowflake (Snowpark), Spark, MapReduce, Hadoop, Sqoop, Pig, HBase, Hive, Flume)
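A small post-load data-quality check of the kind listed above, run against Snowflake from Python with the official connector; the account, credentials, warehouse, and table names are placeholders, not details from the posting.

```python
# Minimal sketch: run a couple of row-count checks after a batch load and report pass/fail.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",
    user="etl_monitor",
    password="********",
    warehouse="ANALYTICS_WH",
    database="RAW",
    schema="SALES",
)

checks = {
    "orders_not_empty": ("SELECT COUNT(*) FROM ORDERS", lambda n: n > 0),
    "no_null_order_ids": ("SELECT COUNT(*) FROM ORDERS WHERE ORDER_ID IS NULL", lambda n: n == 0),
}

cur = conn.cursor()
try:
    for name, (sql, passes) in checks.items():
        count = cur.execute(sql).fetchone()[0]
        print(f"{name}: {'PASS' if passes(count) else 'FAIL'} ({count})")
finally:
    cur.close()
    conn.close()
```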
Posted 1 month ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
About Delhivery: Delhivery is India's leading fulfillment platform for digital commerce. With a vast logistics network spanning 18,000+ pin codes and over 2,500 cities, Delhivery provides a comprehensive suite of services including express parcel transportation, freight solutions, reverse logistics, cross-border commerce, warehousing, and cutting-edge technology services. Since 2011, we've fulfilled over 550 million transactions and empowered 10,000+ businesses, from startups to large enterprises. Vision: To become the operating system for commerce in India by combining world-class infrastructure, robust logistics operations, and technology excellence. About the Role: Senior Data Engineer. We're looking for a Senior Data Engineer who can design, optimize, and own our high-throughput data infrastructure. You'll work across batch and real-time pipelines, scale distributed processing on petabyte-scale data, and bring AI-assisted tooling into your workflow for debugging, testing, and documentation. This is a hands-on role where you'll work with a wide range of big data technologies (Spark, Kafka, Hive, Hudi/Iceberg, Databricks, EMR), data modeling best practices, and real-time systems to power analytics, data products, and machine learning. As a senior engineer, you'll review complex pipelines, manage SLAs, and mentor junior team members, while leveraging GenAI tools to scale your impact. What You'll Do: Build and optimize scalable batch and streaming data pipelines using Apache Spark, Kafka, Flink, Hive, and Airflow. Design and implement efficient data lake architectures with Hudi, Iceberg, or Delta for versioning, compaction, schema evolution, and time travel. Architect and maintain cloud-native data systems (AWS EMR, S3, Glue, Lambda, Athena), focusing on cost, performance, and availability. Model complex analytical and operational data workflows for warehouse and data lake environments. Own pipeline observability: define and monitor SLAs, alerts, and lineage across batch and real-time systems. Debug performance bottlenecks across Spark, Hive, Kafka, and S3, optimizing jobs with broadcast joins, file formats, resource configs, and partitioning strategies. Leverage AI tools (e.g., Cursor AI, Copilot, Gemini, Windsurf) for: code generation and refactoring of DAGs or Spark jobs; debugging logs, stack traces, and SQL errors; generating tests for data pipelines; documenting complex pipeline dependencies and architecture. Collaborate with product, analytics, data science, and platform teams to deliver end-to-end data products. Mentor junior engineers and establish AI-native development workflows, including prompt libraries and automation best practices.
What We're Looking For: Experience in building and maintaining large-scale data systems. Strong hands-on experience with Apache Spark, Kafka, Hive, and Airflow in production. Deep knowledge of the Hadoop ecosystem (HDFS, YARN, MapReduce tuning, NameNode HA). Expert in SQL (windowing, recursive queries, tuning) and experience with NoSQL stores (e.g., DynamoDB, HBase). Experience with Trino/Presto. Experience with cloud-native data platforms, especially AWS Glue, S3 lifecycle policies, EMR, and Athena. Working knowledge of file formats and internals like Parquet and Avro, and best practices for efficient storage. Familiarity with modern Lakehouse formats (Hudi, Iceberg, Delta Lake) and their compaction, versioning, and schema evolution. Hands-on experience managing Databricks or EMR. Solid grounding in data modeling, DWH design, and slowly changing dimensions (SCD). Strong programming in Python/Scala/Java, and ability to write clean, modular, testable code. Proficiency with CI/CD practices, Git, and Jenkins/GitHub Actions for data engineering workflows. Bonus: experience with distributed systems, consensus protocols, and real-time data guarantees. Passion for AI-native engineering: using and evolving prompt-based workflows for greater efficiency and quality.
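To make the SQL windowing requirement above concrete, here is a short Spark SQL sketch that keeps the latest event per key; the table path and column names are illustrative, not Delhivery's actual schema.

```python
# Minimal sketch: dedupe to the latest event per shipment with a window function.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("latest-shipment-status").getOrCreate()

spark.read.parquet("s3://example-lake/shipment_events/") \
     .createOrReplaceTempView("shipment_events")

latest = spark.sql("""
    SELECT *
    FROM (
        SELECT s.*,
               ROW_NUMBER() OVER (PARTITION BY shipment_id
                                  ORDER BY event_ts DESC) AS rn
        FROM shipment_events s
    )
    WHERE rn = 1
""")

latest.write.mode("overwrite").parquet("s3://example-lake/marts/latest_shipment_status/")
```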
Posted 1 month ago
2.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description At Infoblox, every breakthrough begins with a bold “what if.” What if your ideas could ignite global innovation? What if your curiosity could redefine the future? We invite you to step into the next exciting chapter of your career journey. Bring your creativity, drive, your daring spirit, and feel what it’s like to thrive on a team big enough to make an impact, yet small enough to make a difference. Our cloud-first networking and security solutions already protect 70% of the Fortune 500 , and we’re looking for creative thinkers ready to push that influence even further. Join us and discover how far your bold “what if” can take the world, your community, and your career. Here, how we empower our people is extraordinary: Glassdoor Best Places to Work 2025, Great Place to Work-Certified in five countries, and Cigna Healthy Workforce honors three years running — and what we build is world-class: recognized as CybersecAsia’s Best in Critical Infrastructure 2024 —evidence that when first-class technology meets empowered talent, remarkable careers take shape. So, what if the next big idea, and the next great career story, comes from you? Become the force that turns every “what if” into “what’s next”. In a world where you can be anything, Be Infoblox . Data Engineer We have an opportunity for a Data Engineer to join our Cloud Engineering team in Pune, India, reporting to the senior manager of Software Engineering. In this role, you will develop platforms and products for Infoblox’s SaaS product line delivering next level networking for our customers. You will work closely with data scientists and product teams to curate and refine data powering our latest cloud products. Come join our growing Cloud Engineering team and help us build world class solutions. 
Be a Contributor — What You’ll Do Curate large-scale data from a multitude of sources into appropriate sets for research and development for the data scientists, threat analysts, and developers across the company Design, test, and implement storage solutions for various consumers of the data, especially data warehouses like ClickHouse and OpenSearch Design and implement mechanisms to monitor data sources over time for changes using summarization, monitoring, and statistical methods Design, develop, and maintain APIs that enable seamless data integration and retrieval processes for internal and external applications, and ensure these APIs are scalable, secure, and efficient to support high-volume data interactions Leverage computer science algorithms and constructs, including probabilistic data structures, to distill large data into sources of insight and enable future analytics Convert prototypes into production data engineering solutions through disciplined software engineering practices, Spark optimizations, and modern deployment pipelines Collaborate on design, implementation, and deployment of applications with the rest of Software Engineering Support data scientists and Product teams in building, debugging, and deploying Spark applications that best leverage data Build and maintain tools for automation, deployment, monitoring, and operations Create test plans, test cases, and run tests with automated tools Be Prepared — What You Bring 2+ years of experience in software development with programming languages such as Golang, Python, C, C++, C#, or Java Expertise in Big Data, including MapReduce, Spark Streaming, Kafka, Pub-Sub, and In-memory Database Experience with NoSQL databases such as OpenSearch/Clickhouse Good exposure in application performance tuning, memory management, and scalability Ability to design highly scalable distributed systems using different open-source technologies Experience in microservices development and container-based software using Docker/Kubernetes and other container technologies is a plus Experience with AWS, GCP, or Azure is a plus Experience building high-performance algorithms Bachelor’s degree in computer science, computer engineering, or electrical engineering is required, master’s degree preferred Be Successful — Your Path First 90 Days: Immerse in our culture, connect with mentors, and map the systems and stakeholders that rely on your work. Six Months: Deliver a signature win: ship a feature, close a marquee deal, launch a campaign, or roll out a game-changing process. One Year: Own your domain, mentor the next newcomer, and steer our roadmap with data-driven ideas Belong— Your Community Our culture thrives on inclusion, rewarding the bold ideas, curiosity, and creativity that move us forward. In a community where every voice counts, continuous learning is the norm. So, whether you code, create, sell, or care for customers, you’ll grow and belong here. Be Rewarded — Benefits That Help You Grow, Thrive, Belong Comprehensive health coverage, generous PTO, and flexible work options Learning opportunities, career-mobility programs, and leadership workshops Sixteen paid volunteer hours each year, global employee resource groups, and a “No Jerks” policy that keeps collaboration healthy Modern offices with EV charging, healthy snacks (and the occasional cupcake), plus hackathons, game nights, and culture celebrations Charitable Giving Program supported by Company Match We practice pay transparency and reward performance. 
Offers reflect role location, internal equity, experience, skills, education, and certifications. Ready to Be the Difference? Infoblox is an Affirmative Action and Equal Opportunity Employer, and all qualified applicants will receive consideration for employment without regard to race, color, religion, gender, sexual orientation, national origin, genetic information, age, disability, veteran status, or any other legally protected basis
Posted 1 month ago
5.0 years
5 Lacs
Hyderābād
On-site
Job description Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions. We are currently seeking an experienced professional to join our team in the role of Consultant Specialist In this role, you will: Design and Develop ETL Processes: Lead the design and implementation of ETL processes using all kinds of batch/streaming tools to extract, transform, and load data from various sources into GCP. Collaborate with stakeholders to gather requirements and ensure that ETL solutions meet business needs. Data Pipeline Optimization: Optimize data pipelines for performance, scalability, and reliability, ensuring efficient data processing workflows. Monitor and troubleshoot ETL processes, proactively addressing issues and bottlenecks. Data Integration and Management: Integrate data from diverse sources, including databases, APIs, and flat files, ensuring data quality and consistency. Manage and maintain data storage solutions in GCP (e.g., BigQuery, Cloud Storage) to support analytics and reporting. GCP Dataflow Development: Write Apache Beam based Dataflow Job for data extraction, transformation, and analysis, ensuring optimal performance and accuracy. Collaborate with data analysts and data scientists to prepare data for analysis and reporting. Automation and Monitoring: Implement automation for ETL workflows using tools like Apache Airflow or Cloud Composer, enhancing efficiency and reducing manual intervention. Set up monitoring and alerting mechanisms to ensure the health of data pipelines and compliance with SLAs. Data Governance and Security: Apply best practices for data governance, ensuring compliance with industry regulations (e.g., GDPR, HIPAA) and internal policies. Collaborate with security teams to implement data protection measures and address vulnerabilities. Documentation and Knowledge Sharing: Document ETL processes, data models, and architecture to facilitate knowledge sharing and onboarding of new team members. Conduct training sessions and workshops to share expertise and promote best practices within the team. Requirements To be successful in this role, you should meet the following requirements: Experience: Minimum of 5 years of industry experience in data engineering or ETL development, with a strong focus on Data Stage and GCP. Proven experience in designing and managing ETL solutions, including data modeling, data warehousing, and SQL development. Technical Skills: Strong knowledge of GCP services (e.g., BigQuery, Dataflow, Cloud Storage, Pub/Sub) and their application in data engineering. Experience of cloud-based solutions, especially in GCP, cloud certified candidate is preferred. Experience and knowledge of Bigdata data processing in batch mode and streaming mode, proficient in Bigdata eco systems, e.g. Hadoop, HBase, Hive, MapReduce, Kafka, Flink, Spark, etc. Familiarity with Java & Python for data manipulation on Cloud/Bigdata platform. 
Analytical Skills: Strong problem-solving skills with a keen attention to detail. Ability to analyze complex data sets and derive meaningful insights. You’ll achieve more when you join HSBC. www.hsbc.com/careers HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by – HSBC Software Development India
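A skeletal Apache Beam pipeline in Python, as one hedged illustration of the Beam-based Dataflow development this role describes; the bucket paths, field names, and parsing logic are placeholders, and runner options (project, region, DataflowRunner) would be supplied via PipelineOptions.

```python
# Minimal sketch: read JSON lines, apply a data-quality gate, count events per customer.
import json
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def parse_and_validate(line: str):
    record = json.loads(line)
    if record.get("customer_id"):  # minimal data-quality gate
        yield record

options = PipelineOptions()  # e.g. --runner=DataflowRunner --project=... --region=...

with beam.Pipeline(options=options) as p:
    (p
     | "Read"       >> beam.io.ReadFromText("gs://example-bucket/raw/events-*.json")
     | "Parse"      >> beam.FlatMap(parse_and_validate)
     | "KeyByCust"  >> beam.Map(lambda r: (r["customer_id"], 1))
     | "CountPerCu" >> beam.CombinePerKey(sum)
     | "Format"     >> beam.Map(lambda kv: f"{kv[0]},{kv[1]}")
     | "Write"      >> beam.io.WriteToText("gs://example-bucket/out/customer_counts"))
```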
Posted 1 month ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions. We are currently seeking an experienced professional to join our team in the role of Consultant Specialist In this role, you will: Design and Develop ETL Processes: Lead the design and implementation of ETL processes using all kinds of batch/streaming tools to extract, transform, and load data from various sources into GCP. Collaborate with stakeholders to gather requirements and ensure that ETL solutions meet business needs. Data Pipeline Optimization: Optimize data pipelines for performance, scalability, and reliability, ensuring efficient data processing workflows. Monitor and troubleshoot ETL processes, proactively addressing issues and bottlenecks. Data Integration and Management: Integrate data from diverse sources, including databases, APIs, and flat files, ensuring data quality and consistency. Manage and maintain data storage solutions in GCP (e.g., BigQuery, Cloud Storage) to support analytics and reporting. GCP Dataflow Development: Write Apache Beam based Dataflow Job for data extraction, transformation, and analysis, ensuring optimal performance and accuracy. Collaborate with data analysts and data scientists to prepare data for analysis and reporting. Automation and Monitoring: Implement automation for ETL workflows using tools like Apache Airflow or Cloud Composer, enhancing efficiency and reducing manual intervention. Set up monitoring and alerting mechanisms to ensure the health of data pipelines and compliance with SLAs. Data Governance and Security: Apply best practices for data governance, ensuring compliance with industry regulations (e.g., GDPR, HIPAA) and internal policies. Collaborate with security teams to implement data protection measures and address vulnerabilities. Documentation and Knowledge Sharing: Document ETL processes, data models, and architecture to facilitate knowledge sharing and onboarding of new team members. Conduct training sessions and workshops to share expertise and promote best practices within the team. Requirements To be successful in this role, you should meet the following requirements: Experience: Minimum of 5 years of industry experience in data engineering or ETL development, with a strong focus on Data Stage and GCP. Proven experience in designing and managing ETL solutions, including data modeling, data warehousing, and SQL development. Technical Skills: Strong knowledge of GCP services (e.g., BigQuery, Dataflow, Cloud Storage, Pub/Sub) and their application in data engineering. Experience of cloud-based solutions, especially in GCP, cloud certified candidate is preferred. Experience and knowledge of Bigdata data processing in batch mode and streaming mode, proficient in Bigdata eco systems, e.g. Hadoop, HBase, Hive, MapReduce, Kafka, Flink, Spark, etc. Familiarity with Java & Python for data manipulation on Cloud/Bigdata platform. 
Analytical Skills: Strong problem-solving skills with a keen attention to detail. Ability to analyze complex data sets and derive meaningful insights. You’ll achieve more when you join HSBC. www.hsbc.com/careers HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by – HSBC Software Development India
Posted 1 month ago
4.0 - 9.0 years
9 - 13 Lacs
Kolkata, Mumbai, New Delhi
Work from Office
Krazy Mantra Group of Companies is looking for a Big Data Engineer to join our dynamic team and embark on a rewarding career journey. Designing and implementing scalable data storage solutions, such as Hadoop and NoSQL databases. Developing and maintaining big data processing pipelines using tools such as Apache Spark and Apache Storm. Writing and testing data processing scripts using languages such as Python and Scala. Integrating big data solutions with other IT systems and data sources. Collaborating with data scientists and business stakeholders to understand data requirements and identify opportunities for data-driven decision making. Ensuring the security and privacy of sensitive data. Monitoring performance and optimizing big data systems to ensure they meet performance and availability requirements. Staying up-to-date with emerging technologies and trends in big data and data engineering. Mentoring junior team members and providing technical guidance as needed. Documenting and communicating technical designs, solutions, and best practices. Strong problem-solving and debugging skills. Excellent written and verbal communication skills.
Posted 1 month ago
3.0 years
0 Lacs
Bengaluru
On-site
Voyager (94001), India, Bangalore, Karnataka Principal Associate - Data Engineer Do you love building and pioneering in the technology space? Do you enjoy solving complex business problems in a fast-paced, collaborative, inclusive, and iterative delivery environment? At Capital One, you'll be part of a big group of makers, breakers, doers and disruptors, who solve real problems and meet real customer needs. We are seeking Data Engineers who are passionate about marrying data with emerging technologies. As a Capital One Data Engineer, you’ll have the opportunity to be on the forefront of driving a major transformation within Capital One. What You’ll Do: Collaborate with and across Agile teams to design, develop, test, implement, and support technical solutions in full-stack development tools and technologies Work with a team of developers with deep experience in machine learning, distributed microservices, and full stack systems Utilize programming languages like Java, Scala, Python and Open Source RDBMS and NoSQL databases and Cloud based data warehousing services such as Redshift and Snowflake Share your passion for staying on top of tech trends, experimenting with and learning new technologies, participating in internal & external technology communities, and mentoring other members of the engineering community Collaborate with digital product managers, and deliver robust cloud-based solutions that drive powerful experiences to help millions of Americans achieve financial empowerment Perform unit tests and conduct reviews with other team members to make sure your code is rigorously designed, elegantly coded, and effectively tuned for performance Basic Qualifications: Bachelor’s Degree At least 3 years of experience in application development (Internship experience does not apply) At least 1 year of experience in big data technologies Preferred Qualifications: 5+ years of experience in application development including Python, SQL, Scala, or Java 2+ years of experience with a public cloud (AWS, Microsoft Azure, Google Cloud) 3+ years experience with Distributed data/computing tools (MapReduce, Hadoop, Hive, EMR, Kafka, Spark, Gurobi, or MySQL) 2+ years experience working on real-time data and streaming applications 2+ years of experience with NoSQL implementation (Mongo, Cassandra) 2+ years of data warehousing experience (Redshift or Snowflake) 3+ years of experience with UNIX/Linux including basic commands and shell scripting 2+ years of experience with Agile engineering practices At this time, Capital One will not sponsor a new applicant for employment authorization for this position. No agencies please. Capital One is an equal opportunity employer (EOE, including disability/vet) committed to non-discrimination in compliance with applicable federal, state, and local laws. Capital One promotes a drug-free workplace. Capital One will consider for employment qualified applicants with a criminal history in a manner consistent with the requirements of applicable laws regarding criminal background inquiries, including, to the extent applicable, Article 23-A of the New York Correction Law; San Francisco, California Police Code Article 49, Sections 4901-4920; New York City’s Fair Chance Act; Philadelphia’s Fair Criminal Records Screening Act; and other applicable federal, state, and local laws and regulations regarding criminal background inquiries. 
If you have visited our website in search of information on employment opportunities or to apply for a position, and you require an accommodation, please contact Capital One Recruiting at 1-800-304-9102 or via email at RecruitingAccommodation@capitalone.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodations. For technical support or questions about Capital One's recruiting process, please send an email to Careers@capitalone.com Capital One does not provide, endorse nor guarantee and is not liable for third-party products, services, educational tools or other information available through this site. Capital One Financial is made up of several different entities. Please note that any position posted in Canada is for Capital One Canada, any position posted in the United Kingdom is for Capital One Europe and any position posted in the Philippines is for Capital One Philippines Service Corp. (COPSSC).
Posted 1 month ago
5.0 - 10.0 years
7 - 14 Lacs
Pune
Work from Office
We are looking for a skilled Data Engineer with 5-10 years of experience to join our team in Pune. The ideal candidate will have a strong background in data engineering and excellent problem-solving skills. Roles and Responsibility Design, develop, and implement data pipelines and architectures. Collaborate with cross-functional teams to identify and prioritize project requirements. Develop and maintain large-scale data systems and databases. Ensure data quality, integrity, and security. Optimize data processing and analysis workflows. Participate in code reviews and contribute to improving overall code quality. Job Requirements Strong proficiency in programming languages such as Python or Java. Experience with big data technologies like Hadoop or Spark. Knowledge of database management systems like MySQL or NoSQL. Excellent problem-solving skills and attention to detail. Ability to work collaboratively in a team environment. Strong communication and interpersonal skills. Notice period: Immediate joiners preferred.
Posted 1 month ago
3.0 - 6.0 years
5 - 9 Lacs
Chennai
Work from Office
We are looking for a skilled Hadoop Developer with 3 to 6 years of experience to join our team at IDESLABS PRIVATE LIMITED. The ideal candidate will have expertise in developing and implementing big data solutions using Hadoop technologies. Roles and Responsibility Design, develop, and deploy scalable big data applications using Hadoop. Collaborate with cross-functional teams to identify business requirements and develop solutions. Develop and maintain large-scale data processing systems using Hadoop MapReduce. Troubleshoot and optimize performance issues in existing Hadoop applications. Participate in code reviews to ensure high-quality code standards. Stay updated with the latest trends and technologies in big data development. Job Requirements Strong understanding of Hadoop ecosystem including HDFS, YARN, and Oozie. Experience with programming languages such as Java or Python. Knowledge of database management systems such as MySQL or NoSQL. Familiarity with agile development methodologies and version control systems like Git. Excellent problem-solving skills and attention to detail. Ability to work collaboratively in a team environment and communicate effectively with stakeholders.
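A classic Hadoop Streaming word count in Python, as a hedged illustration of the MapReduce development mentioned above; production jobs of this kind are often written in Java, and the paths in the usage note are examples only.

```python
# Minimal sketch: mapper and reducer for Hadoop Streaming, selected by a "map"/"reduce" argument.
import sys

def run_mapper():
    # Emit (word, 1) pairs, one per line, tab-separated.
    for line in sys.stdin:
        for word in line.strip().split():
            print(f"{word}\t1")

def run_reducer():
    # Input arrives sorted by key from the shuffle phase, so equal keys are adjacent.
    current_word, count = None, 0
    for line in sys.stdin:
        word, value = line.rstrip("\n").split("\t", 1)
        if word == current_word:
            count += int(value)
        else:
            if current_word is not None:
                print(f"{current_word}\t{count}")
            current_word, count = word, int(value)
    if current_word is not None:
        print(f"{current_word}\t{count}")

if __name__ == "__main__":
    mode = sys.argv[1] if len(sys.argv) > 1 else "map"
    run_mapper() if mode == "map" else run_reducer()
```

It would typically be launched through the hadoop-streaming jar, roughly: hadoop jar hadoop-streaming.jar -input /data/in -output /data/out -mapper "python3 wordcount.py map" -reducer "python3 wordcount.py reduce" -file wordcount.py (all paths illustrative).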
Posted 1 month ago
3.0 - 5.0 years
9 - 13 Lacs
Bengaluru
Work from Office
At Allstate, great things happen when our people work together to protect families and their belongings from life's uncertainties. And for more than 90 years our innovative drive has kept us a step ahead of our customers' evolving needs. From advocating for seat belts, air bags, and graduated driving laws, to being an industry leader in pricing sophistication, telematics, and, more recently, device and identity protection. This role is responsible for executing multiple tracks of work to deliver Big Data solutions enabling advanced data science and analytics. This includes working with the team on new Big Data systems for analyzing data; the coding & development of advanced analytics solutions to make/optimize business decisions and processes; integrating new tools to improve descriptive, predictive, and prescriptive analytics. This role contributes to the structured and unstructured Big Data / Data Science tools of Allstate from traditional to emerging analytics technologies and methods. The role is responsible for assisting in the selection and development of other team members. Key Responsibilities Participate in the development of moderately complex and occasionally complex technical solutions using Big Data techniques in data & analytics processes Develops innovative solutions within the team Participates in the development of moderately complex and occasionally complex prototypes and department applications that integrate Big Data and advanced analytics to make business decisions Uses new areas of Big Data technologies (ingestion, processing, distribution) and research delivery methods that can solve business problems Understands the Big Data related problems and requirements to identify the correct technical approach Takes coaching from key team members to ensure efforts within owned tracks of work will meet their needs Executes moderately complex and occasionally complex functional work tracks for the team Partners with Allstate Technology teams on Big Data efforts Partners closely with team members on Big Data solutions for our data science community and analytic users Leverages and uses Big Data best practices / lessons learned to develop technical solutions Education 4-year Bachelor's Degree (Preferred) Experience 2 or more years of experience (Preferred) Supervisory Responsibilities This job does not have supervisory duties. Education & Experience (in lieu) In lieu of the above education requirements, an equivalent combination of education and experience may be considered. Primary Skills Big Data Engineering, Big Data Systems, Big Data Technologies, Data Science, Influencing Others Shift Time Recruiter Info Annapurna Jha (ajhat@allstate.com) About Allstate The Allstate Corporation is one of the largest publicly held insurance providers in the United States. Ranked No. 84 in the 2023 Fortune 500 list of the largest United States corporations by total revenue, The Allstate Corporation owns and operates 18 companies in the United States, Canada, Northern Ireland, and India. Allstate India Private Limited, also known as Allstate India, is a subsidiary of The Allstate Corporation. The India talent center was set up in 2012 and operates under the corporation's Good Hands promise. As it innovates operations and technology, Allstate India has evolved beyond its technology functions to be the critical strategic business services arm of the corporation.
With offices in Bengaluru and Pune, the company offers expertise to the parent organization's business areas including technology and innovation, accounting and imaging services, policy administration, transformation solution design and support services, transformation of property liability service design, global operations and integration, and training and transition. Learn more about Allstate India here.
Posted 1 month ago