
3330 Big Data Jobs - Page 44

Set up a Job Alert
JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

2.0 - 7.0 years

4 - 9 Lacs

Pune

Work from Office

With 80,000 customers across 150 countries, UKG is the largest U.S.-based private software company in the world. And we’re only getting started. Ready to bring your bold ideas and collaborative mindset to an organization that still has so much more to build and achieve? Read on. Here, we know that you’re more than your work. That’s why our benefits help you thrive personally and professionally, from wellness programs and tuition reimbursement to U Choose — a customizable expense reimbursement program that can be used for more than 200 needs that best suit you and your family, from student loan repayment, to childcare, to pet insurance. Our inclusive culture, active and engaged employee resource groups, and caring leaders value every voice and support you in doing the best work of your career. If you’re passionate about our purpose — people — then we can’t wait to support whatever gives you purpose. We’re united by purpose, inspired by you. We are looking for a talented and experienced Sr Software Engineer II to join our dynamic team. This role will provide you with the opportunity to work on cutting-edge SaaS technologies and impactful projects that are used by enterprises and users worldwide. As a Sr Software Engineer II, you will be involved in the design, development, testing, deployment, and maintenance of software solutions. You will work in a collaborative environment, contributing to the technical foundation behind our flagship products and services. We are seeking engineers with diverse specialties and skills to join our dynamic team to innovate and solve complex challenges.
Our team is looking for strong talent with expertise in the following areas: Front End UI Engineer (UI/UX design principles, responsive design, JavaScript frameworks); DevOps Engineer (CI/CD pipelines, IaC proficiency, containerization/orchestration, cloud platforms); Back End Engineer (API development, database management, security practices, message queuing); AI/ML Engineer (machine learning frameworks, data processing, algorithm development, Big Data technologies, domain knowledge).

Responsibilities:
- Software Development: Write clean, maintainable, and efficient code for various software applications and systems.
- Design and Architecture: Participate in design reviews with peers and stakeholders.
- Code Review: Review code developed by other developers, providing feedback that adheres to industry-standard best practices such as coding guidelines.
- Testing: Build testable software, define tests, participate in the testing process, and automate tests using tools (e.g., JUnit, Selenium) and design patterns, leveraging the test automation pyramid as the guide.
- Debugging and Troubleshooting: Triage defects or customer-reported issues, then debug and resolve them in a timely and efficient manner.
- Service Health and Quality: Contribute to the health and quality of services and incidents, promptly identifying and escalating issues. Collaborate with the team in utilizing service health indicators and telemetry for action. Assist in conducting root cause analysis and implementing measures to prevent future recurrences.
- DevOps Model: Understand working in a DevOps model. Begin to take ownership of working with product management on requirements to design, develop, test, deploy, and maintain the software in production.
- Documentation: Properly document new features, enhancements, or fixes to the product, and contribute to training materials.

Basic Qualifications:
- Bachelor’s degree in Computer Science, Engineering, or a related technical field, or equivalent practical experience.
- 2+ years of professional software development experience.
- Proficiency in one or more programming languages such as C, C++, C#, .NET, Python, Java, or JavaScript.
- Experience with software development practices and design patterns.
- Familiarity with version control systems like Git/GitHub and bug/work tracking systems like JIRA.
- Basic understanding of cloud technologies and DevOps principles.
- Strong analytical and problem-solving skills, with a proven track record of building and shipping successful software products and services.

Preferred Qualifications:
- Experience with cloud platforms like Azure, AWS, or GCP.
- Experience with test automation frameworks and tools.
- Knowledge of agile development methodologies.
- Commitment to continuous learning and professional development.
- Good communication and interpersonal skills, with the ability to work effectively in a collaborative team environment.

Where we’re going: UKG is on the cusp of something truly special. Worldwide, we already hold the #1 market share position for workforce management and the #2 position for human capital management. Tens of millions of frontline workers start and end their days with our software, with billions of shifts managed annually through UKG solutions today. Yet it’s our AI-powered product portfolio designed to support customers of all sizes, industries, and geographies that will propel us into an even brighter tomorrow!

Disability Accommodation: UKGCareers@ukg.com

Posted 1 month ago

Apply

4.0 - 9.0 years

6 - 11 Lacs

Noida

Work from Office

With 80,000 customers across 150 countries, UKG is the largest U.S.-based private software company in the world. And we’re only getting started. Ready to bring your bold ideas and collaborative mindset to an organization that still has so much more to build and achieve? Read on. Here, we know that you’re more than your work. That’s why our benefits help you thrive personally and professionally, from wellness programs and tuition reimbursement to U Choose, a customizable expense reimbursement program that can be used for more than 200 needs that best suit you and your family, from student loan repayment, to childcare, to pet insurance. Our inclusive culture, active and engaged employee resource groups, and caring leaders value every voice and support you in doing the best work of your career. If you’re passionate about our purpose, people, then we can’t wait to support whatever gives you purpose. We’re united by purpose, inspired by you. We are looking for a Senior Software Engineer to join our dynamic team. This role provides an opportunity to lead projects and contribute to high-impact software solutions that are used by enterprises and users worldwide. As a Senior Software Engineer, you will be responsible for the design, development, testing, deployment, and maintenance of complex software systems, as well as mentoring junior engineers. You will work in a collaborative environment, contributing to the technical foundation behind our flagship products and services. We are seeking engineers with diverse specialties and skills to join our dynamic team to innovate and solve complex challenges.
Our team is looking for strong talent with expertise in the following areas: Front End UI Engineer (UI/UX design principles, responsive design, JavaScript frameworks); DevOps Engineer (CI/CD pipelines, IaC proficiency, containerization/orchestration, cloud platforms); Back End Engineer (API development, database management, security practices, message queuing); AI/ML Engineer (machine learning frameworks, data processing, algorithm development, Big Data technologies, domain knowledge).

Responsibilities:
- Software Development: Write clean, maintainable, and efficient code for various software applications and systems.
- Technical Leadership: Contribute to the design, development, and deployment of complex software applications and systems, ensuring they meet high standards of quality and performance.
- Project Management: Manage execution and delivery of features and projects, negotiating project priorities and deadlines, and ensuring successful, timely completion with quality.
- Architectural Design: Participate in design reviews with peers and stakeholders and in the architectural design of new features and systems, ensuring scalability, reliability, and maintainability.
- Code Review: Diligently review code developed by other developers, provide feedback, and maintain a high bar of technical excellence, ensuring code adheres to industry-standard best practices such as coding guidelines; is elegant, efficient, and maintainable; has observability built in from the ground up; and includes unit tests.
- Testing: Build testable software, define tests, participate in the testing process, and automate tests using tools (e.g., JUnit, Selenium) and design patterns, leveraging the test automation pyramid as the guide.
- Service Health and Quality: Maintain the health and quality of services and incidents, proactively identifying and resolving issues. Utilize service health indicators and telemetry for action, providing recommendations to optimize performance.
- Conduct thorough root cause analysis and drive the implementation of measures to prevent future recurrences.
- DevOps Model: Understand working in a DevOps model, taking ownership from working with product management on requirements through designing, developing, testing, deploying, and maintaining the software in production.
- Documentation: Properly document new features, enhancements, or fixes to the product, contributing to training materials.

Minimum Qualifications:
- Bachelor’s degree in Computer Science, Engineering, or a related technical field, or equivalent practical experience.
- 4+ years of professional software development experience.
- Deep expertise in one or more programming languages such as C, C++, C#, .NET, Python, Java, or JavaScript.
- Extensive experience with software development practices and design patterns.
- Proficiency with version control systems like GitHub and bug/work tracking systems like JIRA.
- Understanding of cloud technologies and DevOps principles.

Preferred Qualifications:
- Experience with cloud platforms like Azure, AWS, or GCP.
- Familiarity with CI/CD pipelines and automation tools.
- Experience with test automation frameworks and tools.
- Knowledge of agile development methodologies.
- Familiarity with developing accessible technologies.
- Dedication to diversity and inclusion initiatives.
- Excellent communication and interpersonal skills, with the ability to work effectively in a collaborative team environment.

Posted 1 month ago

Apply

6.0 - 10.0 years

6 - 10 Lacs

Greater Noida

Work from Office

SQL Developer:
- Design and implement relational database structures optimized for performance and scalability.
- Develop and maintain complex SQL queries, stored procedures, triggers, and functions.
- Optimize database performance through indexing, query tuning, and regular maintenance.
- Ensure data integrity, consistency, and security across multiple environments.
- Collaborate with cross-functional teams to integrate SQL databases with applications and reporting tools.
- Develop and manage ETL (Extract, Transform, Load) processes for data ingestion and transformation.
- Monitor and troubleshoot database performance issues.
- Automate routine database tasks using scripts and tools.
- Document database architecture, processes, and procedures for future reference.
- Stay updated with the latest SQL best practices and database technologies.

Core skills:
- Data Retrieval: SQL developers must be able to query large and complex databases to extract relevant data for analysis or reporting.
- Data Transformation: They often clean, join, and reshape data using SQL to prepare it for downstream processes like analytics or machine learning.
- Performance Optimization: Writing queries that run efficiently is key, especially when dealing with big data or real-time systems.
- Understanding of Database Schemas: Knowing how tables relate and how to navigate normalized or denormalized structures is essential.

QE:
- Design, develop, and execute test plans and test cases for data pipelines, ETL processes, and data platforms.
- Validate data quality, integrity, and consistency across various data sources and destinations.
- Automate data validation and testing using tools such as PyTest, Great Expectations, or custom Python/SQL scripts.
- Collaborate with data engineers, analysts, and product managers to understand data requirements and ensure test coverage.
- Monitor data pipelines and proactively identify data quality issues or anomalies.
- Contribute to the development of data quality frameworks and best practices.
- Participate in code reviews and provide feedback on data quality and testability.

Requirements:
- Strong SQL skills and experience with large-scale data sets.
- Proficiency in Python or another scripting language for test automation.
- Experience with data testing tools.
- Familiarity with cloud platforms and data warehousing solutions.
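The QE duties above revolve around automated data validation; as a rough illustration of the "custom Python scripts" option, a couple of standalone checks might look like this (the sample rows and column names are invented for the example, not from any real pipeline):

```python
# Minimal data-quality checks of the kind a QE role might automate.
# Sample rows and rules are illustrative only.

def check_not_null(rows, column):
    """Return rows where `column` is missing or None."""
    return [r for r in rows if r.get(column) is None]

def check_unique(rows, column):
    """Return values of `column` that appear more than once."""
    seen, dupes = set(), set()
    for r in rows:
        value = r.get(column)
        if value in seen:
            dupes.add(value)
        seen.add(value)
    return sorted(dupes)

rows = [
    {"id": 1, "amount": 100.0},
    {"id": 2, "amount": None},
    {"id": 2, "amount": 50.0},
]

null_violations = check_not_null(rows, "amount")  # one row has a null amount
duplicate_ids = check_unique(rows, "id")          # id 2 appears twice
```

In practice a framework like PyTest or Great Expectations would wrap checks like these in suites and reports, but the underlying assertions are this simple.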

Posted 1 month ago

Apply

5.0 - 10.0 years

7 - 17 Lacs

Pune

Hybrid

Senior Data Engineer: At Acxiom, our vision is to transform data into value for everyone. Our data products and analytical services enable marketers to recognize, better understand, and then deliver highly applicable messages to consumers across any available channel. Our solutions enable true people-based marketing with identity resolution and rich descriptive and predictive audience segmentation. We are seeking an experienced Data Engineer with a versatile skill set to undertake data engineering efforts to build the next-generation ML infrastructure for Acxiom’s business. As part of the Data Science and Analytics Team, the Sr. Data Engineer will partner with Data Scientists, work hands-on with Big Data technologies, and build a scalable infrastructure to support development of machine-learning-based Audience Propensity models and solutions for our domestic and global businesses. The Sr. Data Engineer’s responsibilities include collaborating with internal and external stakeholders to identify data ingestion, processing, ETL, and data warehousing requirements, and developing appropriate solutions using modern data engineering tools in the cloud. We want this person to help us build a scalable data lake and EDW using a modern tech stack from the ground up. Success in this role comes from combining a strong data engineering background with product and business acumen to deliver scalable data pipeline and database solutions that can enable and support a high-performing, large-scale modeling infrastructure at Acxiom. The Sr. Data Engineer will be a champion of the latest cloud database technologies and data engineering tools and will lead by example in influencing adoption and migration to the new stack.
What you will do:
- Partner with ML Architects and data scientists to drive POCs to build a scalable, next-generation model development, model management, and governance infrastructure in the cloud.
- Be a thought leader and champion for adoption of new cloud-based database technologies and enable migration to the new cloud-based modeling stack.
- Collaborate with other data scientists and team leads to define project requirements and build the next-generation data source ingestion, ETL, data pipelining, and data warehousing solutions in the cloud.
- Build data engineering solutions by developing a strong understanding of business and product data needs.
- Manage environment security permissions and enforce role-based compliance.
- Build expert knowledge of the various data sources brought together for audience propensity solutions (survey/panel data; 3rd-party demographics, psychographics, and lifestyle segments; media content activity across TV, digital, and mobile; and product purchase or transaction data) and develop solutions for seamless ingestion and processing of the data.
- Resolve defects/bugs during QA testing, pre-production, production, and post-release patches.
- Contribute to the design and architecture of services across the data landscape.
- Participate in development of the integration team, contributing to reviews of methodologies, standards, and processes.
- Contribute to comprehensive internal documentation of designs and service components.

Required Skills:
- Background in data pipelining, warehousing, and ETL development solutions for data science and other Big Data applications.
- Experience with distributed, columnar, and/or analytics-oriented databases or distributed data processing frameworks.
- Minimum of 4 years of experience with cloud databases: Snowflake, Azure SQL Database, AWS Redshift, Google Cloud SQL, or similar.
- Experience with NoSQL databases such as MongoDB or Cassandra is nice to have; Snowflake and/or Databricks certification preferred.
- Minimum of 3 years of experience in developing data ingestion, data processing, and analytical pipelines for big data, relational databases, data lake, and data warehouse solutions.
- Minimum of 3 years of hands-on experience in Big Data technologies such as Hadoop, Spark, PySpark, Spark SQL, Hive, Pig, and Oozie, and streaming technologies such as Kafka, Spark Streaming, ingestion APIs, Unix shell/Perl scripting, etc.
- Strong programming skills using Java, Python, PySpark, Scala, or similar.
- Experience with public cloud architectures, pros/cons, and migration considerations.
- Experience with container-based application deployment frameworks (Kubernetes, Docker, ECS/EKS, or similar).
- Experience with data visualization tools such as Tableau, Looker, or similar.
- Outstanding troubleshooting skills, attention to detail, and communication skills (verbal/written) in a fast-paced setting.
- Bachelor's degree in Computer Science or a relevant discipline, or 7+ years of relevant work experience.
- Solid communication skills: demonstrated ability to explain complex technical issues to technical and non-technical audiences.
- Strong understanding of the software design/architecture process.
- Experience with unit testing and data quality checks.
- Experience building Infrastructure-as-Code for public cloud using Terraform.
- Experience in a DevOps engineering or equivalent role.
- Experience developing, enhancing, and maintaining CI/CD automation and configuration management using tools such as Jenkins, Snyk, and GitHub.

Preferred Skills (what will set you apart):
- Ability to work in white space and develop solutions independently.
- Experience building ETL pipelines with health claims data is a plus.
- Prior experience with cloud-based ETL tools such as AWS Glue, AWS Data Pipeline, or similar.
- Experience with building real-time and streaming data pipelines is a plus.
- Experience with MLOps tools such as Apache MLflow or Kubeflow is a plus.
- Exposure to end-to-end ML platforms such as AWS SageMaker, Azure ML Studio, Google AI/ML, DataRobot, Databricks, or similar is a plus.
- Experience with ingestion, processing, and management of 3rd-party data.
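The ingestion and ETL work described above follows the classic extract-transform-load shape; a toy sketch in plain Python (the record format and field names are invented for illustration, no real Acxiom pipeline is implied):

```python
# Minimal ETL sketch: parse raw records, clean/cast them, load into a
# warehouse-like structure. Field names and data are hypothetical.

def extract(raw_records):
    """Parse raw CSV-like strings into dicts (illustrative source format)."""
    fields = ("user_id", "segment", "spend")
    return [dict(zip(fields, line.split(","))) for line in raw_records]

def transform(records):
    """Normalize segment names, cast spend to float, drop rows missing spend."""
    cleaned = []
    for r in records:
        if r["spend"]:  # skip records with an empty spend value
            cleaned.append({
                "user_id": r["user_id"],
                "segment": r["segment"].strip().lower(),
                "spend": float(r["spend"]),
            })
    return cleaned

def load(records, warehouse):
    """Append records to an in-memory 'warehouse' table keyed by segment."""
    for r in records:
        warehouse.setdefault(r["segment"], []).append(r)
    return warehouse

raw = ["u1,Premium,120.5", "u2,Basic,", "u3,premium,80.0"]
warehouse = load(transform(extract(raw)), {})
```

A production version would swap the list of strings for Kafka topics or cloud storage and the dict for an actual warehouse table, but the three-stage decomposition stays the same.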

Posted 1 month ago

Apply

5.0 - 8.0 years

7 - 10 Lacs

Mumbai, Hyderabad, Chennai

Work from Office

Your Role:
- Should have extensively worked on Metadata, Rules & Member Lists in HFM.
- VB scripting knowledge is mandatory.
- Understand and communicate the consequences of changes made.
- Should have worked on Monthly/Quarterly/Yearly Validations.
- Should have worked on ICP accounts, Journals, and Intercompany Reports.
- Should have worked on Data Forms & Data Grids.
- Should be able to work on FDMEE Mappings and be fluent with FDMEE knowledge.
- Should have worked on Financial Reporting Studio.

Your Profile:
- Performing UAT with the business on CRs.
- Should be able to resolve business queries about HFM (if any).
- Agile process knowledge will be an added advantage.

What you'll love about working here: You can shape your career with us. We offer a range of career paths and internal opportunities within the Capgemini group. You will also get personalized career guidance from our leaders. You will get comprehensive wellness benefits including health checks, telemedicine, insurance with top-ups, elder care, partner coverage, or new-parent support via flexible work. You will have the opportunity to learn on one of the industry's largest digital learning platforms, with access to 250,000+ courses and numerous certifications. We're committed to ensuring that people of all backgrounds feel encouraged and have a sense of belonging at Capgemini. You are valued for who you are, and you can bring your original self to work. Every Monday, kick off the week with a musical performance by our in-house band - The Rubber Band. Also get to participate in internal sports events, yoga challenges, or marathons. At Capgemini, you can work on cutting-edge projects in tech and engineering with industry leaders or create solutions to overcome societal and environmental challenges.

About Capgemini: Capgemini is a global business and technology transformation partner, helping organizations accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over-55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, generative AI, cloud, and data, combined with its deep industry expertise and partner ecosystem.

Location: Hyderabad, Chennai, Mumbai, Bengaluru

Posted 1 month ago

Apply

7.0 - 10.0 years

9 - 12 Lacs

Bengaluru

Work from Office

Requirement: Immediate joiner or a maximum 15 days' notice period.

Job Description: Big Data Developer (Hadoop/Spark/Kafka)
- This role is ideal for an experienced Big Data developer who is confident in taking complete ownership of the software development life cycle - from requirement gathering to final deployment.
- The candidate will be responsible for engaging with stakeholders to understand the use cases, translating them into functional and technical specifications (FSD & TSD), and implementing scalable, efficient big data solutions.
- A key part of this role involves working across multiple projects, coordinating with QA/support engineers for test case preparation, and ensuring deliverables meet high-quality standards.
- Strong analytical skills are necessary for writing and validating SQL queries, along with developing optimized code for data processing workflows.
- The ideal candidate should also be capable of writing unit tests and maintaining documentation to ensure code quality and maintainability.
- The role requires hands-on experience with the Hadoop ecosystem, particularly Spark (including Spark Streaming), Hive, Kafka, and shell scripting.
- Experience with workflow schedulers like Airflow is a plus, and working knowledge of cloud platforms (AWS, Azure, GCP) is beneficial.
- Familiarity with Agile methodologies will help in collaborating effectively in a fast-paced team environment.
- Job scheduling and automation via shell scripts, and the ability to optimize performance and resource usage in a distributed system, are critical.
- Prior experience in performance tuning and writing production-grade code will be valued.
- The candidate must demonstrate strong communication skills to effectively coordinate with business users, developers, and testers, and to manage dependencies across teams.

Key Skills Required:
- Must Have: Hadoop, Spark (core & streaming), Hive, Kafka, Shell Scripting, SQL, TSD/FSD documentation.
- Good to Have: Airflow, Scala, Cloud (AWS/Azure/GCP), Agile methodology.

This role is both technically challenging and rewarding, offering the opportunity to work on large-scale, real-time data processing systems in a dynamic, agile environment.
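Spark Streaming work of the kind listed above usually boils down to aggregating events over time windows; the core idea can be shown in dependency-free Python (illustrative events only, this is the concept rather than Spark API code):

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds):
    """Count events per (window, key), mimicking what a Spark Streaming
    groupBy(window(...), key).count() produces for tumbling windows.
    `events` is an iterable of (timestamp_seconds, key) pairs."""
    counts = defaultdict(int)
    for ts, key in events:
        # Snap each timestamp down to the start of its fixed-size window.
        window_start = (ts // window_seconds) * window_seconds
        counts[(window_start, key)] += 1
    return dict(counts)

# Hypothetical clickstream: (timestamp, event_type)
events = [(0, "click"), (3, "click"), (7, "view"), (12, "click")]
result = tumbling_window_counts(events, 10)
# Window [0, 10) holds two clicks and one view; window [10, 20) holds one click.
```

In Spark the same grouping runs incrementally over micro-batches with watermarks for late data, but the windowing arithmetic is exactly this.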

Posted 1 month ago

Apply

4.0 - 8.0 years

12 - 22 Lacs

Pune

Work from Office

Key Responsibilities:

Oversight & Optimisation of Data Lakehouse & Architecture, Data Engineering & Pipelines:
- Understand lakehouse architectures that unify structured and semi-structured data at scale.
- Strong experience implementing and monitoring job scheduling and orchestration using Airflow, Azure Data Factory, and CI/CD triggers, and with Azure Dataflows, Databricks, and Delta Lake for real-time/batch processing.
- Managing schema evolution, data versioning (e.g., Delta Lake), and pipeline adaptability.
- Pipeline performance tuning for latency, resource usage, and throughput optimization.

Cloud Infrastructure & Automation:
- Infra automation using Terraform, Azure Bicep, and AWS CDK.
- Setting up scalable cloud storage (Data Lake Gen2, S3, Blob, RDS, etc.).
- Administering RBAC, secure key vault access, and compliance-driven access controls.
- Tuning infrastructure and services for cost efficiency and compute optimization.

Full-Stack Cloud Data Platform Design:
- Designing end-to-end Azure/AWS data platforms including ingestion, transformation, storage, and serving layers.
- Interfacing with BI/AI teams to ensure data readiness, semantic modeling, and ML enablement.
- Familiarity with metadata management, lineage tracking, and data catalog integration.

Enterprise Readiness & Delivery:
- Experience working with MNCs and large enterprises with strict processes, approvals, and data governance.
- Capable of evaluating alternative tools/services across clouds for architecture flexibility and cost-performance balance.
- Hands-on with CI/CD, monitoring, and security best practices in regulated environments (BFSI, Pharma, Manufacturing).
- Lead cost-performance optimization across Azure and hybrid cloud environments.
- Design modular, scalable infrastructure using Terraform / CDK / Bicep with a DevSecOps mindset.
- Explore alternative cloud tools/services across compute, storage, identity, and monitoring to propose optimal solutions.
- Drive RBAC, approval workflows, and governance controls in line with typical enterprise/MNC deployment security protocols.
- Support BI/data teams with infra tuning, pipeline stability, and client demo readiness.
- Collaborate with client-side architects, procurement, and finance teams for approvals and architectural alignment.

Ideal Profile:
- 4-7 years of experience in cloud infrastructure and platform engineering.
- Strong hold on Microsoft Azure, with hands-on exposure to AWS / GCP / Snowflake acceptable.
- Skilled in IaC tools (Terraform, CDK), CI/CD, monitoring (Grafana, Prometheus), and cost optimization tools.
- Comfortable proposing innovative, multi-vendor architectures that balance cost, performance, and compliance.
- Prior experience working with large global clients or regulated environments (e.g., BFSI, Pharma, Manufacturing).

Preferred Certifications:
- Microsoft Azure Administrator / Architect (Associate/Expert)
- AWS Solutions Architect / FinOps Certified
- Bonus: Snowflake, DevOps Professional, or Data Platform certifications

Posted 1 month ago

Apply

10.0 - 12.0 years

12 - 14 Lacs

Hyderabad

Work from Office

Proven expert at writing SQL code with at least 10 years of experience. Must have 5+ years of experience working with large data, with transactions on the order of 5-10M records. 5+ years of experience modeling loosely coupled relational databases that can store tera- or petabytes of data. 3+ years of proven expertise working with large data warehouses. Expert at ETL transformations using SSIS.
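Working with transactions on the order of 5-10M records generally means streaming them through in batches rather than materializing everything at once; a generic sketch of the idea in Python (the role itself centers on SQL/SSIS, so this is conceptual only, with a synthetic data source):

```python
def chunked(iterable, size):
    """Yield lists of up to `size` items from any iterable, so millions of
    rows can be processed without holding them all in memory."""
    batch = []
    for item in iterable:
        batch.append(item)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:  # emit the final, possibly short, batch
        yield batch

# Synthetic stand-in for a cursor over millions of transaction rows.
total = 0
for batch in chunked(range(10_000), 4096):
    total += sum(batch)  # aggregate per batch instead of loading all rows
```

The same pattern shows up as cursor fetch sizes in database drivers and as buffer sizes in SSIS data flows.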

Posted 1 month ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Mumbai, New Delhi, Bengaluru

Work from Office

We are seeking a skilled Big Data Developer with 3+ years of experience to develop, maintain, and optimize large-scale data pipelines using frameworks like Spark, PySpark, and Airflow. The role involves working with SQL, Impala, Hive, and PL/SQL for advanced data transformations and analytics, designing scalable data storage systems, and integrating structured and unstructured data using tools like Sqoop. The ideal candidate will collaborate with cross-functional teams to implement data warehousing strategies and leverage BI tools for insights. Proficiency in Python programming, workflow orchestration with Airflow, and Unix/Linux environments is essential. Locations: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote

Posted 1 month ago

Apply

1.0 - 2.0 years

3 - 6 Lacs

Dhule

Work from Office

Required skills:
- Google Cloud Platform: GCS, DataProc, BigQuery, Dataflow
- Programming languages: Java; scripting languages like Python, Shell Script, SQL
- 5+ years of experience in IT application delivery, with proven experience in agile development methodologies
- 1 to 2 years of experience in Google Cloud Platform (GCS, DataProc, BigQuery, Composer, and data processing with Dataflow)

Posted 1 month ago

Apply

4.0 - 5.0 years

6 - 7 Lacs

Pune

Work from Office

Key Responsibilities:
- Conduct feature engineering, data analysis, and data exploration to extract valuable insights.
- Develop and optimize Machine Learning models to achieve high accuracy and performance.
- Design and implement Deep Learning models, including Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN), and Reinforcement Learning techniques.
- Handle real-time imbalanced datasets and apply appropriate techniques to improve model fairness and robustness.
- Deploy models in production environments and ensure continuous monitoring, improvement, and updates based on feedback.
- Collaborate with cross-functional teams to align ML solutions with business goals.
- Utilize fundamental statistical knowledge and mathematical principles to ensure the reliability of models.
- Bring in the latest advancements in ML and AI to drive innovation.

Requirements:
- 4-5 years of hands-on experience in Machine Learning and Deep Learning.
- Strong expertise in feature engineering, data exploration, and data preprocessing.
- Experience with imbalanced datasets and techniques to improve model generalization.
- Proficiency in Python, TensorFlow, Scikit-learn, and other ML frameworks.
- Strong mathematical and statistical knowledge with problem-solving skills.
- Ability to optimize models for high accuracy and performance in real-world scenarios.

Preferred Qualifications:
- Experience with Big Data technologies (Hadoop, Spark, etc.).
- Familiarity with containerization and orchestration tools (Docker, Kubernetes).
- Experience in automating ML pipelines with MLOps practices.
- Experience in model deployment using cloud platforms (AWS, GCP, Azure) or MLOps tools.
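A common first step for the imbalanced datasets mentioned above is inverse-frequency class weighting, the same heuristic scikit-learn applies for class_weight='balanced'; a minimal standalone sketch with synthetic labels:

```python
from collections import Counter

def balanced_class_weights(labels):
    """Inverse-frequency weights: n_samples / (n_classes * count_c).
    This matches the 'balanced' heuristic used by scikit-learn, so rare
    classes contribute proportionally more to the training loss."""
    counts = Counter(labels)
    n_samples, n_classes = len(labels), len(counts)
    return {c: n_samples / (n_classes * cnt) for c, cnt in counts.items()}

labels = [0] * 90 + [1] * 10  # synthetic 9:1 imbalance
weights = balanced_class_weights(labels)
# Majority class weight is 100/(2*90), about 0.56; minority is 100/(2*10) = 5.0.
```

These weights are then passed to the loss function (e.g., `class_weight` in scikit-learn or Keras), as an alternative or complement to resampling techniques like SMOTE.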

Posted 1 month ago

Apply

4.0 - 5.0 years

6 - 7 Lacs

Chennai

Work from Office

Responsibilities:
- Conduct feature engineering, data analysis, and data exploration to extract valuable insights.
- Develop and optimize Machine Learning models to achieve high accuracy and performance.
- Design and implement Deep Learning models, including Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN), and Reinforcement Learning techniques.
- Handle real-time imbalanced datasets and apply appropriate techniques to improve model fairness and robustness.
- Deploy models in production environments and ensure continuous monitoring, improvement, and updates based on feedback.
- Collaborate with cross-functional teams to align ML solutions with business goals.
- Utilize fundamental statistical knowledge and mathematical principles to ensure the reliability of models.
- Bring in the latest advancements in ML and AI to drive innovation.

Required Skills:
- 4-5 years of hands-on experience in Machine Learning and Deep Learning.
- Strong expertise in feature engineering, data exploration, and data preprocessing.
- Experience with imbalanced datasets and techniques to improve model generalization.
- Proficiency in Python, TensorFlow, Scikit-learn, and other ML frameworks.
- Strong mathematical and statistical knowledge with problem-solving skills.
- Ability to optimize models for high accuracy and performance in real-world scenarios.

Preferred Skills:
- Experience with Big Data technologies (Hadoop, Spark, etc.).
- Familiarity with containerization and orchestration tools (Docker, Kubernetes).
- Experience in automating ML pipelines with MLOps practices.
- Experience in model deployment using cloud platforms (AWS, GCP, Azure) or MLOps tools.

Posted 1 month ago

Apply

10.0 - 15.0 years

40 - 60 Lacs

Gurugram

Work from Office

About the Role: We are seeking an experienced Data Engineering Architect to join our team. In this role, you will be responsible for designing comprehensive data architectures and leading the design and development of scalable, modern data platforms that power business-critical data products and insights. You will work closely with customers and technology partners to deliver data solutions that address complex telecommunications business requirements, including customer experience management, operational analytics, digital transformation initiatives, and AI/ML enablement. The ideal candidate will bring deep technical expertise across cloud data engineering, data lakehouse architecture, and modular, reusable data components, and will have strong hands-on experience with Azure, Databricks, Delta Lake, and modern data engineering tools and frameworks.

Responsibilities:
- Design and implement robust, scalable, and cost-effective enterprise data architectures using the Azure Databricks Lakehouse platform.
- Architect modern Delta Lakehouse platforms to support structured and semi-structured data ingestion, processing, and analytics.
- Implement robust data integration and orchestration pipelines using platforms such as Kafka, ADF, Airflow, and Event Hubs.
- Build Customer 360 platforms (CDP, CIH, etc.).
- Create data architectures that support business-specific use cases, including customer journey analytics, CLTV, churn scoring, and market segmentation, and enable reverse ETL.
- Collaborate with domain owners (CRM, billing, usage) to define data contracts and model domain-specific datasets.
- Lead the definition of data modeling standards (dimensional, normalized, data vault) and best practices.
- Establish and enforce data governance, security, and privacy controls aligned with regulatory compliance requirements such as GDPR and PII protection.
- Collaborate with data engineers, product teams, business stakeholders, and clients to translate business needs into scalable data solutions.
- Evaluate and recommend appropriate big data technologies, cloud platforms, and processing frameworks based on business-specific requirements and regulatory compliance needs.
- Stay current with the latest advancements in data technologies, including cloud services, data processing frameworks, and AI/ML capabilities.
- Contribute to the development of best practices, reference architectures, and reusable solution components to accelerate proposal development.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, or a related technical field.
- 10+ years of experience in data engineering or BI architecture, with at least 5 years in an architectural or solution architecture role.
- Strong experience with:
  - Azure (Data Lake, Data Factory, Synapse, Key Vault, Azure AI services)
  - Databricks, Delta Lake, Delta Live Tables
  - PySpark, Scala, SQL, Python, Kafka
  - Unity Catalog, Alation
- Familiarity with modern data frameworks and tools: Apache Kafka, Airflow, Flink, NiFi, dbt, Iceberg.
- Deep understanding of data lakehouse concepts, data modeling, pipeline orchestration, and performance optimization.
- Proven ability to design reusable, domain-driven data products in a large-scale enterprise environment.
- Experience in data governance, metadata management, data cataloging, data quality, lineage, and compliance.
- Exposure to data mesh, domain ownership models, and data product thinking.
- Understanding of DevOps and CI/CD in data environments.
- Knowledge of system monitoring and observability tools such as Prometheus and Grafana.
- Experience designing and implementing data lakes, data warehouses, and machine learning pipelines for business use cases.
- Excellent communication and presentation skills, with the ability to explain complex technical concepts to business stakeholders.
- TMForum or telecommunications industry certifications are good to have.
- Relevant data platform certifications such as Databricks or Azure Data Engineer are a plus.
- Willingness to travel as required.
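The role above calls for defining data contracts with domain owners (CRM, billing, usage). As a minimal illustration of what a contract check looks like, here is a pure-Python sketch; the field names and types are hypothetical, and a real platform would more likely enforce contracts with schema tooling (e.g. Delta Lake constraints or a data-quality framework) rather than hand-rolled code:

```python
# Minimal data-contract check: each record arriving from a domain
# (e.g. billing) must carry the agreed fields with the agreed types.
# Field names and types below are illustrative, not from a real contract.

BILLING_CONTRACT = {
    "customer_id": str,
    "invoice_amount": float,
    "billing_cycle": str,
}

def validate_record(record: dict, contract: dict) -> list:
    """Return a list of contract violations for one record (empty = valid)."""
    errors = []
    for field, expected_type in contract.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], expected_type):
            errors.append(
                f"bad type for {field}: {type(record[field]).__name__}"
            )
    return errors

good = {"customer_id": "C-101", "invoice_amount": 49.99, "billing_cycle": "2024-06"}
bad = {"customer_id": "C-102", "invoice_amount": "49.99"}

print(validate_record(good, BILLING_CONTRACT))   # []
print(validate_record(bad, BILLING_CONTRACT))    # two violations
```

Records failing the contract would typically be quarantined rather than loaded, keeping downstream domain datasets trustworthy.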

Posted 1 month ago

Apply

4.0 - 5.0 years

6 - 8 Lacs

Patna

Work from Office

Responsibilities:
- Conduct feature engineering, data analysis, and data exploration to extract valuable insights.
- Develop and optimize Machine Learning models to achieve high accuracy and performance.
- Design and implement Deep Learning models, including Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN), and Reinforcement Learning techniques.
- Handle real-time imbalanced datasets and apply appropriate techniques to improve model fairness and robustness.
- Deploy models in production environments and ensure continuous monitoring, improvement, and updates based on feedback.
- Collaborate with cross-functional teams to align ML solutions with business goals.
- Utilize fundamental statistical knowledge and mathematical principles to ensure the reliability of models.
- Bring in the latest advancements in ML and AI to drive innovation.

Required Skills:
- 4-5 years of hands-on experience in Machine Learning and Deep Learning.
- Strong expertise in feature engineering, data exploration, and data preprocessing.
- Experience with imbalanced datasets and techniques to improve model generalization.
- Proficiency in Python, TensorFlow, Scikit-learn, and other ML frameworks.
- Strong mathematical and statistical knowledge with problem-solving skills.
- Ability to optimize models for high accuracy and performance in real-world scenarios.

Preferred Skills:
- Experience with Big Data technologies (Hadoop, Spark, etc.).
- Familiarity with containerization and orchestration tools (Docker, Kubernetes).
- Experience in automating ML pipelines with MLOps practices.
- Experience in model deployment using cloud platforms (AWS, GCP, Azure) or MLOps tools.
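The listing asks for techniques to handle imbalanced datasets. One common approach is class weighting; the sketch below computes "balanced" weights using the same formula scikit-learn applies for `class_weight="balanced"` (n_samples / (n_classes * class_count)), written in pure Python purely for illustration:

```python
from collections import Counter

def balanced_class_weights(labels):
    """Weight each class by n_samples / (n_classes * class_count),
    so rare classes contribute proportionally more to the training loss."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {cls: n / (k * c) for cls, c in counts.items()}

# 90 negatives vs 10 positives: the minority class gets a 9x larger weight.
labels = [0] * 90 + [1] * 10
weights = balanced_class_weights(labels)
print(weights)  # {0: 0.5555..., 1: 5.0}
```

These weights are then passed to the estimator or loss function (e.g. `class_weight` in scikit-learn, `class_weight`/`sample_weight` in Keras) so the model is not dominated by the majority class.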

Posted 1 month ago

Apply

4.0 - 5.0 years

6 - 8 Lacs

Surat

Work from Office

Responsibilities:
- Conduct feature engineering, data analysis, and data exploration to extract valuable insights.
- Develop and optimize Machine Learning models to achieve high accuracy and performance.
- Design and implement Deep Learning models, including Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN), and Reinforcement Learning techniques.
- Handle real-time imbalanced datasets and apply appropriate techniques to improve model fairness and robustness.
- Deploy models in production environments and ensure continuous monitoring, improvement, and updates based on feedback.
- Collaborate with cross-functional teams to align ML solutions with business goals.
- Utilize fundamental statistical knowledge and mathematical principles to ensure the reliability of models.
- Bring in the latest advancements in ML and AI to drive innovation.

Required Skills:
- 4-5 years of hands-on experience in Machine Learning and Deep Learning.
- Strong expertise in feature engineering, data exploration, and data preprocessing.
- Experience with imbalanced datasets and techniques to improve model generalization.
- Proficiency in Python, TensorFlow, Scikit-learn, and other ML frameworks.
- Strong mathematical and statistical knowledge with problem-solving skills.
- Ability to optimize models for high accuracy and performance in real-world scenarios.

Preferred Skills:
- Experience with Big Data technologies (Hadoop, Spark, etc.).
- Familiarity with containerization and orchestration tools (Docker, Kubernetes).
- Experience in automating ML pipelines with MLOps practices.
- Experience in model deployment using cloud platforms (AWS, GCP, Azure) or MLOps tools.

Posted 1 month ago

Apply

4.0 - 5.0 years

6 - 8 Lacs

Coimbatore

Work from Office

Key Responsibilities:
- Conduct feature engineering, data analysis, and data exploration to extract valuable insights.
- Develop and optimize Machine Learning models to achieve high accuracy and performance.
- Design and implement Deep Learning models, including Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN), and Reinforcement Learning techniques.
- Handle real-time imbalanced datasets and apply appropriate techniques to improve model fairness and robustness.
- Deploy models in production environments and ensure continuous monitoring, improvement, and updates based on feedback.
- Collaborate with cross-functional teams to align ML solutions with business goals.
- Utilize fundamental statistical knowledge and mathematical principles to ensure the reliability of models.
- Bring in the latest advancements in ML and AI to drive innovation.

Requirements:
- 4-5 years of hands-on experience in Machine Learning and Deep Learning.
- Strong expertise in feature engineering, data exploration, and data preprocessing.
- Experience with imbalanced datasets and techniques to improve model generalization.
- Proficiency in Python, TensorFlow, Scikit-learn, and other ML frameworks.
- Strong mathematical and statistical knowledge with problem-solving skills.
- Ability to optimize models for high accuracy and performance in real-world scenarios.

Preferred Qualifications:
- Experience with Big Data technologies (Hadoop, Spark, etc.).
- Familiarity with containerization and orchestration tools (Docker, Kubernetes).
- Experience in automating ML pipelines with MLOps practices.
- Experience in model deployment using cloud platforms (AWS, GCP, Azure) or MLOps tools.

Posted 1 month ago

Apply

4.0 - 5.0 years

6 - 7 Lacs

Ahmedabad

Work from Office

Responsibilities:
- Conduct feature engineering, data analysis, and data exploration to extract valuable insights.
- Develop and optimize Machine Learning models to achieve high accuracy and performance.
- Design and implement Deep Learning models, including Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN), and Reinforcement Learning techniques.
- Handle real-time imbalanced datasets and apply appropriate techniques to improve model fairness and robustness.
- Deploy models in production environments and ensure continuous monitoring, improvement, and updates based on feedback.
- Collaborate with cross-functional teams to align ML solutions with business goals.
- Utilize fundamental statistical knowledge and mathematical principles to ensure the reliability of models.
- Bring in the latest advancements in ML and AI to drive innovation.

Required Skills:
- 4-5 years of hands-on experience in Machine Learning and Deep Learning.
- Strong expertise in feature engineering, data exploration, and data preprocessing.
- Experience with imbalanced datasets and techniques to improve model generalization.
- Proficiency in Python, TensorFlow, Scikit-learn, and other ML frameworks.
- Strong mathematical and statistical knowledge with problem-solving skills.
- Ability to optimize models for high accuracy and performance in real-world scenarios.

Preferred Skills:
- Experience with Big Data technologies (Hadoop, Spark, etc.).
- Familiarity with containerization and orchestration tools (Docker, Kubernetes).
- Experience in automating ML pipelines with MLOps practices.
- Experience in model deployment using cloud platforms (AWS, GCP, Azure) or MLOps tools.

Posted 1 month ago

Apply

4.0 - 5.0 years

6 - 7 Lacs

Chandigarh

Work from Office

Key Responsibilities:
- Conduct feature engineering, data analysis, and data exploration to extract valuable insights.
- Develop and optimize Machine Learning models to achieve high accuracy and performance.
- Design and implement Deep Learning models, including Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN), and Reinforcement Learning techniques.
- Handle real-time imbalanced datasets and apply appropriate techniques to improve model fairness and robustness.
- Deploy models in production environments and ensure continuous monitoring, improvement, and updates based on feedback.
- Collaborate with cross-functional teams to align ML solutions with business goals.
- Utilize fundamental statistical knowledge and mathematical principles to ensure the reliability of models.
- Bring in the latest advancements in ML and AI to drive innovation.

Requirements:
- 4-5 years of hands-on experience in Machine Learning and Deep Learning.
- Strong expertise in feature engineering, data exploration, and data preprocessing.
- Experience with imbalanced datasets and techniques to improve model generalization.
- Proficiency in Python, TensorFlow, Scikit-learn, and other ML frameworks.
- Strong mathematical and statistical knowledge with problem-solving skills.
- Ability to optimize models for high accuracy and performance in real-world scenarios.

Preferred Qualifications:
- Experience with Big Data technologies (Hadoop, Spark, etc.).
- Familiarity with containerization and orchestration tools (Docker, Kubernetes).
- Experience in automating ML pipelines with MLOps practices.
- Experience in model deployment using cloud platforms (AWS, GCP, Azure) or MLOps tools.

Posted 1 month ago

Apply

4.0 - 5.0 years

7 - 9 Lacs

Mumbai

Work from Office

Responsibilities:
- Conduct feature engineering, data analysis, and data exploration to extract valuable insights.
- Develop and optimize Machine Learning models to achieve high accuracy and performance.
- Design and implement Deep Learning models, including Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN), and Reinforcement Learning techniques.
- Handle real-time imbalanced datasets and apply appropriate techniques to improve model fairness and robustness.
- Deploy models in production environments and ensure continuous monitoring, improvement, and updates based on feedback.
- Collaborate with cross-functional teams to align ML solutions with business goals.
- Utilize fundamental statistical knowledge and mathematical principles to ensure the reliability of models.
- Bring in the latest advancements in ML and AI to drive innovation.

Required Skills:
- 4-5 years of hands-on experience in Machine Learning and Deep Learning.
- Strong expertise in feature engineering, data exploration, and data preprocessing.
- Experience with imbalanced datasets and techniques to improve model generalization.
- Proficiency in Python, TensorFlow, Scikit-learn, and other ML frameworks.
- Strong mathematical and statistical knowledge with problem-solving skills.
- Ability to optimize models for high accuracy and performance in real-world scenarios.

Preferred Skills:
- Experience with Big Data technologies (Hadoop, Spark, etc.).
- Familiarity with containerization and orchestration tools (Docker, Kubernetes).
- Experience in automating ML pipelines with MLOps practices.
- Experience in model deployment using cloud platforms (AWS, GCP, Azure) or MLOps tools.

Posted 1 month ago

Apply

4.0 - 5.0 years

6 - 7 Lacs

Jaipur

Work from Office

Key Responsibilities:
- Conduct feature engineering, data analysis, and data exploration to extract valuable insights.
- Develop and optimize Machine Learning models to achieve high accuracy and performance.
- Design and implement Deep Learning models, including Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN), and Reinforcement Learning techniques.
- Handle real-time imbalanced datasets and apply appropriate techniques to improve model fairness and robustness.
- Deploy models in production environments and ensure continuous monitoring, improvement, and updates based on feedback.
- Collaborate with cross-functional teams to align ML solutions with business goals.
- Utilize fundamental statistical knowledge and mathematical principles to ensure the reliability of models.
- Bring in the latest advancements in ML and AI to drive innovation.

Requirements:
- 4-5 years of hands-on experience in Machine Learning and Deep Learning.
- Strong expertise in feature engineering, data exploration, and data preprocessing.
- Experience with imbalanced datasets and techniques to improve model generalization.
- Proficiency in Python, TensorFlow, Scikit-learn, and other ML frameworks.
- Strong mathematical and statistical knowledge with problem-solving skills.
- Ability to optimize models for high accuracy and performance in real-world scenarios.

Preferred Qualifications:
- Experience with Big Data technologies (Hadoop, Spark, etc.).
- Familiarity with containerization and orchestration tools (Docker, Kubernetes).
- Experience in automating ML pipelines with MLOps practices.
- Experience in model deployment using cloud platforms (AWS, GCP, Azure) or MLOps tools.

Posted 1 month ago

Apply

4.0 - 5.0 years

6 - 8 Lacs

Chandigarh

Work from Office

Responsibilities:
- Conduct feature engineering, data analysis, and data exploration to extract valuable insights.
- Develop and optimize Machine Learning models to achieve high accuracy and performance.
- Design and implement Deep Learning models, including Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN), and Reinforcement Learning techniques.
- Handle real-time imbalanced datasets and apply appropriate techniques to improve model fairness and robustness.
- Deploy models in production environments and ensure continuous monitoring, improvement, and updates based on feedback.
- Collaborate with cross-functional teams to align ML solutions with business goals.
- Utilize fundamental statistical knowledge and mathematical principles to ensure the reliability of models.
- Bring in the latest advancements in ML and AI to drive innovation.

Required Skills:
- 4-5 years of hands-on experience in Machine Learning and Deep Learning.
- Strong expertise in feature engineering, data exploration, and data preprocessing.
- Experience with imbalanced datasets and techniques to improve model generalization.
- Proficiency in Python, TensorFlow, Scikit-learn, and other ML frameworks.
- Strong mathematical and statistical knowledge with problem-solving skills.
- Ability to optimize models for high accuracy and performance in real-world scenarios.

Preferred Skills:
- Experience with Big Data technologies (Hadoop, Spark, etc.).
- Familiarity with containerization and orchestration tools (Docker, Kubernetes).
- Experience in automating ML pipelines with MLOps practices.
- Experience in model deployment using cloud platforms (AWS, GCP, Azure) or MLOps tools.

Posted 1 month ago

Apply

7.0 - 12.0 years

9 - 14 Lacs

Bengaluru

Work from Office

Location: Bangalore/Hyderabad/Pune
Experience level: 7+ Years

About the Role: We are seeking a highly skilled Snowflake Developer to join our team in Bangalore. The ideal candidate will have extensive experience in designing, implementing, and managing Snowflake-based data solutions. This role involves developing data architectures and ensuring the effective use of Snowflake to drive business insights and innovation.

Key Responsibilities:
- Design and implement scalable, efficient, and secure Snowflake solutions to meet business requirements.
- Develop data architecture frameworks, standards, and principles, including modeling, metadata, security, and reference data.
- Implement Snowflake-based data warehouses, data lakes, and data integration solutions.
- Manage data ingestion, transformation, and loading processes to ensure data quality and performance.
- Collaborate with business stakeholders and IT teams to develop data strategies and ensure alignment with business goals.
- Drive continuous improvement by leveraging the latest Snowflake features and industry trends.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, Data Science, or a related field.
- 8+ years of experience in data architecture, data engineering, or a related field.
- Extensive experience with Snowflake, including designing and implementing Snowflake-based solutions.
- Must have exposure to working in Airflow.
- Proven track record of contributing to data projects and working in complex environments.
- Familiarity with cloud platforms (e.g., AWS, GCP) and their data services.
- Snowflake certification (e.g., SnowPro Core, SnowPro Advanced) is a plus.
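A core pattern in the ingestion and loading work this role describes is the upsert, which Snowflake expresses with a `MERGE INTO` statement (update matched rows, insert the rest). The sketch below emulates that semantics over in-memory dicts purely to illustrate the logic; a real pipeline would issue the SQL against Snowflake instead:

```python
def merge_upsert(target: dict, staged: list, key: str) -> dict:
    """Emulate MERGE semantics: for each staged row, update the target row
    with the same key if it exists, otherwise insert it as a new row."""
    for row in staged:
        existing = target.get(row[key], {})
        target[row[key]] = {**existing, **row}  # staged values win on conflict
    return target

# Hypothetical warehouse table keyed by customer id.
warehouse = {"C1": {"id": "C1", "plan": "basic"}}
incoming = [
    {"id": "C1", "plan": "premium"},  # matched  -> update
    {"id": "C2", "plan": "basic"},    # unmatched -> insert
]
merge_upsert(warehouse, incoming, key="id")
print(len(warehouse), warehouse["C1"]["plan"])  # 2 premium
```

The equivalent Snowflake statement pairs `WHEN MATCHED THEN UPDATE` with `WHEN NOT MATCHED THEN INSERT`, typically driven from a staging table loaded via COPY or Snowpipe.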

Posted 1 month ago

Apply

4.0 - 5.0 years

6 - 8 Lacs

Jaipur

Work from Office

Responsibilities:
- Conduct feature engineering, data analysis, and data exploration to extract valuable insights.
- Develop and optimize Machine Learning models to achieve high accuracy and performance.
- Design and implement Deep Learning models, including Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN), and Reinforcement Learning techniques.
- Handle real-time imbalanced datasets and apply appropriate techniques to improve model fairness and robustness.
- Deploy models in production environments and ensure continuous monitoring, improvement, and updates based on feedback.
- Collaborate with cross-functional teams to align ML solutions with business goals.
- Utilize fundamental statistical knowledge and mathematical principles to ensure the reliability of models.
- Bring in the latest advancements in ML and AI to drive innovation.

Required Skills:
- 4-5 years of hands-on experience in Machine Learning and Deep Learning.
- Strong expertise in feature engineering, data exploration, and data preprocessing.
- Experience with imbalanced datasets and techniques to improve model generalization.
- Proficiency in Python, TensorFlow, Scikit-learn, and other ML frameworks.
- Strong mathematical and statistical knowledge with problem-solving skills.
- Ability to optimize models for high accuracy and performance in real-world scenarios.

Preferred Skills:
- Experience with Big Data technologies (Hadoop, Spark, etc.).
- Familiarity with containerization and orchestration tools (Docker, Kubernetes).
- Experience in automating ML pipelines with MLOps practices.
- Experience in model deployment using cloud platforms (AWS, GCP, Azure) or MLOps tools.

Posted 1 month ago

Apply

8.0 - 10.0 years

10 - 12 Lacs

Bengaluru

Work from Office

Location: Bangalore/Hyderabad/Pune
Experience level: 8+ Years

About the Role: We are looking for a technical and hands-on Lead Data Engineer to help drive the modernization of our data transformation workflows. We currently rely on legacy SQL scripts orchestrated via Airflow, and we are transitioning to a modular, scalable, CI/CD-driven DBT-based data platform. The ideal candidate has deep experience with DBT and modern data stack design, and has previously led similar migrations, improving code quality, lineage visibility, performance, and engineering best practices.

Key Responsibilities:
- Lead the migration of legacy SQL-based ETL logic to DBT-based transformations.
- Design and implement a scalable, modular DBT architecture (models, macros, packages).
- Audit and refactor legacy SQL for clarity, efficiency, and modularity.
- Improve CI/CD pipelines for DBT: automated testing, deployment, and code quality enforcement.
- Collaborate with data analysts, platform engineers, and business stakeholders to understand current gaps and define future data pipelines.
- Own the Airflow orchestration redesign where needed (e.g., DBT Cloud/API hooks or airflow-dbt integration).
- Define and enforce coding standards, review processes, and documentation practices.
- Coach junior data engineers on DBT and SQL best practices.
- Improve lineage and impact analysis using DBT's built-in tools and metadata.

Must-Have Qualifications:
- 8+ years of experience in data engineering.
- Proven success in migrating legacy SQL to DBT, with visible results.
- Deep understanding of DBT best practices, including model layering, Jinja templating, testing, and packages.
- Proficiency in SQL performance tuning, modular SQL design, and query optimization.
- Experience with Airflow (Composer, MWAA), including DAG refactoring and task orchestration.
- Hands-on experience with modern data stacks (e.g., Snowflake, BigQuery).
- Familiarity with data testing and CI/CD for analytics workflows.
- Strong communication and leadership skills; comfortable working cross-functionally.

Nice-to-Have:
- Experience with DBT Cloud or DBT Core integrations with Airflow.
- Familiarity with data governance and lineage tools (e.g., dbt docs, Alation).
- Exposure to Python (for custom Airflow operators/macros or utilities).
- Previous experience mentoring teams through modern data stack transitions.
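The lineage and migration work described above rests on the fact that DBT resolves `ref()` calls between models into a dependency DAG and runs models in topological order. A minimal pure-Python sketch of that resolution, using the standard library's `graphlib` (the model names are hypothetical):

```python
from graphlib import TopologicalSorter

# Each model maps to the models it ref()s, as DBT would infer from the SQL.
# Names below are illustrative staging/mart models, not a real project.
model_refs = {
    "stg_orders": [],
    "stg_customers": [],
    "fct_orders": ["stg_orders", "stg_customers"],
    "customer_ltv": ["fct_orders"],
}

# static_order() yields models so every dependency runs before its dependents.
run_order = list(TopologicalSorter(model_refs).static_order())
print(run_order)  # staging models first, marts last
```

This ordering is what lets a migration proceed layer by layer (staging, then intermediate, then marts) and what powers DBT's impact analysis: any model downstream of a changed node in this DAG is affected.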

Posted 1 month ago

Apply

4.0 - 6.0 years

6 - 8 Lacs

Bengaluru

Work from Office

Role: Snowflake Developer with DBT
Location: Bangalore/Hyderabad/Pune

About the Role: We are seeking a Snowflake Developer with a deep understanding of DBT (data build tool) to help us design, build, and maintain scalable data pipelines. The ideal candidate will have hands-on experience with Snowflake and DBT, and a passion for optimizing data processes for performance and efficiency.

Responsibilities:
- Design, develop, and optimize Snowflake data models and DBT transformations.
- Build and maintain CI/CD pipelines for automated DBT workflows.
- Implement best practices for data pipeline performance, scalability, and efficiency in Snowflake.
- Contribute to the DBT community or develop internal tools/plugins to enhance the workflow.
- Troubleshoot and resolve complex data pipeline issues using DBT and Snowflake.

Qualifications:
- Minimum 4+ years of experience with Snowflake.
- At least 1 year of experience with DBT.
- Extensive experience with DBT, including setting up CI/CD pipelines, optimizing performance, and contributing to the DBT community or plugins.
- Strong skills in SQL, data modelling, and ELT pipelines.
- Excellent problem-solving skills and the ability to collaborate effectively in a team environment.

Posted 1 month ago

Apply