13.0 - 18.0 years
14 - 18 Lacs
Gurugram
Work from Office
Who are we? In one sentence
We are seeking a Java Full Stack Architect & People Manager with strong technical depth and leadership capabilities to lead our Java Modernization projects. The ideal candidate will possess a robust understanding of Java Full Stack, Databases and Cloud-based solution delivery, combined with proven experience in managing high-performing technical teams. This role requires a visionary who can translate business challenges into scalable distributed solutions while nurturing talent and fostering innovation.
What will your job look like?
- Lead the design and implementation of Java Full Stack solutions covering frontend, backend, batch processes and interface integrations across business use cases.
- Translate business requirements into technical architectures using Azure/AWS cloud platforms.
- Manage and mentor a multidisciplinary team of Engineers, Leads and Specialists.
- Drive adoption of Databricks and Python in addition to Java-based frameworks within solution development.
- Collaborate closely with product owners, data engineering teams, and customer IT & business stakeholders.
- Ensure high standards in code quality, system performance, and model governance.
- Track industry trends and continuously improve the technology stack, adopting newer trends and showcasing productization, automation and innovative ideas.
- Oversee the end-to-end lifecycle: use case identification, PoC, MVP, production deployment, and support.
- Define and monitor KPIs to measure team performance and project impact.
All you need is...
- 13+ years of overall IT experience with a strong background in the Telecom domain (preferred).
- Proven hands-on experience with Java Full Stack technologies and cloud databases.
- Strong understanding of design principles and patterns for distributed applications, on-premises as well as on cloud.
- Demonstrated experience in building and deploying on Azure or AWS via CI/CD practices.
- Strong expertise in Java, databases, Python, Kafka and Linux scripting.
- In-depth understanding of cloud-native architecture, microservices, and data pipelines.
- Solid people management experience: team building, mentoring, performance reviews.
- Strong analytical thinking and communication skills.
- Ability to stay hands-on with coding and reviews during development and production support.
Good to Have Skills:
- Familiarity with Databricks and PySpark
- Familiarity with Snowflake
Why you will love this job:
- You will be challenged with leading and mentoring several development teams and projects.
- You will join a strong team with lots of activities, technologies, business challenges and a progression path.
- You will have the opportunity to work with the industry's most advanced technologies.
Posted 1 month ago
3.0 - 8.0 years
5 - 11 Lacs
Pune, Mumbai (All Areas)
Hybrid
Overview: TresVista is looking to hire an Associate in its Data Intelligence Group, who will be primarily responsible for managing clients as well as monitoring and executing projects for both clients and internal teams. The Associate may directly manage a team of up to 3-4 Data Engineers and Analysts across multiple data engineering efforts for our clients with varied technologies. They would be joining the current team of 70+ members, which is a mix of Data Engineers, Data Visualization Experts, and Data Scientists.
Roles and Responsibilities:
- Interacting with clients (internal or external) to understand their problems and working on solutions that address their needs
- Driving projects and working closely with a team of individuals to ensure proper requirements are identified, useful user stories are created, and work is planned logically and efficiently to deliver solutions that support changing business requirements
- Managing the various activities within the team, strategizing how to approach tasks, creating timelines and goals, and distributing information and tasks to team members
- Conducting meetings, documenting, and communicating findings effectively to clients, management and cross-functional teams
- Creating ad-hoc reports for multiple internal requests across departments
- Automating processes using data transformation tools
Prerequisites:
- Strong analytical, problem-solving, interpersonal, and communication skills
- Advanced knowledge of DBMS and data modelling, along with advanced querying capabilities using SQL
- Working experience in cloud technologies (GCP/AWS/Azure/Snowflake)
- Prior experience in building and deploying ETL/ELT pipelines using CI/CD and orchestration tools such as Apache Airflow, GCP Workflows, etc.
- Proficiency in Python for building ETL/ELT processes and data modeling
- Proficiency in reporting and dashboard creation using Power BI/Tableau
- Knowledge of building ML models and leveraging Gen AI for modern architectures
- Experience working with version control platforms like GitHub
- Familiarity with IaC tools like Terraform and Ansible is good to have
- Stakeholder management and client communication experience would be preferred
- Experience in the Financial Services domain will be an added plus
- Experience in Machine Learning tools and techniques will be good to have
Experience: 3-7 years
Education: BTech/MTech/BE/ME/MBA in Analytics
Compensation: The compensation structure will be as per industry standards
Posted 1 month ago
8.0 - 13.0 years
15 - 30 Lacs
Bengaluru
Work from Office
Role: Senior Data Engineer
Location: Bangalore - Hybrid
Experience: 10+ Years
Job Requirements:
ETL & Data Pipelines:
- Experience building and maintaining ETL pipelines with large data sets using AWS Glue, EMR, Kinesis, Kafka, and CloudWatch
Programming & Data Processing:
- Strong Python development experience with proficiency in Spark or PySpark
- Experience in using APIs
Database Management:
- Strong skills in writing SQL queries and performance tuning in AWS Redshift
- Proficient with other industry-leading RDBMS such as MS SQL Server and PostgreSQL
AWS Services:
- Proficient in working with AWS services including AWS Lambda, EventBridge, Step Functions, SNS, SQS, S3, and ML models
Interested candidates can share their resume at Neesha1@damcogroup.com
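For illustration only, a minimal PySpark sketch of the kind of ETL work this role describes: reading raw data from S3, aggregating, and writing a curated Parquet output. The bucket names, paths, and columns are assumptions, not part of the posting.

```python
from pyspark.sql import SparkSession, functions as F

# Hypothetical example: aggregate completed orders into daily totals.
spark = SparkSession.builder.appName("daily-orders-etl").getOrCreate()

# Source path and schema are assumed for illustration.
raw = spark.read.json("s3://example-raw-bucket/orders/")

daily_totals = (
    raw.filter(F.col("status") == "COMPLETED")
       .groupBy("customer_id", F.to_date("created_at").alias("order_date"))
       .agg(F.sum("amount").alias("total_amount"),
            F.count("*").alias("order_count"))
)

# Write partitioned Parquet that a warehouse such as Redshift Spectrum could query.
daily_totals.write.mode("overwrite").partitionBy("order_date") \
    .parquet("s3://example-curated-bucket/daily_order_totals/")
```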
Posted 1 month ago
2.0 - 4.0 years
10 - 18 Lacs
Bengaluru
Work from Office
Role & responsibilities:
- Design and Build Data Infrastructure: Develop scalable data pipelines and data lake/warehouse solutions for real-time and batch data using cloud and open-source tools.
- Develop & Automate Data Workflows: Create Python-based ETL/ELT processes for data ingestion, validation, integration, and transformation across multiple sources.
- Ensure Data Quality & Governance: Implement monitoring systems, resolve data quality issues, and enforce data governance and security best practices.
- Collaborate & Mentor: Work with cross-functional teams to deliver data solutions, and mentor junior engineers as the team grows.
- Explore New Tech: Research and implement emerging tools and technologies to improve system performance and scalability.
Posted 1 month ago
8.0 - 13.0 years
35 - 40 Lacs
Bengaluru
Remote
Role & responsibilities:
We are looking for an MLOps/ML Engineer with Dataiku DSS platform experience for a permanent, remote position with an MNC company.
Preferred candidate profile:
We are seeking a skilled MLOps/ML Engineer to serve as our subject matter expert for Dataiku DSS. In this pivotal role, you will manage and scale our end-to-end machine learning operations, all of which are built on the Dataiku platform. Key responsibilities include designing automated data pipelines, deploying models as production APIs, ensuring the reliability of scheduled jobs, and championing platform best practices. Extensive, proven experience with Dataiku is mandatory.
- Data Pipeline Development: Design and implement Extract, Transform, Load (ETL) processes to collect, process, and analyze data from diverse sources.
- Workflow Optimization: Develop, configure, and optimize Dataiku DSS workflows to streamline data processing and machine learning operations.
- Integration: Integrate Dataiku DSS with cloud platforms (e.g., AWS, Azure, Google Cloud Platform) and big data technologies such as Snowflake, Hadoop, and Spark.
- AI/ML Model Development & Implementation: Implement and optimize machine learning models within Dataiku for predictive analytics and AI-driven solutions.
- MLOps & DataOps: Deploy data pipelines and AI/ML models within the Dataiku platform.
- Dataiku Platform Management: Build, manage and support the Dataiku platform.
- Automation: Automate data workflows, monitor job performance, and ensure scalable execution.
- Customization: Develop and maintain custom Python/R scripts within Dataiku to enhance analytics capabilities.
- Dataiku Project Management: Develop and maintain custom Python/R scripts within Dataiku to enhance analytics capabilities.
Required Skills and Qualifications:
- Experience Level: 2 to 6 years of hands-on experience with the Dataiku DSS platform and data engineering.
- Educational Background: Bachelor's or Master's degree in Computer Science, Data Science, Information Technology, or a related field.
- Technical Proficiency: Experience with the Dataiku DSS platform; strong programming skills in Python and SQL; familiarity with cloud services (AWS, Azure, GCP) and big data technologies (Hadoop, Spark).
- Analytical Skills: Ability to analyze complex data sets and provide actionable insights.
- Problem-Solving: Strong troubleshooting skills to address and resolve issues in data workflows and models.
- Communication: Effective verbal and written communication skills to collaborate with team members and stakeholders.
Posted 1 month ago
3.0 - 6.0 years
9 - 13 Lacs
Bengaluru
Work from Office
About the job:
- As a Mid Databricks Engineer, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform.
- You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure.
- This role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges.
What You'll Do:
- Design and develop data processing pipelines and analytics solutions using Databricks.
- Architect scalable and efficient data models and storage solutions on the Databricks platform.
- Collaborate with architects and other teams to migrate the current solution to Databricks.
- Optimize performance and reliability of Databricks clusters and jobs to meet SLAs and business requirements.
- Use best practices for data governance, security, and compliance on the Databricks platform.
- Mentor junior engineers and provide technical guidance.
- Stay current with emerging technologies and trends in data engineering and analytics to drive continuous improvement.
You'll Be Expected To Have:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 3 to 6 years of overall experience and 2+ years of experience designing and implementing data solutions on the Databricks platform.
- Proficiency in programming languages such as Python, Scala, or SQL.
- Strong understanding of distributed computing principles and experience with big data technologies such as Apache Spark.
- Experience with cloud platforms such as AWS, Azure, or GCP, and their associated data services.
- Proven track record of delivering scalable and reliable data solutions in a fast-paced environment.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills with the ability to work effectively in cross-functional teams.
- Good to have: experience with containerization technologies such as Docker and Kubernetes.
- Knowledge of DevOps practices for automated deployment and monitoring of data pipelines.
Posted 1 month ago
8.0 - 13.0 years
20 - 25 Lacs
Chennai, Bengaluru, Delhi / NCR
Work from Office
Role: Senior Databricks Engineer / Databricks Technical Lead / Data Architect
Experience: 8-15 years
Location: Bangalore, Chennai, Delhi, Pune
Primary Roles And Responsibilities:
- Developing Modern Data Warehouse solutions using Databricks and the AWS/Azure stack.
- Ability to provide solutions that are forward-thinking in the data engineering and analytics space.
- Collaborate with DW/BI leads to understand new ETL pipeline development requirements.
- Triage issues to find gaps in existing pipelines and fix the issues.
- Work with the business to understand reporting-layer needs and develop data models to fulfill them.
- Help junior team members resolve issues and technical challenges.
- Drive technical discussions with client architects and team members.
- Orchestrate the data pipelines in a scheduler via Airflow (see the sketch after this listing).
Skills And Qualifications:
- Bachelor's and/or Master's degree in Computer Science or equivalent experience.
- Must have 6+ years of total IT experience and 3+ years' experience in Data Warehouse/ETL projects.
- Deep understanding of Star and Snowflake dimensional modelling.
- Strong knowledge of Data Management principles.
- Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture.
- Should have hands-on experience in SQL, Python and Spark (PySpark).
- Candidate must have experience in the AWS/Azure stack.
- Desirable to have ETL with batch and streaming (Kinesis).
- Experience in building ETL/data warehouse transformation processes.
- Experience with Apache Kafka for use with streaming data / event-based data.
- Experience with other open-source big data products, including Hadoop (Hive, Pig, Impala).
- Experience with open-source non-relational/NoSQL data repositories (MongoDB, Cassandra, Neo4j).
- Experience working with structured and unstructured data, including imaging & geospatial data.
- Experience working in a DevOps environment with tools such as Terraform, CircleCI, Git.
- Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning and troubleshooting.
- Databricks Certified Data Engineer Associate/Professional certification (desirable).
- Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects.
- Should have experience working in Agile methodology.
- Strong verbal and written communication skills.
- Strong analytical and problem-solving skills with a high attention to detail.
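Purely as a hedged illustration of the Airflow orchestration mentioned above, here is a minimal DAG that schedules a nightly two-step pipeline; the task names, callables, and schedule are assumptions, not taken from the posting.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

# Hypothetical callables standing in for real extract/transform steps.
def extract_orders(**_):
    print("extracting raw orders")

def transform_orders(**_):
    print("running Spark/Databricks transformation")

with DAG(
    dag_id="nightly_orders_pipeline",   # assumed DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_orders)
    transform = PythonOperator(task_id="transform", python_callable=transform_orders)

    extract >> transform  # extract must finish before transform starts
```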
Posted 1 month ago
6.0 - 9.0 years
9 - 13 Lacs
Kolkata
Work from Office
Experience: 6+ years as an Azure Data Engineer, including at least one E2E implementation in Microsoft Fabric.
Responsibilities:
- Lead the design and implementation of Microsoft Fabric-centric data platforms and data warehouses.
- Develop and optimize ETL/ELT processes within the Microsoft Azure ecosystem, effectively utilizing relevant Fabric solutions.
- Ensure data integrity, quality, and governance throughout the Microsoft Fabric environment.
- Collaborate with stakeholders to translate business needs into actionable data solutions.
- Troubleshoot and optimize existing Fabric implementations for enhanced performance.
Skills:
- Solid foundational knowledge in data warehousing, ETL/ELT processes, and data modeling (dimensional, normalized).
- Design and implement scalable and efficient data pipelines using Data Factory (Data Pipeline, Data Flow Gen 2, etc.) in Fabric, PySpark notebooks, Spark SQL, and Python. This includes data ingestion, data transformation, and data loading processes.
- Experience ingesting data from SAP systems like SAP ECC/S4HANA/SAP BW will be a plus.
- Nice to have: ability to develop dashboards or reports using tools like Power BI.
Coding Fluency:
- Proficiency in SQL, Python, or other languages for data scripting, transformation, and automation.
Posted 1 month ago
2.0 - 7.0 years
20 - 25 Lacs
Bengaluru
Work from Office
Experience: 2+ years
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Hybrid (Bengaluru)
Must-have skills: AWS, Go Lang, Python
Requirements:
We are looking for a Backend Engineer to help us through the next level of technology changes needed to revolutionize healthcare for India. We are seeking individuals who can understand real-world scenarios and come up with scalable tech solutions for millions of patients to make healthcare accessible. The role comes with a good set of challenges to solve, and offers an opportunity to build new systems that will be rolled out at scale.
- You have 2 to 4 years or more of software development experience with expertise in designing and implementing high-performance web applications.
- Very strong understanding of and experience with any of Java, Scala, GoLang, Python.
- Experience writing optimized queries in relational databases like MySQL, Redshift/Postgres.
- Exposure to basic data engineering concepts like data pipelines, Hadoop or Spark.
- Write clean and testable code.
- You love to build platforms that enable other teams to build on top of.
Some of the challenges we solve include:
Clinical decision support:
- Early Detection: Digitally assist doctors in identifying high-risk patients for early intervention.
- Track & Advise: Analyze patients' vitals/test values across visits to assist doctors in personalizing chronic care.
- Risk Prevention: Assist doctors in monitoring the progression of chronic disease by drawing attention to additional symptoms and side effects.
EMR (Electronic Medical Records): Clinical software to write prescriptions and manage clinical records.
AI-powered features:
- Adapts to the doctor's practice: Learns from doctors' prescribing preferences and provides relevant auto-fill recommendations for faster prescriptions.
- Longitudinal patient journey: AI analyses the longitudinal journey of patients to assist doctors in early detection.
- Medical language processing: AI-driven automatic digitization of printed prescriptions and test reports.
Core platform:
- Pharma advertising platform for doctors at the moment of truth.
- Real-world evidence to generate market insights for B2B consumption.
Virtual Store: Online Pharmacy + Diagnostic solutions helping patients with one-click ordering.
Technologies we use:
- Distributed tech: Kafka, Elasticsearch
- Databases: MongoDB, RDS
- Cloud platform: AWS
- Languages: Go-lang, Python, PHP
- UI tech: React, React Native
- Caching: Redis
- Big Data: AWS Athena, Redshift
- APM: New Relic
Responsibilities:
- Develop well-testable and reusable services with structured, granular and well-commented code.
- Contribute in the areas of API building, data pipeline setup, and new tech initiatives needed for a core platform.
- Acclimate to new technologies and situations as per the company's demands and requirements, with the vision of providing the best customer experience.
- Meet expected deliverables and quality standards with every release.
- Collaborate with teams to design, develop, test and refine deliverables that meet the objectives.
- Perform code reviews and implement improvement plans.
Additional Responsibilities:
- Pitch in during the design and architectural-solution phases of business problems.
- Organize, lead and motivate the development team to meet expected timelines and quality standards across releases.
- Actively contribute to development process improvement plans.
- Assist peers through code reviews and juniors through mentoring.
Must-have Skills:
- Sound understanding of Computer Science fundamentals, including data structures and space and time complexity.
- Excellent problem-solving skills.
- Solid understanding of any of the modern object-oriented programming languages (like Java, Ruby or Python) and/or functional languages (like Scala, GoLang).
- Understanding of MPP (massively parallel processing) and frameworks like Spark.
- Experience working with databases (RDBMS - MySQL, Redshift, etc.; NoSQL - Couchbase/MongoDB/Cassandra, etc.).
- Experience working with open-source libraries and frameworks.
- Strong hold on versioning tools Git/Bitbucket.
Good-to-have Skills:
- Knowledge of microservices architecture.
- Experience working with Kafka.
- Experience with or exposure to ORM frameworks (like ActiveRecord, SQLAlchemy, etc.).
- Working knowledge of full-text search (like Elasticsearch, Solr, etc.).
Skills: AWS, Go Lang, Python
Posted 1 month ago
3.0 - 6.0 years
9 - 13 Lacs
Chennai
Remote
About the job:
- As a Mid Databricks Engineer, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform.
- You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure.
- This role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges.
What You'll Do:
- Design and develop data processing pipelines and analytics solutions using Databricks.
- Architect scalable and efficient data models and storage solutions on the Databricks platform.
- Collaborate with architects and other teams to migrate the current solution to Databricks.
- Optimize performance and reliability of Databricks clusters and jobs to meet SLAs and business requirements.
- Use best practices for data governance, security, and compliance on the Databricks platform.
- Mentor junior engineers and provide technical guidance.
- Stay current with emerging technologies and trends in data engineering and analytics to drive continuous improvement.
You'll Be Expected To Have:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 3 to 6 years of overall experience and 2+ years of experience designing and implementing data solutions on the Databricks platform.
- Proficiency in programming languages such as Python, Scala, or SQL.
- Strong understanding of distributed computing principles and experience with big data technologies such as Apache Spark.
- Experience with cloud platforms such as AWS, Azure, or GCP, and their associated data services.
- Proven track record of delivering scalable and reliable data solutions in a fast-paced environment.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills with the ability to work effectively in cross-functional teams.
- Good to have: experience with containerization technologies such as Docker and Kubernetes.
- Knowledge of DevOps practices for automated deployment and monitoring of data pipelines.
Posted 1 month ago
4.0 - 9.0 years
8 - 13 Lacs
Bengaluru
Work from Office
As a Mid Databricks Engineer, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform. You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure. This role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges.
What you'll do:
- Design and develop data processing pipelines and analytics solutions using Databricks.
- Architect scalable and efficient data models and storage solutions on the Databricks platform.
- Collaborate with architects and other teams to migrate the current solution to Databricks.
- Optimize performance and reliability of Databricks clusters and jobs to meet SLAs and business requirements.
- Use best practices for data governance, security, and compliance on the Databricks platform.
- Mentor junior engineers and provide technical guidance.
- Stay current with emerging technologies and trends in data engineering and analytics to drive continuous improvement.
You'll be expected to have:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5 to 8 years of overall experience and 2+ years of experience designing and implementing data solutions on the Databricks platform.
- Proficiency in programming languages such as Python, Scala, or SQL.
- Strong understanding of distributed computing principles and experience with big data technologies such as Apache Spark.
- Experience with cloud platforms such as AWS, Azure, or GCP, and their associated data services.
- Proven track record of delivering scalable and reliable data solutions in a fast-paced environment.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills with the ability to work effectively in cross-functional teams.
- Good to have: experience with containerization technologies such as Docker and Kubernetes.
- Knowledge of DevOps practices for automated deployment and monitoring of data pipelines.
Posted 1 month ago
8.0 - 10.0 years
9 - 13 Lacs
Bengaluru
Work from Office
Role Responsibilities:
- Design and implement data pipelines using MS Fabric.
- Develop data models to support business intelligence and analytics.
- Manage and optimize ETL processes for data extraction, transformation, and loading.
- Collaborate with cross-functional teams to gather and define data requirements.
- Ensure data quality and integrity in all data processes.
- Implement best practices for data management, storage, and processing.
- Conduct performance tuning for data storage and retrieval for enhanced efficiency.
- Generate and maintain documentation for data architecture and data flow.
- Participate in troubleshooting data-related issues and implement solutions.
- Monitor and optimize cloud-based solutions for scalability and resource efficiency.
- Evaluate emerging technologies and tools for potential incorporation in projects.
- Assist in designing data governance frameworks and policies.
- Provide technical guidance and support to junior data engineers.
- Participate in code reviews and ensure adherence to coding standards.
- Stay updated with industry trends and best practices in data engineering.
Qualifications:
- 8+ years of experience in data engineering roles.
- Strong expertise in MS Fabric and related technologies.
- Proficiency in SQL and relational database management systems.
- Experience with data warehousing solutions and data modeling.
- Hands-on experience in ETL tools and processes.
- Knowledge of cloud computing platforms (Azure, AWS, GCP).
- Familiarity with Python or similar programming languages.
- Ability to communicate complex concepts clearly to non-technical stakeholders.
- Experience in implementing data quality measures and data governance.
- Strong problem-solving skills and attention to detail.
- Ability to work independently in a remote environment.
- Experience with data visualization tools is a plus.
- Excellent analytical and organizational skills.
- Bachelor's degree in Computer Science, Engineering, or related field.
- Experience in Agile methodologies and project management.
Posted 1 month ago
10.0 - 12.0 years
19 - 25 Lacs
Mumbai
Remote
D365 Customer Insights (Data and Journeys) Technical Lead
Location: Remote
Experience: 9+ Years
Responsibilities:
- Lead and implement end-to-end technical solutions within D365 Customer Insights (Data and Journeys) to meet diverse client requirements, from initial design to deployment and support.
- Design and configure CI data unification processes, including data ingestion pipelines, matching and merging rules, and segmentation models to create comprehensive and actionable customer profiles.
- Demonstrate deep expertise in data quality management and ensuring data integrity.
- Proficiency in integrating data with CI-Data using various methods, including standard connectors, API calls, and custom ETL pipelines (Azure Data Factory, SSIS).
- Experience with different data sources and formats.
- Hands-on experience with the Power Platform, including Power Automate (flows for data integration and automation), Power Apps (for custom interfaces and extensions), and Dataverse (data modeling and storage).
- Strong skills in JavaScript, Power Fx, or other scripting languages relevant to CI customization and plugin development.
- Ability to develop, test, and deploy custom functionalities, workflows, and plugins to enhance CI capabilities.
- Proven experience in customer journey mapping, marketing automation, and campaign management using CI-Journeys.
- Ability to design and implement personalized customer journeys based on data insights and business objectives.
- Proactively troubleshoot and resolve technical issues within CI-Data and CI-Journeys environments, focusing on data integrity, performance optimization, and system stability.
- Conduct root cause analysis and implement effective solutions.
- Strong analytical and problem-solving skills, with experience leveraging CI data analytics to generate actionable business insights and recommendations.
- Ability to translate data into compelling narratives and visualizations.
- Excellent written and verbal communication skills for effectively liaising with stakeholders, clients, and internal teams.
- Ability to clearly articulate technical concepts to both technical and non-technical audiences.
- Provide technical mentorship and guidance to junior team members.
- Contribute to knowledge sharing and best practices within the team.
- Stay up-to-date with the latest D365 Customer Insights features, updates, and best practices.
- Proactively seek opportunities to expand your technical knowledge and skills.
Required Skills & Experience:
- 9+ years of experience working with D365 Customer Insights (Data and Journeys), with a strong focus on technical implementation and configuration.
- In-depth understanding of data unification, segmentation, and profile creation within CI-Data.
- Proficiency in data integration with CI-Data using connectors, API calls, and custom ETL pipelines.
- Hands-on experience with Power Platform tools (Power Automate, Power Apps, Dataverse).
- Strong skills in JavaScript, Power Fx, or other scripting languages used for CI customization and plugin development.
- Proven experience in customer journey mapping, marketing automation, and campaign management within CI-Journeys.
- Deep understanding of marketing and customer experience principles and their application within CI.
- Strong analytical and problem-solving skills, with experience using CI data analytics to drive business insights.
- Excellent written and verbal communication skills.
Posted 1 month ago
4.0 - 9.0 years
8 - 13 Lacs
Kolkata
Work from Office
Role: Senior Databricks Engineer
As a Mid Databricks Engineer, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform. You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure. This role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges.
What you'll do:
- Design and develop data processing pipelines and analytics solutions using Databricks.
- Architect scalable and efficient data models and storage solutions on the Databricks platform.
- Collaborate with architects and other teams to migrate the current solution to Databricks.
- Optimize performance and reliability of Databricks clusters and jobs to meet SLAs and business requirements.
- Use best practices for data governance, security, and compliance on the Databricks platform.
- Mentor junior engineers and provide technical guidance.
- Stay current with emerging technologies and trends in data engineering and analytics to drive continuous improvement.
You'll be expected to have:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5 to 8 years of overall experience and 2+ years of experience designing and implementing data solutions on the Databricks platform.
- Proficiency in programming languages such as Python, Scala, or SQL.
- Strong understanding of distributed computing principles and experience with big data technologies such as Apache Spark.
- Experience with cloud platforms such as AWS, Azure, or GCP, and their associated data services.
- Proven track record of delivering scalable and reliable data solutions in a fast-paced environment.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills with the ability to work effectively in cross-functional teams.
- Good to have: experience with containerization technologies such as Docker and Kubernetes.
- Knowledge of DevOps practices for automated deployment and monitoring of data pipelines.
Posted 1 month ago
10.0 - 12.0 years
19 - 25 Lacs
Chennai
Remote
Responsibilities:
- Lead and implement end-to-end technical solutions within D365 Customer Insights (Data and Journeys) to meet diverse client requirements, from initial design to deployment and support.
- Design and configure CI data unification processes, including data ingestion pipelines, matching and merging rules, and segmentation models to create comprehensive and actionable customer profiles.
- Demonstrate deep expertise in data quality management and ensuring data integrity.
- Proficiency in integrating data with CI-Data using various methods, including standard connectors, API calls, and custom ETL pipelines (Azure Data Factory, SSIS).
- Experience with different data sources and formats.
- Hands-on experience with the Power Platform, including Power Automate (flows for data integration and automation), Power Apps (for custom interfaces and extensions), and Dataverse (data modeling and storage).
- Strong skills in JavaScript, Power Fx, or other scripting languages relevant to CI customization and plugin development.
- Ability to develop, test, and deploy custom functionalities, workflows, and plugins to enhance CI capabilities.
- Proven experience in customer journey mapping, marketing automation, and campaign management using CI-Journeys.
- Ability to design and implement personalized customer journeys based on data insights and business objectives.
- Proactively troubleshoot and resolve technical issues within CI-Data and CI-Journeys environments, focusing on data integrity, performance optimization, and system stability.
- Conduct root cause analysis and implement effective solutions.
- Strong analytical and problem-solving skills, with experience leveraging CI data analytics to generate actionable business insights and recommendations.
- Ability to translate data into compelling narratives and visualizations.
- Excellent written and verbal communication skills for effectively liaising with stakeholders, clients, and internal teams.
- Ability to clearly articulate technical concepts to both technical and non-technical audiences.
- Provide technical mentorship and guidance to junior team members.
- Contribute to knowledge sharing and best practices within the team.
- Stay up-to-date with the latest D365 Customer Insights features, updates, and best practices.
- Proactively seek opportunities to expand your technical knowledge and skills.
Required Skills & Experience:
- 9+ years of experience working with D365 Customer Insights (Data and Journeys), with a strong focus on technical implementation and configuration.
- In-depth understanding of data unification, segmentation, and profile creation within CI-Data.
- Proficiency in data integration with CI-Data using connectors, API calls, and custom ETL pipelines.
- Hands-on experience with Power Platform tools (Power Automate, Power Apps, Dataverse).
- Strong skills in JavaScript, Power Fx, or other scripting languages used for CI customization and plugin development.
- Proven experience in customer journey mapping, marketing automation, and campaign management within CI-Journeys.
- Deep understanding of marketing and customer experience principles and their application within CI.
- Strong analytical and problem-solving skills, with experience using CI data analytics to drive business insights.
- Excellent written and verbal communication skills.
Posted 1 month ago
10.0 - 20.0 years
20 - 35 Lacs
Bengaluru
Remote
As the data engineering consultant, you should have the common traits and capabilities listed under Essential Requirements and meet many of the capabilities listed under Desirable Requirements.
Essential Requirements and Skills:
- 10+ years working with customers in the Data Analytics, Big Data and Data Warehousing field.
- 10+ years working with data modeling tools.
- 5+ years building data pipelines for large customers.
- 2+ years of experience working in the field of Artificial Intelligence that leverages Big Data, in a customer-facing services delivery role.
- 3+ years of experience in Big Data database design.
- A good understanding of LLMs, prompt engineering, fine-tuning and training.
- Strong knowledge of SQL, NoSQL and vector databases. Experience with popular enterprise databases such as SQL Server, MySQL, Postgres and Redis is a must. Additionally, experience with popular vector databases such as PGVector, Milvus and Elasticsearch is a requirement.
- Experience with major data warehousing providers such as Teradata.
- Experience with data lake tools such as Databricks, Snowflake and Starburst.
- Proven experience building data pipelines and ETLs for both data transformation and extraction from multiple data sources, and automating the deployment and execution of these pipelines.
- Experience with tools such as Apache Spark, Apache Hadoop, Informatica and similar data processing tools.
- Proficient knowledge of Python and SQL is a must.
- Proven experience building test procedures, ensuring the quality, reliability, performance, and scalability of the data pipelines.
- Ability to develop applications that expose RESTful APIs for data querying and ingestion.
- Experience preparing training data for Large Language Model ingestion and training (e.g. through vector databases).
- Experience integrating with RAG solutions and leveraging related tools such as Nvidia Guardrails; ability to define and implement metrics for RAG solutions (a minimal retrieval sketch follows this listing).
- Understanding of the typical AI tooling ecosystem, including knowledge and experience of Kubernetes, MLOps, LLMOps and AIOps tools.
- Ability to gain customer trust; ability to plan, organize and drive customer workshops.
- Good communication skills in English are a must.
- The ability to work in a highly efficient team using an Agile methodology such as Scrum or Kanban.
- Ability to have extended pairing sessions with customers, enabling knowledge transfer in complex domains.
- Ability to influence and interact with confidence and credibility at all levels within the Dell Technologies companies and with our customers, partners, and vendors.
- Experience working on project teams within a defined methodology while adhering to margin, planning and SOW requirements.
- Ability to be onsite during customer workshops and enablement sessions.
Desirable Requirements and Skills:
- Knowledge of widespread industry AI Studios and AI Workbenches is a plus.
- Experience building and using Information Retrieval (IR) frameworks to support LLM inferencing.
- Working knowledge of Linux is a plus.
- Knowledge of using MinIO is appreciated.
- Experience using Lean and iterative deployment methodologies.
- Working knowledge of cloud technologies is a plus.
- A university degree aligned to Data Engineering is a plus.
- Relevant industry certifications, e.g. Databricks Certified Data Engineer, Microsoft certifications, etc.
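As a hedged illustration of the RAG data-preparation work referenced above, the sketch below embeds a few documents and retrieves the closest match by cosine similarity. The model name and documents are assumptions; a production setup would store embeddings in a vector database such as PGVector or Milvus rather than in-memory NumPy arrays.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Hypothetical corpus standing in for customer documents.
documents = [
    "Databricks jobs can be scheduled through workflows.",
    "Teradata is a common enterprise data warehouse.",
    "Vector databases store embeddings for similarity search.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
doc_vectors = model.encode(documents, normalize_embeddings=True)

def retrieve(query: str, k: int = 1):
    """Return the k documents most similar to the query (cosine similarity)."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vectors @ q          # dot product equals cosine on normalized vectors
    top = np.argsort(scores)[::-1][:k]
    return [(documents[i], float(scores[i])) for i in top]

print(retrieve("Where do embeddings live?"))
```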
Posted 1 month ago
5.0 - 6.0 years
4 - 8 Lacs
Bengaluru
Work from Office
- Architect and optimize distributed data processing pipelines leveraging PySpark for high-throughput, low-latency workloads (a brief illustrative sketch follows this listing).
- Utilize the Apache big data stack (Hadoop, Hive, HDFS) to orchestrate ingestion, transformation, and governance of massive datasets.
- Engineer fault-tolerant, production-grade ETL frameworks ensuring seamless scalability and system resilience.
- Interface cross-functionally with Data Scientists and domain experts to translate analytical needs into performant data solutions.
- Enforce rigorous data quality controls and lineage mechanisms to uphold auditability and regulatory compliance.
- Contribute to core architectural design, implement clean and modular Python/Java code, and drive performance benchmarking at scale.
Required Skills:
- 5-7 years of experience.
- Strong hands-on experience with PySpark for distributed data processing.
- Deep understanding of the Apache ecosystem (Hadoop, Hive, Spark, HDFS, etc.).
- Solid grasp of data warehousing, ETL principles, and data modeling.
- Experience working with large-scale datasets and performance optimization.
- Familiarity with SQL and NoSQL databases.
- Proficiency in Python and basic to intermediate knowledge of Java.
- Experience in using version control tools like Git and CI/CD pipelines.
Nice-to-Have Skills:
- Working experience with Apache NiFi for data flow orchestration.
- Experience in building real-time streaming data pipelines.
- Knowledge of cloud platforms like AWS, Azure, or GCP.
- Familiarity with containerization tools like Docker or orchestration tools like Kubernetes.
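A minimal, assumption-laden sketch of the PySpark work described above: reading a Hive table, applying a basic data quality rule, and writing the result back to HDFS. The database, table, and column names are invented for illustration only.

```python
from pyspark.sql import SparkSession, functions as F

# Hive support lets Spark read managed tables directly; table and path names are hypothetical.
spark = (SparkSession.builder
         .appName("claims-quality-check")
         .enableHiveSupport()
         .getOrCreate())

claims = spark.table("staging.insurance_claims")

# Basic quality rule: drop rows missing a claim id or with negative amounts,
# and keep a count of rejected rows for lineage/audit reporting.
valid = claims.filter(F.col("claim_id").isNotNull() & (F.col("amount") >= 0))
rejected_count = claims.count() - valid.count()
print(f"rejected {rejected_count} rows")

valid.write.mode("overwrite").parquet("hdfs:///warehouse/curated/insurance_claims/")
```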
Posted 1 month ago
10.0 - 12.0 years
32 - 37 Lacs
Kolkata
Remote
Responsibilities:
- Lead and implement end-to-end technical solutions within D365 Customer Insights (Data and Journeys) to meet diverse client requirements, from initial design to deployment and support.
- Design and configure CI data unification processes, including data ingestion pipelines, matching and merging rules, and segmentation models to create comprehensive and actionable customer profiles.
- Demonstrate deep expertise in data quality management and ensuring data integrity.
- Proficiency in integrating data with CI-Data using various methods, including standard connectors, API calls, and custom ETL pipelines (Azure Data Factory, SSIS).
- Experience with different data sources and formats.
- Hands-on experience with the Power Platform, including Power Automate (flows for data integration and automation), Power Apps (for custom interfaces and extensions), and Dataverse (data modeling and storage).
- Strong skills in JavaScript, Power Fx, or other scripting languages relevant to CI customization and plugin development.
- Ability to develop, test, and deploy custom functionalities, workflows, and plugins to enhance CI capabilities.
- Proven experience in customer journey mapping, marketing automation, and campaign management using CI-Journeys.
- Ability to design and implement personalized customer journeys based on data insights and business objectives.
- Proactively troubleshoot and resolve technical issues within CI-Data and CI-Journeys environments, focusing on data integrity, performance optimization, and system stability.
- Conduct root cause analysis and implement effective solutions.
- Strong analytical and problem-solving skills, with experience leveraging CI data analytics to generate actionable business insights and recommendations.
- Ability to translate data into compelling narratives and visualizations.
- Excellent written and verbal communication skills for effectively liaising with stakeholders, clients, and internal teams.
- Ability to clearly articulate technical concepts to both technical and non-technical audiences.
- Provide technical mentorship and guidance to junior team members.
- Contribute to knowledge sharing and best practices within the team.
- Stay up-to-date with the latest D365 Customer Insights features, updates, and best practices.
- Proactively seek opportunities to expand your technical knowledge and skills.
Required Skills & Experience:
- 9+ years of experience working with D365 Customer Insights (Data and Journeys), with a strong focus on technical implementation and configuration.
- In-depth understanding of data unification, segmentation, and profile creation within CI-Data.
- Proficiency in data integration with CI-Data using connectors, API calls, and custom ETL pipelines.
- Hands-on experience with Power Platform tools (Power Automate, Power Apps, Dataverse).
- Strong skills in JavaScript, Power Fx, or other scripting languages used for CI customization and plugin development.
- Proven experience in customer journey mapping, marketing automation, and campaign management within CI-Journeys.
- Deep understanding of marketing and customer experience principles and their application within CI.
- Strong analytical and problem-solving skills, with experience using CI data analytics to drive business insights.
- Excellent written and verbal communication skills.
Posted 1 month ago
6.0 - 9.0 years
9 - 13 Lacs
Mumbai
Work from Office
Experience: 6+ years as an Azure Data Engineer, including at least one E2E implementation in Microsoft Fabric.
Responsibilities:
- Lead the design and implementation of Microsoft Fabric-centric data platforms and data warehouses.
- Develop and optimize ETL/ELT processes within the Microsoft Azure ecosystem, effectively utilizing relevant Fabric solutions.
- Ensure data integrity, quality, and governance throughout the Microsoft Fabric environment.
- Collaborate with stakeholders to translate business needs into actionable data solutions.
- Troubleshoot and optimize existing Fabric implementations for enhanced performance.
Skills:
- Solid foundational knowledge in data warehousing, ETL/ELT processes, and data modeling (dimensional, normalized).
- Design and implement scalable and efficient data pipelines using Data Factory (Data Pipeline, Data Flow Gen 2, etc.) in Fabric, PySpark notebooks, Spark SQL, and Python. This includes data ingestion, data transformation, and data loading processes.
- Experience ingesting data from SAP systems like SAP ECC/S4HANA/SAP BW will be a plus.
- Nice to have: ability to develop dashboards or reports using tools like Power BI.
Coding Fluency:
- Proficiency in SQL, Python, or other languages for data scripting, transformation, and automation.
Posted 1 month ago
4.0 - 6.0 years
13 - 17 Lacs
Mumbai
Work from Office
About the Role:
We are seeking a highly skilled and passionate Senior Data Scientist to join our growing AI/ML team. In this role, you will play a crucial part in developing cutting-edge AI solutions within the healthcare domain. You will focus on building and refining sophisticated Large Language Models (LLMs) to address critical challenges and improve healthcare outcomes.
Key Responsibilities:
- Develop and refine LLMs tailored for specific healthcare applications (e.g., medical diagnosis, drug discovery, patient care).
- Design and implement robust data pipelines for collecting, cleaning, and preparing high-quality healthcare datasets for LLM training.
- Conduct thorough experimentation with different LLM architectures, hyperparameters, and training techniques.
- Fine-tune pre-trained LLMs on specific healthcare tasks and datasets.
- Implement techniques for improving LLM performance, such as prompt engineering, few-shot learning, and reinforcement learning.
- Develop and implement rigorous evaluation metrics to assess the performance of LLMs on various healthcare tasks.
- Conduct thorough model validation and testing to ensure accuracy, reliability, and safety.
- Monitor model performance in production and identify areas for improvement.
- Stay abreast of the latest advancements in LLM research and development.
- Explore and implement cutting-edge techniques in natural language processing (NLP) and machine learning.
- Conduct research and development on novel applications of LLMs in the healthcare domain.
- Collaborate effectively with data engineers, clinicians, researchers, and other stakeholders.
- Clearly communicate complex technical concepts to both technical and non-technical audiences.
- Present research findings and project updates to team members and stakeholders.
Qualifications (Essential):
- Master's or Ph.D. in Computer Science, Artificial Intelligence, Machine Learning, or a related field.
- 3+ years of experience in developing and deploying machine learning models, with a strong focus on NLP and deep learning.
- Proven experience in building and fine-tuning large language models.
- Strong proficiency in Python and experience with deep learning frameworks (e.g., TensorFlow, PyTorch).
- Experience with data preprocessing, feature engineering, and model evaluation techniques.
- Excellent analytical and problem-solving skills.
- Strong communication and interpersonal skills.
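For illustration only, a heavily simplified sketch of fine-tuning a small causal language model with the Hugging Face Trainer, the kind of LLM workflow this role describes. The model name, toy dataset, and hyperparameters are assumptions; a real healthcare setting would add curated de-identified data, rigorous evaluation, and safety review.

```python
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "distilgpt2"  # assumed small model, used only to keep the sketch runnable
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Toy corpus standing in for a prepared clinical text dataset.
texts = ["Patient reports mild headache and fatigue.",
         "Follow-up visit scheduled after lab results."]
dataset = Dataset.from_dict({"text": texts})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(output_dir="./llm-finetune-demo",
                         per_device_train_batch_size=2,
                         num_train_epochs=1,
                         logging_steps=1)

Trainer(model=model, args=args, train_dataset=tokenized,
        data_collator=collator).train()
```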
Posted 1 month ago
3.0 - 6.0 years
9 - 13 Lacs
Mumbai
Work from Office
About the job:
- As a Mid Databricks Engineer, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform.
- You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure.
- This role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges.
What You'll Do:
- Design and develop data processing pipelines and analytics solutions using Databricks.
- Architect scalable and efficient data models and storage solutions on the Databricks platform.
- Collaborate with architects and other teams to migrate the current solution to Databricks.
- Optimize performance and reliability of Databricks clusters and jobs to meet SLAs and business requirements.
- Use best practices for data governance, security, and compliance on the Databricks platform.
- Mentor junior engineers and provide technical guidance.
- Stay current with emerging technologies and trends in data engineering and analytics to drive continuous improvement.
You'll Be Expected To Have:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 3 to 6 years of overall experience and 2+ years of experience designing and implementing data solutions on the Databricks platform.
- Proficiency in programming languages such as Python, Scala, or SQL.
- Strong understanding of distributed computing principles and experience with big data technologies such as Apache Spark.
- Experience with cloud platforms such as AWS, Azure, or GCP, and their associated data services.
- Proven track record of delivering scalable and reliable data solutions in a fast-paced environment.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills with the ability to work effectively in cross-functional teams.
- Good to have: experience with containerization technologies such as Docker and Kubernetes.
- Knowledge of DevOps practices for automated deployment and monitoring of data pipelines.
Posted 1 month ago
3.0 - 8.0 years
15 - 19 Lacs
Mumbai
Hybrid
Responsibilities:
- Develop and maintain data pipelines using GCP.
- Write and optimize queries in BigQuery.
- Utilize Python for data processing tasks.
- Manage and maintain SQL Server databases.
Must-Have Skills:
- Experience with Google Cloud Platform (GCP).
- Proficiency in BigQuery query writing.
- Strong Python programming skills.
- Expertise in SQL Server.
Good to Have:
- Knowledge of MLOps practices.
- Experience with Vertex AI.
- Background in data science.
- Familiarity with any data visualization tool.
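As a hedged example of the BigQuery-plus-Python work this posting mentions, a minimal query using the google-cloud-bigquery client is shown below. The project, dataset, and table names are assumptions, and appropriate credentials must be configured before running it.

```python
from google.cloud import bigquery

# Requires GOOGLE_APPLICATION_CREDENTIALS (or equivalent auth) to be configured.
client = bigquery.Client(project="example-project")  # assumed project id

sql = """
    SELECT order_date, SUM(amount) AS total_amount
    FROM `example-project.sales.orders`      -- assumed dataset.table
    WHERE order_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)
    GROUP BY order_date
    ORDER BY order_date
"""

# Run the query and print the aggregated rows.
for row in client.query(sql).result():
    print(row["order_date"], row["total_amount"])
```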
Posted 1 month ago
5.0 - 7.0 years
11 - 15 Lacs
Coimbatore
Work from Office
Mandatory Skills: Apache Spark, Hive, Hadoop, Scala, Databricks
The Role:
- Designing and building optimized data pipelines using cutting-edge technologies in a cloud environment to drive analytical insights.
- Constructing infrastructure for efficient ETL processes from various sources and storage systems.
- Leading the implementation of algorithms and prototypes to transform raw data into useful information.
- Architecting, designing, and maintaining database pipeline architectures, ensuring readiness for AI/ML transformations.
- Creating innovative data validation methods and data analysis tools.
- Ensuring compliance with data governance and security policies.
- Interpreting data trends and patterns to establish operational alerts.
- Developing analytical tools, programs, and reporting mechanisms.
- Conducting complex data analysis and presenting results effectively.
- Preparing data for prescriptive and predictive modeling.
- Continuously exploring opportunities to enhance data quality and reliability.
- Applying strong programming and problem-solving skills to develop scalable solutions.
Requirements:
- Experience with Big Data technologies (Hadoop, Spark, NiFi, Impala).
- 5+ years of hands-on experience designing, building, deploying, testing, maintaining, monitoring, and owning scalable, resilient, and distributed data pipelines.
- High proficiency in Scala/Java and Spark for applied large-scale data processing.
- Expertise with big data technologies, including Spark, Data Lake, and Hive.
Posted 1 month ago
7.0 - 12.0 years
10 - 20 Lacs
Kochi, Thiruvananthapuram
Hybrid
Collaborate with stakeholders to gather data requirements, perform exploratory data analysis (EDA) on large datasets, apply statistics to identify trends, build Power BI dashboards, write SQL queries, ensure data quality, and present insights that support decisions and improve data systems.
Posted 1 month ago
3.0 - 6.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Duration: 6 Months
Timings: General IST
Notice Period: within 15 days or immediate joiner
About the Role:
As a Data Engineer for the Data Science team, you will play a pivotal role in enriching and maintaining the organization's central repository of datasets. This repository serves as the backbone for advanced data analytics and machine learning applications, enabling actionable insights from financial and market data. You will work closely with cross-functional teams to design and implement robust ETL pipelines that automate data updates and ensure accessibility across the organization. This is a critical role requiring technical expertise in building scalable data pipelines, ensuring data quality, and supporting data analytics and reporting infrastructure for business growth.
Note: Must be ready for a face-to-face interview in Bangalore (last round). Should be working with Azure as the cloud technology.
Key Responsibilities:
ETL Development:
- Design, develop, and maintain efficient ETL processes for handling multi-scale datasets.
- Implement and optimize data transformation and validation processes to ensure data accuracy and consistency.
- Collaborate with cross-functional teams to gather data requirements and translate business logic into ETL workflows.
Data Pipeline Architecture:
- Architect, build, and maintain scalable and high-performance data pipelines to enable seamless data flow.
- Evaluate and implement modern technologies to enhance the efficiency and reliability of data pipelines.
- Build pipelines for extracting data via web scraping to source sector-specific datasets on an ad hoc basis (a small illustrative sketch follows this listing).
Data Modeling:
- Design and implement data models to support analytics and reporting needs across teams.
- Optimize database structures to enhance performance and scalability.
Data Quality and Governance:
- Develop and implement data quality checks and governance processes to ensure data integrity.
- Collaborate with stakeholders to define and enforce data quality standards across the organization.
Documentation and Communication:
- Maintain detailed documentation of ETL processes, data models, and other key workflows.
- Effectively communicate complex technical concepts to non-technical stakeholders and business users.
Cross-Functional Collaboration:
- Work closely with the Quant team and developers to design and optimize data pipelines.
- Collaborate with external stakeholders to understand business requirements and translate them into technical solutions.
Essential Requirements:
Basic Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Familiarity with big data technologies like Hadoop, Spark, and Kafka.
- Experience with data modeling tools and techniques.
- Excellent problem-solving, analytical, and communication skills.
- Proven experience as a Data Engineer with expertise in ETL techniques (minimum years).
- 3-6 years of strong programming experience in languages such as Python, Java, or Scala.
- Hands-on experience in web scraping to extract and transform data from publicly available web sources.
- Proficiency with cloud-based data platforms such as AWS, Azure, or GCP.
- Strong knowledge of SQL and experience with relational and non-relational databases.
- Deep understanding of data warehousing concepts and architectures.
Preferred Qualifications:
- Master's degree in Computer Science or Data Science.
- Knowledge of data streaming and real-time processing frameworks.
- Familiarity with data governance and security best practices.
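Purely illustrative and built on assumed URLs and markup, a tiny requests-plus-BeautifulSoup sketch of the ad hoc web-scraping ingestion mentioned above; a production pipeline would add scheduling, retries, and respect for the site's terms and robots.txt.

```python
import requests
from bs4 import BeautifulSoup
import pandas as pd

# Hypothetical public page listing sector statistics; URL and CSS selectors are assumed.
URL = "https://example.com/sector-report"

response = requests.get(URL, timeout=30)
response.raise_for_status()

soup = BeautifulSoup(response.text, "html.parser")
rows = []
for row in soup.select("table.report tr")[1:]:   # skip the assumed header row
    cells = [c.get_text(strip=True) for c in row.find_all("td")]
    if len(cells) == 2:
        rows.append({"sector": cells[0], "value": cells[1]})

# Land the extract as a CSV that downstream ETL can pick up.
pd.DataFrame(rows).to_csv("sector_report.csv", index=False)
```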
Posted 1 month ago