7.0 - 12.0 years
20 - 25 Lacs
Bengaluru
Work from Office
Client-facing role: lead client calls and guide clients towards optimized, cloud-native architectures, the future state of their data platform, strategic recommendations, and Microsoft Fabric integration.

Desired Skills and Experience
- B.E./B.Tech/MCA/MBA in Finance, Information Systems, Computer Science, or a related field
- 7+ years of experience as a Data and Cloud Architect working with client stakeholders
- Azure data platform expertise: Synapse, Databricks, Azure Data Factory (ADF), Azure SQL (DW/DB), Power BI (PBI)
- Ability to define modernization roadmaps and target architecture
- Strong understanding of data governance best practices for data quality, cataloguing, and lineage (see the sketch below)
- Proven ability to lead client engagements and present complex findings
- Excellent communication skills, both written and verbal
- Extremely strong organizational and analytical skills with strong attention to detail
- Strong track record of excellent results delivered to internal and external clients
- Able to work independently without the need for close supervision, and collaboratively as part of cross-team efforts
- Experience delivering projects within an agile environment
- Experience in project management and team management

Key responsibilities include:
- Lead interviews and workshops to capture current and future needs
- Direct the technical review of Azure infrastructure (Databricks, Synapse Analytics, Power BI) and critical on-premises systems
- Produce architecture designs focused on refined processing strategies and Microsoft Fabric
- Understand and refine the data governance roadmap, including data cataloguing, lineage, and quality
- Lead project deliverables, ensuring actionable and strategic outputs
- Evaluate and ensure quality of deliverables within project timelines
- Develop a strong understanding of the equity market domain
- Collaborate with domain experts and business stakeholders to understand business rules and logic
- Ensure effective, efficient, and continuous communication (written and verbal) with global stakeholders
- Independently troubleshoot difficult and complex issues in dev, test, UAT, and production environments
- Take end-to-end responsibility for project delivery, coordinate between the client and internal offshore teams, and manage client queries
- Demonstrate high attention to detail, work in a dynamic environment while maintaining high quality standards, with a natural aptitude for developing good internal working relationships and a flexible work ethic
- Perform quality checks and adhere to the agreed Service Level Agreement (SLA) / Turn Around Time (TAT)
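Since the role leans heavily on data governance, here is a minimal PySpark sketch of the kind of quality rule such a roadmap would formalize; the table and column names are hypothetical.

```python
# Minimal sketch: rule-based data-quality checks of the kind a governance
# roadmap would formalize. Table and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()

trades = spark.table("curated.trades")  # hypothetical curated table

# Completeness: no nulls allowed in key business columns.
null_counts = trades.select(
    *[F.sum(F.col(c).isNull().cast("int")).alias(c)
      for c in ["trade_id", "isin", "trade_date"]]
).first().asDict()

# Uniqueness: the primary key must not repeat.
dup_count = trades.groupBy("trade_id").count().filter("count > 1").count()

failures = {k: v for k, v in null_counts.items() if v > 0}
if failures or dup_count:
    raise ValueError(f"DQ failure - nulls: {failures}, duplicate keys: {dup_count}")
```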
Posted 1 week ago
7.0 - 12.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Conduct technical analyses of existing data pipelines, ETL processes, and on-premises/cloud systems; identify technical bottlenecks, evaluate migration complexities, and propose optimizations.

Desired Skills and Experience
- B.E./B.Tech/MCA/MBA in Finance, Information Systems, Computer Science, or a related field
- 7+ years of experience as a Data and Cloud Architect working with client stakeholders
- Strong experience in Synapse Analytics, Databricks, ADF, Azure SQL (DW/DB), and SSIS
- Strong experience in advanced PowerShell, batch scripting, and C# (.NET 3.0)
- Expertise in orchestration with ActiveBatch and Azure orchestration tools
- Strong understanding of data warehousing, data lake (DL), and Lakehouse concepts
- Excellent communication skills, both written and verbal
- Extremely strong organizational and analytical skills with strong attention to detail
- Strong track record of excellent results delivered to internal and external clients
- Able to work independently without the need for close supervision, and collaboratively as part of cross-team efforts
- Experience delivering projects within an agile environment
- Experience in project management and team management

Key responsibilities include:
- Understand and review PowerShell (PS), SSIS, batch script, and C# (.NET 3.0) codebases for data processes
- Assess the complexity of trigger migration across ActiveBatch (AB), Synapse, ADF, and Azure Databricks (ADB)
- Define usage of Azure SQL DW, SQL DB, and Data Lake for various workloads, proposing transitions where beneficial
- Analyze data patterns for optimization, including direct raw-to-consumption loading and zone elimination (e.g., stg/app zones); see the sketch after this list
- Understand requirements for external tables (Lakehouse)
- Lead project deliverables, ensuring actionable and strategic outputs
- Evaluate and ensure quality of deliverables within project timelines
- Develop a strong understanding of the equity market domain
- Collaborate with domain experts and business stakeholders to understand business rules and logic
- Ensure effective, efficient, and continuous communication (written and verbal) with global stakeholders
- Independently troubleshoot difficult and complex issues in dev, test, UAT, and production environments
- Take end-to-end responsibility for project delivery, coordinate between the client and internal offshore teams, and manage client queries
- Demonstrate high attention to detail, work in a dynamic environment while maintaining high quality standards, with a natural aptitude for developing good internal working relationships and a flexible work ethic
- Perform quality checks and adhere to the agreed Service Level Agreement (SLA) / Turn Around Time (TAT)
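To illustrate the zone-elimination pattern named above, here is a hedged PySpark sketch that loads raw files straight into a consumption table, applying the former staging-zone transformations inline; the paths and names are assumptions.

```python
# Sketch of "zone elimination": loading raw files directly to a consumption
# table instead of materializing intermediate stg/app zones. The storage
# path, column names, and target table are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

raw = spark.read.parquet("abfss://raw@account.dfs.core.windows.net/positions/")

# Apply the staging-zone transformations inline rather than persisting them.
consumption = (
    raw.dropDuplicates(["position_id"])
       .filter(F.col("quantity").isNotNull())
       .withColumn("load_ts", F.current_timestamp())
)

consumption.write.mode("overwrite").saveAsTable("consumption.positions")
```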
Posted 1 week ago
5.0 - 7.0 years
4 - 8 Lacs
Gurugram
Work from Office
Job Responsibilities
Designing and developing complex applications, including real-time, multi-threaded applications. We are looking for an innovative, results-oriented individual seeking challenges in order to apply the knowledge and experience gained working across a number of clients.

Desired Skills and Experience
- 5+ years of industry experience in software development using Java, Spring Boot, and SQL
- Proficient in Java 8 features such as lambda expressions, streams, and functional interfaces; experience with newer Java versions and their enhancements
- Strong understanding and practical experience with data structures (arrays, linked lists, stacks, queues, trees, graphs) and algorithms (sorting, searching, dynamic programming, etc.)
- Experience across the full software development lifecycle (SDLC): requirements gathering, design, coding, testing, and deployment
- Familiar with Spring, Hibernate, Maven, Gradle, and other Java-related frameworks and tools
- Proficient in SQL, with experience in databases such as MySQL, PostgreSQL, or Oracle
- Experience with technologies such as Kafka, MongoDB, Apache Spark/Databricks, and Azure Cloud
- Good experience with APIs/microservices, publisher/subscriber, and related data integration patterns
- Experience in unit testing with JUnit or a similar framework
- Strong understanding of OOP and design patterns
- Experience working with users, senior management, and stakeholders across multiple disciplines
- Mentoring and developing technical colleagues
- Code management knowledge (e.g., version control, code branching and merging, continuous integration/delivery, build and deployment strategies, testing lifecycle)
- Experience managing stakeholder expectations (client and project team) and generating relevant reports
- Excellent project tracking and monitoring skills
- Good decision-making and problem-solving skills
- Adaptable and flexible, with the ability to prioritize and work to tight schedules; able to manage pressure, ambiguity, and change
- Good understanding of all knowledge areas in software development, including requirements gathering, design, development, testing, maintenance, and quality control
- Preferred: experience with Agile methodology and knowledge of the Financial Services/Asset Management industry
- Ensure quality of deliverables within project timelines
- Independently manage daily client communication, especially over calls
- Drive work to completion with accuracy and timely deliverables
- Financial Services knowledge is good to have

Key Responsibilities
The candidate will interact with global financial clients regularly and will be responsible for final delivery of work, including:
- Translate client requirements into actionable software solutions and understand the business requirements of customers
- Direct and manage project development from beginning to end
- Effectively communicate project expectations to team members in a timely and clear manner, and communicate with relevant stakeholders on an ongoing basis
- Identify and manage project dependencies and the critical path
- Guide the team to implement industry best practices
- Work as part of a team developing new enhancements and revamping the existing trade limit persistence and pre-trade risk check microservices (LMS) based on the client's own low-latency framework
- Design and develop the persistence cache layer, which will use MongoDB persistence for storage
- Design and develop SMS integration to send out 2FA codes and for other business reasons
- Migrate the existing Couchbase-based limit documents processing system to a new AMPS-based processing microservice
- Design and implement the system from scratch; build enhancements and feature requests using Java and Spring Boot
- Build prototypes of application solutions as needed
- Involvement in both development and maintenance of the systems
- Work collaboratively in a global setting; be eager to learn new technologies
- Provide support for any implemented solutions, including incident, problem, and defect management, and cross-train other members so that they are able to support the solutions
- Extend and maintain the existing codebase with a focus on quality, re-usability, maintainability, and consistency
- Independently troubleshoot difficult and complex issues in production and other environments
- Demonstrate high attention to detail, work in a dynamic environment while maintaining high quality standards, with a natural aptitude for developing good internal working relationships and a flexible work ethic
- Perform quality checks and adhere to the agreed Service Level Agreement (SLA) / Turn Around Time (TAT)
Posted 1 week ago
3.0 - 7.0 years
4 - 8 Lacs
Bengaluru
Work from Office
As a member of the Data and Technology practice, you will work on advanced AI/ML engagements tailored for the investment banking sector. This includes developing and maintaining data pipelines, ensuring data quality, and enabling data-driven insights. Your core responsibility will be to build and manage scalable data infrastructure that supports our proof-of-concept initiatives (POCs) and full-scale solutions for our clients. You will work closely with data scientists, DevOps engineers, and clients to understand their data requirements, translate them into technical tasks, and develop robust data solutions.

Your primary duties will encompass:
- Develop, optimize, and maintain scalable and reliable data pipelines using tools such as Python, SQL, and Spark
- Integrate data from various sources, including APIs, databases, and cloud storage solutions such as Azure, Snowflake, and Databricks (see the sketch below)
- Implement data quality checks and ensure the accuracy and consistency of data
- Manage and optimize data storage solutions, ensuring high performance and availability
- Work closely with data scientists and DevOps engineers to ensure seamless integration of data pipelines and support machine learning model deployment
- Monitor and optimize the performance of data workflows to handle large volumes of data efficiently
- Create detailed documentation of data processes
- Implement security best practices and ensure compliance with industry standards

Experience / Skills
5+ years of relevant experience, including:
- A data engineering role, preferably within the financial services industry
- Strong experience with data pipeline tools and frameworks such as Python, SQL, and Spark
- Proficiency in cloud platforms, particularly Azure, Snowflake, and Databricks
- Experience with data integration from various sources, including APIs and databases
- Strong understanding of data warehousing concepts and practices
- Excellent problem-solving skills and attention to detail
- Strong communication skills, both written and oral, with business and technical aptitude

Desired skills:
- Familiarity with big data technologies and frameworks
- Experience with financial datasets and understanding of investment banking metrics
- Knowledge of visualization tools (e.g., Power BI)

Education
Bachelor's or Master's in Science or Engineering disciplines such as Computer Science, Engineering, Mathematics, or Physics.
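As referenced above, here is a minimal sketch of one ingestion stage that pulls records from a REST API and lands them as a Spark table; the endpoint, field names, and target table are hypothetical.

```python
# Minimal sketch of one pipeline stage: pull records from a REST API and
# land them as a Spark table for downstream modeling. The endpoint and
# field names are hypothetical.
import requests
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

resp = requests.get("https://api.example.com/v1/deals", timeout=30)
resp.raise_for_status()
records = resp.json()  # assumed to be a list of flat JSON objects

df = spark.createDataFrame(records)

# Basic quality gate before exposing the data to data scientists.
assert df.filter("deal_id IS NULL").count() == 0, "null keys in source feed"

df.write.mode("append").saveAsTable("staging.deals")
```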
Posted 1 week ago
4.0 - 6.0 years
5 - 9 Lacs
Gurugram
Work from Office
We are looking for an experienced Data Scientist to work with one of our global biopharma customers on a range of Biostats model consulting and development engagements. You are expected to bring extensive knowledge of best practices in R package development, model development and deployment on Databricks, collaboration through version control systems, and familiarity with related topics such as data architecture and cloud infrastructure.

Desired Skills and Experience
- B.E./B.Tech/MCA/MBA in Finance, Information Systems, Computer Science, or a related field
- Strong experience in R programming and package development
- Proficiency with GitHub and unit testing frameworks
- Strong documentation and communication skills
- A background or work experience in biostatistics or a similar discipline (preferred)
- Expert knowledge of survival analysis (preferred)
- Statistical model deployment and end-to-end MLOps are nice to have
- Extensive work on cloud infrastructure, preferably Databricks and Azure
- Shiny development is nice to have
- Able to work with customer stakeholders to understand business processes and workflows, and to design solutions that optimize processes via streamlining and automation
- DevOps experience and familiarity with the software release process
- Familiar with agile delivery methods
- Excellent communication skills, both written and verbal
- Extremely strong organizational and analytical skills with strong attention to detail
- Strong track record of excellent results delivered to internal and external clients
- Able to work independently without the need for close supervision, and collaboratively as part of cross-team efforts
- Experience delivering projects within an agile environment

Key responsibilities include:
- Evaluate and document R packages, including metadata and user-focused use cases
- Develop unit tests aligned with best practices in R package development
- Collaborate closely with internal stakeholders
- Design and implement technical solutions for survival analysis based on statistical and business requirements
- Develop professional-quality R packages
- Provide consultancy on Biostats model development and deployment best practices
- Review and optimize code; integrate existing modelling code into packages
- Design and implement the end-to-end modelling and deployment process on Databricks (a sketch follows below)
- Support and collaborate with adjacent teams (e.g., products, IT) to integrate the modelling solution
- Continually innovate with the team and the customer on using modern tooling to improve model development and deployment
- Demonstrate high attention to detail, work in a dynamic environment while maintaining high quality standards, with a natural aptitude for developing good internal working relationships and a flexible work ethic
- Perform quality checks and adhere to the agreed Service Level Agreement (SLA) / Turn Around Time (TAT)
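The engagement itself centres on R, but as a rough illustration of the model-tracking side of the Databricks deployment workflow, here is a minimal Python sketch that fits a Kaplan-Meier survival model (via lifelines) on toy data and logs the run to MLflow; the run name and metric choice are assumptions.

```python
# Minimal sketch of logging a survival-model run to MLflow on Databricks.
# lifelines is used here purely for illustration; the engagement's actual
# models are developed in R.
import mlflow
import pandas as pd
from lifelines import KaplanMeierFitter

df = pd.DataFrame({
    "duration": [5, 6, 6, 2, 4, 4, 9, 11],  # time to event (toy data)
    "event":    [1, 0, 1, 1, 1, 0, 1, 1],   # 1 = event observed, 0 = censored
})

with mlflow.start_run(run_name="km-baseline"):
    kmf = KaplanMeierFitter()
    kmf.fit(df["duration"], event_observed=df["event"])
    mlflow.log_param("model", "KaplanMeier")
    mlflow.log_metric("median_survival", float(kmf.median_survival_time_))
```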
Posted 1 week ago
4.0 - 6.0 years
2 - 6 Lacs
Gurugram
Work from Office
As a key member of the DTS team, you will primarily collaborate with a leading global hedge fund on data engagements: partnering with the data strategy and sourcing team on data requirements, working directly on the processes that develop the inputs to our models, and migrating update processes from MATLAB to Databricks for a more modern approach.

Desired Skills and Experience
Essential skills:
- 4-6 years of experience with data analytics
- Skilled in Databricks using SQL
- Working knowledge of Snowflake and Python
- Hands-on experience with large datasets and data structures using SQL
- Experience working with financial and/or alternative data products
- Excellent analytical and strong problem-solving skills
- Exposure to S&P Capital IQ
- Exposure to data models on Databricks

Education: B.E./B.Tech in Computer Science or a related field

Key Responsibilities
- Write data processes in Databricks using SQL (see the sketch below)
- Develop ELT processes for data preparation
- Apply SQL expertise to understand data sources and data structures
- Document the developed data processes
- Assist with related data tasks for model inputs within the Databricks environment
- Take data from S&P Capital IQ, prep it, and get it ready for the model

Key Metrics
- SQL, Databricks, Snowflake
- S&P Capital IQ, data structures

Behavioral Competencies
- Good communication (verbal and written)
- Experience managing client stakeholders
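As referenced above, a hedged sketch of a Databricks SQL ELT step that shapes a vendor feed into a model-input table; all table and column names are illustrative, not actual client objects.

```python
# Sketch of an ELT step in a Databricks notebook: shaping a vendor feed
# into a model-input table with SQL. Table and column names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

spark.sql("""
    CREATE OR REPLACE TABLE model_inputs.fundamentals AS
    SELECT
        company_id,
        fiscal_period,
        revenue,
        revenue / NULLIF(LAG(revenue) OVER (
            PARTITION BY company_id ORDER BY fiscal_period), 0) - 1
            AS revenue_growth
    FROM raw.vendor_fundamentals
    WHERE revenue IS NOT NULL
""")
```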
Posted 1 week ago
9.0 - 14.0 years
35 - 55 Lacs
Noida
Hybrid
Looking for a better opportunity? Join us and make things happen with DMI, an Encora company!

Encora is seeking a full-time Lead Data Engineer with Logistics domain expertise to support our large-scale manufacturing client in digital transformation. The Lead Data Engineer is responsible for the day-to-day leadership and guidance of the local, India-based data team. This role is the primary interface with the client's management team and works cross-functionally with various IT functions to streamline project delivery.

Minimum Requirements:
- 8+ years of overall experience in IT
- Current: 5+ years of experience on Azure Cloud as a Data Engineer
- Current: 3+ years of hands-on experience with Databricks / Azure Databricks
- Proficient in Python/PySpark
- Proficient in SQL/T-SQL
- Proficient in data warehousing concepts (ETL/ELT, Data Vault modelling, dimensional modelling, SCD, CDC); an SCD sketch follows below

Primary Skills: Azure Cloud, Databricks, Azure Data Factory, Azure Synapse Analytics, SQL/T-SQL, PySpark, Python, plus Logistics domain expertise

Work Location: Noida, India (candidates open to immediate relocation can also apply)

Interested candidates can apply at nidhi.dubey@encora.com with their updated resume, specifying:
1. Total experience
2. Relevant experience in Azure Cloud
3. Relevant experience in Azure Databricks
4. Relevant experience in Azure Synapse
5. Relevant experience in SQL/T-SQL
6. Relevant experience in PySpark
7. Relevant experience in Python
8. Relevant experience in the Logistics domain
9. Relevant experience in data warehousing
10. Current CTC
11. Expected CTC
12. Official notice period (if serving, please specify last working day)
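As referenced in the requirements, a minimal sketch of an SCD Type 2 upsert using the Delta Lake merge API; the table and column names are illustrative assumptions.

```python
# Sketch of an SCD Type 2 upsert with Delta Lake, one of the warehousing
# patterns the role lists. Table and column names are illustrative.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
updates = spark.table("staging.customer_updates")

dim = DeltaTable.forName(spark, "dw.dim_customer")

# Step 1: close out current rows whose tracked attribute changed.
(dim.alias("t")
    .merge(updates.alias("s"),
           "t.customer_id = s.customer_id AND t.is_current = true")
    .whenMatchedUpdate(
        condition="t.address <> s.address",
        set={"is_current": "false", "end_date": "current_date()"})
    .execute())

# Step 2: append incoming versions as current rows (a production job would
# first restrict this to genuinely new or changed records).
(updates.withColumn("is_current", F.lit(True))
        .withColumn("end_date", F.lit(None).cast("date"))
        .write.format("delta").mode("append").saveAsTable("dw.dim_customer"))
```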
Posted 1 week ago
3.0 - 8.0 years
12 - 22 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Position: Azure Databricks with UC (Unity Catalog) Data Engineer (ADB, ADF)

Responsibilities:
- Designing and building data models to support business requirements
- Developing and maintaining data ingestion and processing systems
- Implementing data storage solutions (databases and data lakes)
- Ensuring data consistency and accuracy through data validation and cleansing techniques
- Working together with cross-functional teams to identify and address data-related issues
- Analyzing and organizing raw data; developing and maintaining datasets; improving data quality and efficiency
- Building data systems and pipelines; evaluating business needs and objectives

Requirements:
- A relevant higher-education degree
- Proficiency in programming languages: Python, Java, or Scala
- Familiarity with data integration and ETL tools such as Talend, Informatica, or Apache NiFi
- Strong problem-solving and analytical skills
- Excellent communication and collaboration abilities

Mandatory Skills: Azure Data Factory, Azure Functions, Azure Synapse Analytics, Azure Data Lake, Databricks (a Unity Catalog sketch follows below)

We are looking for immediate joiners! Kindly send your resume to Kiruthikab@intellicsglobal.com or WhatsApp 9843156708.
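For context on what Unity Catalog work looks like, below is a minimal sketch of its three-level (catalog.schema.table) namespace plus a grant; the catalog, schema, and group names are illustrative.

```python
# Minimal sketch of Unity Catalog's three-level namespace from a Databricks
# notebook, where `spark` is the predefined session. Names are illustrative.
spark.sql("CREATE CATALOG IF NOT EXISTS finance")
spark.sql("CREATE SCHEMA IF NOT EXISTS finance.trading")
spark.sql("""
    CREATE TABLE IF NOT EXISTS finance.trading.orders (
        order_id BIGINT,
        symbol   STRING,
        qty      DECIMAL(18, 4)
    )
""")

# Governance is applied on the same namespace, e.g. a grant to a group.
spark.sql("GRANT SELECT ON TABLE finance.trading.orders TO `data-analysts`")
```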
Posted 1 week ago
4.0 - 9.0 years
9 - 19 Lacs
Pune
Hybrid
Job Description
Position: PySpark Developer
Mandatory skills: Databricks, Python, PySpark
Location: Pune (Kharadi)
Experience: 5 to 9 years

- 5+ years of focused, in-depth experience implementing complex business analytics applications using big data technologies
- A problem-solver with good analytical skills and a can-do attitude
- Experience across various programming technologies, with good development sense and modular coding experience
- Has participated in hackathons and provided optimal, impactful solutions to real-world problems
- Start-up mindset: has built big data solutions using Spark at big data start-ups, or at boutique or captive firms known for building big data solutions with Spark
- Extensive experience as a data engineer designing, developing, and managing complex data pipelines that collect, store, process, and analyse large volumes of data
- Develops optimized data pipelines and ensures they execute with high performance (see the join sketch below)
- Experience in Linux Bash, PowerShell, and Python scripting
- Performs research to handle any problems faced while meeting business objectives
- A believer in clean code who follows established clean-code practices
- Conducts thorough unit tests of the code, discovers issues, and fixes them diligently
- Has worked with version control tools such as Git and SVN
- Extensive Python programming and Spark coding using Scala or PySpark
- Knowledge of cloud technologies, especially Azure, Databricks, and DevOps (Git, CI/CD pipelines, containers), is an added advantage
- Designs functional and technical architectures
- A team player who practices peer code reviews and pair programming
- Great communication abilities, both technical and non-technical; can work independently with little or no supervision

Essential Skills:
- Strong big data technology and MPP design and development skills
- Strong coding skills in Python and PySpark
- Strong code standardization skills
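One concrete example of the pipeline-performance work referenced above: broadcasting a small lookup table to avoid a shuffle-heavy join. A hedged sketch with illustrative table names:

```python
# Sketch of a common Spark performance fix: broadcasting a small dimension
# table to avoid a shuffle-heavy join. Table names are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.getOrCreate()

events = spark.table("silver.events")          # large fact table
countries = spark.table("ref.country_codes")   # small lookup table

# Without the hint Spark may shuffle both sides; broadcasting ships the
# small table to every executor instead.
joined = events.join(broadcast(countries), on="country_code", how="left")

joined.write.mode("overwrite").saveAsTable("gold.events_enriched")
```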
Posted 1 week ago
2.0 - 5.0 years
0 - 11 Lacs
Hyderabad, Bengaluru
Work from Office
Tech Mahindra is hiring for a Data Engineer role.

Primary Skills:
1. Azure Databricks (Unity Catalog, schemas, notebooks, workflows)
2. PySpark and SQL
3. ETL/ELT concepts and data modelling techniques using galaxy/star/snowflake schemas; working with large data sets
Posted 1 week ago
4.0 - 9.0 years
16 - 20 Lacs
Pune
Work from Office
Azure Data Engineer
Skills: SQL, ETL, Azure, Python, PySpark, Databricks
Experience: minimum 4 years
Immediate joiners (45 days maximum)
Location: Pune, UK shifts
CTC offered: 16-20 LPA
Contact: divyam@genesishrs.com | 8905344933
Posted 1 week ago
4.0 - 9.0 years
15 - 20 Lacs
Pune
Work from Office
Job Role: Azure Data Engineer
Job Location: Pune
Experience: 4+ years
Skills: SQL + ETL + Azure + Python + PySpark + Databricks

Job Description:
As an Azure Data Engineer, you will play a crucial role in designing, implementing, and maintaining our data infrastructure on the Azure platform. You will collaborate with cross-functional teams to develop robust data pipelines, optimize data workflows, and ensure data integrity and reliability.

Responsibilities:
- Design, develop, and deploy data solutions on Azure, leveraging SQL Azure, Azure Data Factory, and Databricks
- Build and maintain scalable data pipelines to ingest, transform, and load data from various sources into Azure data repositories (see the sketch below)
- Implement data security and compliance measures to safeguard sensitive information
- Collaborate with data scientists and analysts to support their data requirements and enable advanced analytics and machine learning initiatives
- Optimize and tune data workflows for performance and efficiency
- Troubleshoot data-related issues and provide timely resolution
- Stay updated with the latest Azure data services and technologies and recommend best practices for data engineering

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field
- Proven experience as a data engineer, preferably in a cloud environment
- Strong proficiency in SQL Azure for database design, querying, and optimization
- Hands-on experience with Azure Data Factory for ETL/ELT workflows
- Familiarity with Azure Databricks for big data processing and analytics
- Experience with other Azure data services such as Azure Synapse Analytics, Azure Cosmos DB, and Azure Data Lake Storage is a plus
- Solid understanding of data warehousing concepts, data modeling, and dimensional modeling
- Excellent problem-solving and communication skills
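As referenced above, a minimal sketch of the kind of ingest-transform-load step this involves: reading raw CSVs from ADLS and loading a curated Delta table. The storage account, container, and table names are assumptions.

```python
# Illustrative ingest-transform-load step: read raw files from ADLS,
# cleanse, and load a curated Delta table. All names are assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

raw = spark.read.option("header", True).csv(
    "abfss://landing@myaccount.dfs.core.windows.net/sales/2024/")

curated = (
    raw.withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .filter(F.col("order_id").isNotNull())
       .dropDuplicates(["order_id"])
)

curated.write.format("delta").mode("overwrite").saveAsTable("curated.sales")
```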
Posted 1 week ago
3.0 - 8.0 years
15 - 30 Lacs
Navi Mumbai, Pune
Work from Office
We're Hiring: Data Scientist (Databricks & ML Deployment Expert)
Location: Mumbai/Pune
Experience: 3-8 years
Apply now!

Are you passionate about deploying real-world machine learning solutions? We're looking for a versatile Data Scientist with deep expertise in Databricks, PySpark, and end-to-end ML deployment to drive impactful projects in the Retail and Automotive domains.

What You'll Do:
- Develop scalable ML models (regression, classification, clustering)
- Deliver advanced use cases like CLV modeling, predictive maintenance, and time-series forecasting
- Design and automate ML workflows on Databricks using PySpark
- Build and deploy APIs to serve ML models (Flask, FastAPI, Django); see the sketch below
- Own model deployment and monitoring in production environments
- Work closely with Data Engineering and DevOps teams for CI/CD integration
- Optimize pipelines and model performance at the code and infrastructure level

Must-Have Skills:
- Strong hands-on experience with Databricks and PySpark
- Proven track record in ML model development and deployment (minimum two production deployments)
- Solid grasp of regression, classification, clustering, and time series
- Proficiency in SQL, workflow automation, and ELT/ETL processes
- API development (Flask, FastAPI, Django)
- CI/CD, deployment automation, and ML pipeline optimization
- Familiarity with Medallion Architecture

Domain Expertise:
- Retail: CLV, pricing, demand forecasting
- Automotive: predictive maintenance, time series

Nice to Have:
- MLflow, Docker, Kubernetes
- Cloud: Azure, AWS, or GCP

If you're excited to build production-ready ML systems that create real business impact, we want to hear from you! Apply now to chaity.mukherjee@celebaltech.com.
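As referenced in "What You'll Do", a minimal sketch of serving a trained model behind an API, shown with FastAPI (Flask or Django would be analogous); the model file and feature names are hypothetical.

```python
# Minimal sketch of serving a trained model behind an HTTP API with
# FastAPI. The model file and feature names are hypothetical.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("clv_model.joblib")  # hypothetical pre-trained model

class Features(BaseModel):
    recency_days: float
    frequency: float
    monetary: float

@app.post("/predict")
def predict(f: Features):
    # A single-row prediction; batch endpoints would vectorize this.
    score = model.predict([[f.recency_days, f.frequency, f.monetary]])[0]
    return {"clv_estimate": float(score)}
```

Run locally with, e.g., `uvicorn app:app --reload` and POST JSON features to `/predict`.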
Posted 1 week ago
2.0 - 4.0 years
3 - 7 Lacs
Gurugram
Work from Office
Job Title: Data Engineer
Experience: 2-4 years
Location: Gurgaon, Work from Office (Mon-Fri)

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines
- Work with large datasets using Python and SQL
- Build and optimize workflows in Databricks
- Collaborate with cross-functional teams to integrate data solutions on AWS (see the sketch below)

Required Skills:
- Strong proficiency in Python and SQL
- Hands-on experience with Databricks and AWS data services (e.g., S3, Glue, Redshift)
- Solid understanding of data modeling, ETL processes, and performance tuning

Good to Have:
- Experience in data warehousing and real-time data processing
- Familiarity with CI/CD practices for data engineering
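A hedged sketch of one pipeline stage matching this stack: reading raw order files from S3 with PySpark, aggregating, and writing the result back partitioned. The bucket and column names are assumptions.

```python
# Minimal sketch of an S3-to-S3 aggregation stage in PySpark. The bucket
# paths and column names are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

orders = spark.read.parquet("s3a://raw-bucket/orders/")

daily = (orders.withColumn("order_date", F.to_date("created_at"))
         .groupBy("order_date", "region")
         .agg(F.sum("amount").alias("revenue")))

(daily.write.mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3a://curated-bucket/daily_revenue/"))
```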
Posted 1 week ago
7.0 - 12.0 years
13 - 17 Lacs
Noida
Work from Office
Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data, and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits, and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities:
- Work with large, diverse datasets to deliver predictive and prescriptive analytics
- Develop innovative solutions using data modeling, machine learning, and statistical analysis
- Design, build, and evaluate predictive and prescriptive models and algorithms
- Use tools like SQL, Python, R, and Hadoop for data analysis and interpretation
- Solve complex problems using data-driven approaches
- Collaborate with cross-functional teams to align data science solutions with business goals
- Lead AI/ML project execution to deliver measurable business value
- Ensure data governance and maintain reusable platforms and tools
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications (technical skills):
- Programming languages: Python, R, SQL
- Machine learning tools: TensorFlow, PyTorch, scikit-learn
- Big data technologies: Hadoop, Spark
- Visualization tools: Tableau, Power BI
- Cloud platforms: AWS, Azure, Google Cloud
- Data engineering: Talend, Databricks, Snowflake, Data Factory
- Statistical software: R, Python libraries
- Version control: Git

Preferred Qualifications:
- Master's or PhD in Data Science, Computer Science, Statistics, or a related field
- Certifications in data science or machine learning
- 7+ years of experience in a senior data science role with enterprise-scale impact
- Experience managing AI/ML projects end-to-end
- Solid communication skills for technical and non-technical audiences
- Demonstrated problem-solving and analytical thinking
- Business acumen to align data science with strategic goals
- Knowledge of data governance and quality standards

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location, and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups, and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.
Posted 1 week ago
12.0 - 17.0 years
10 - 14 Lacs
Noida
Work from Office
At Optum AI, we leverage data and resources to make a significant impact on the healthcare system. Our solutions have the potential to improve healthcare for everyone. We work on cutting-edge projects involving ML, NLP, and LLM techniques, continuously developing and improving generative AI methods for structured and unstructured healthcare data. Our team collaborates with world-class experts and top universities to develop innovative AI/ML solutions, often leading to patents and published papers.

Primary Responsibilities:
- Develop and implement AI and machine learning strategies for several healthcare domains
- Collaborate with cross-functional teams to identify and prioritize AI and machine learning initiatives
- Manage the development and deployment of AI and machine learning solutions
- Develop and run pipelines for data ingress and model output egress
- Develop and run scripts for ML model inference
- Design, implement, and maintain CI/CD pipelines for MLOps and DevOps functions
- Identify technical problems and develop software updates and fixes
- Develop scripts or tools to automate repetitive tasks
- Automate the provisioning and configuration of infrastructure resources
- Provide guidance on the best use of specific tools or technologies to achieve desired results
- Create documentation for infrastructure design and deployment procedures
- Utilize AI/ML frameworks and tools such as MLflow, TensorFlow, PyTorch, Keras, scikit-learn, etc.
- Lead and manage AI/ML teams and projects from ideation to delivery and evaluation
- Apply expertise in various AI/ML techniques, including deep learning, NLP, computer vision, recommender systems, reinforcement learning, and large language models
- Communicate complex AI/ML concepts and results to technical and non-technical audiences effectively
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications:
- Bachelor's/Master's degree in Computer Science, Engineering, Mathematics, Statistics, or a related discipline
- 12+ years of experience in software engineering, data science, or analytics, with 8+ years of experience in AI/ML engineering or related fields
- Experience with cloud platforms and services such as AWS, Azure, GCP, etc.
- Experience developing solutions in the NLP space and relevant projects
- Hands-on experience in AI, driving the development of innovative AI and machine learning solutions
- Demonstrated experience leading and managing AI/ML teams and projects, from ideation to delivery and evaluation
- Experience with Azure development environments
- Knowledge of NLP literature, thrust areas, conference venues, and code repositories
- Familiarity with both open-source and OpenAI LLMs and RAG architecture
- Familiarity with UI and API tools such as Streamlit, Flask, FastAPI, REST APIs, and Docker containers
- Understanding of common NLP tasks such as text classification, entity recognition, entity extraction, and question answering (see the sketch below)
- Proficient in Python and one of PySpark or Scala; familiarity with Python tools for data processing
- Proficiency in multiple machine learning and AI techniques such as supervised, unsupervised, and reinforcement learning, deep learning, and NLP
- Proficiency in Python, R, or other programming languages for data analysis and AI/ML development
- Proficiency in libraries such as Hugging Face and the OpenAI API
- Proven ability to develop and deploy data pipelines, machine learning models, or applications on cloud platforms (Azure, Databricks, Azure ML)
- Proven excellent communication, presentation, and interpersonal skills, with the ability to explain complex AI/ML concepts and results to technical and non-technical audiences
- Proven solid analytical, problem-solving, and decision-making skills, with the ability to balance innovation and pragmatism
- Proven passion for learning and staying updated with the latest AI/ML trends and research

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location, and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups, and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.
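As referenced above, a minimal sketch of two of the listed NLP tasks using Hugging Face pipelines; the models are the library defaults (downloaded on first use, not a recommendation) and the example sentence is invented.

```python
# Minimal sketch of text classification and named-entity recognition with
# Hugging Face pipelines; default models are used purely for illustration.
from transformers import pipeline

classifier = pipeline("text-classification")
ner = pipeline("ner", aggregation_strategy="simple")

note = "Patient reports chest pain; metformin 500mg prescribed at Mercy Clinic."

print(classifier(note))  # e.g. [{'label': ..., 'score': ...}]
print(ner(note))         # entity spans with types and confidence scores
```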
Posted 1 week ago
4.0 - 8.0 years
6 - 9 Lacs
Hyderabad
Work from Office
Experience: 4-8 years (relevant)

Detailed job description / skill set:
- Mandatory skills: Power BI, DAX, Azure Data Factory, PySpark notebooks, Spark SQL, and Python
- Good-to-have skills: ETL processes, SQL, Azure Data Lake, Azure Synapse, Azure SQL, Databricks, etc.
Posted 1 week ago
3.0 - 7.0 years
6 - 15 Lacs
Pune, Chennai
Hybrid
Company Description: Volante is on the leading edge of financial services technology. If you are interested in being on an innovative, fast-moving team that leverages the very best in cloud technology, our team may be right for you. By joining the product team at Volante, you will have an opportunity to shape the future of payments technology, with a focus on payment intelligence. We are a financial technology business that provides a market-leading, cloud-native payments processing platform to banks and financial institutions globally.

Education Criteria:
- B.E., M.Sc., or M.E./M.S. in Computer Science or a similar major; relevant certification courses from reputed organizations
- 3+ years of experience as a Data Engineer

Responsibilities:
- Design and develop scalable solutions and payment analytics, unlocking operational and business insight
- Own data modeling, building ETL pipelines, and enabling data-driven metrics
- Build and optimize data models for our application needs
- Design and develop data pipelines and workflows that integrate data sources (structured and unstructured) across the payments landscape (see the streaming sketch below)
- Assess the customer's data infrastructure landscape (payment ancillary systems including Sanctions, Fraud, AML) across cloud environments like AWS and Azure as well as on-prem, for deployment design
- Lead the enterprise application data architecture design, framework, and services; identify and enable the services for the SaaS environment in Azure and AWS
- Implement customizations and data processing required to transform customer datasets for processing in our analytics framework/BI models
- Monitor data processing and machine learning workflows to ensure customer data is successfully processed by our BI models, debugging and resolving any issues faced along the way
- Optimize queries and warehouse/data lake costs
- Review and provide feedback on the Data Architecture Design Document/HLD for our SaaS application
- Collaborate across teams to successfully integrate all aspects of the Volante PaaS solution
- Mentor the development team

Skills:
- 3+ years of data engineering experience: data collection, preprocessing, ETL processes, and analytics
- Proficiency in data engineering architecture, metadata management, analytics, reporting, and database administration
- Strong in SQL/NoSQL, Python, JSON, data warehousing/data lakes, orchestration, and analytical tools
- ETL/pipeline design and implementation for large data
- Experience with data technologies and frameworks like Databricks, Synapse, Kafka, Spark, Elasticsearch
- Knowledge of SCD, CDC, and core data warehousing to develop cost-effective, secure data collection, storage, and distribution for a SaaS application
- Experience in application deployment in AWS or Azure with containers/Kubernetes
- Strong problem-solving skills and a passion for building data at scale

Desirable Engineering Skills:
- Knowledge of data visualization tools like Tableau
- ETL orchestration tools like Airflow and visualization tools like Grafana
- Prior experience in the Banking or Payments domain

Location: India (Pune or Chennai)
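As referenced in the responsibilities, a hedged sketch of a streaming ingest for payment events using Spark Structured Streaming over Kafka; the broker address, topic, checkpoint path, and schema are assumptions.

```python
# Sketch of a streaming ingest for payment events with Spark Structured
# Streaming from Kafka. Broker, topic, and schema are assumptions.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StringType, DoubleType

spark = SparkSession.builder.getOrCreate()

schema = (StructType()
          .add("payment_id", StringType())
          .add("amount", DoubleType())
          .add("currency", StringType()))

events = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "payments")
          .load()
          .select(F.from_json(F.col("value").cast("string"), schema).alias("p"))
          .select("p.*"))

# Continuously append parsed events to a Delta table.
query = (events.writeStream.format("delta")
         .option("checkpointLocation", "/chk/payments")
         .toTable("bronze.payments"))
```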
Posted 1 week ago
2.0 - 6.0 years
0 - 1 Lacs
Pune
Work from Office
As Lead ML Engineer , you'll lead the development of predictive models for demand forecasting, customer segmentation, and retail optimization, from feature engineering through deployment. As Lead ML Engineer, you'll lead the development of predictive models for demand forecasting, customer segmentation, and retail optimization, from feature engineering through deployment. Responsibilities: Build and deploy models for forecasting and optimization Perform time-series analysis, classification, and regression Monitor model performance and integrate feedback loops Use AWS SageMaker, MLflow, and explainability tools (e.g., SHAP or LIME)
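As referenced above, a minimal sketch of the explainability step using SHAP on a tree-based model; the data is synthetic and the feature meanings are invented for illustration.

```python
# Minimal sketch of explaining a tree-based forecaster with SHAP values.
# The data is synthetic; feature meanings are invented for illustration.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))  # e.g. price, promo flag, seasonality index
y = 2 * X[:, 0] - X[:, 1] + rng.normal(scale=0.1, size=200)

model = RandomForestRegressor(n_estimators=50).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])
print(shap_values)  # per-feature contributions for the first five rows
```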
Posted 1 week ago
5.0 - 9.0 years
6 - 15 Lacs
Pune
Work from Office
Greetings from Infosys BPM Ltd.!

You are kindly invited to the Infosys BPM walk-in drive on 21st June 2025 at Pune.

Note: Please carry a copy of this email to the venue and make sure you register your application before attending the walk-in. Please mention your Candidate ID on top of your resume. Use the link below to apply and register your application: https://career.infosys.com/jobdesc?jobReferenceCode=PROGEN-HRODIRECT-216785

Interview Information:
- Interview date: 21st June 2025
- Interview time: 10:00 AM till 01:00 PM
- Interview venue: Taluka Mulshi, Plot No. 1, Pune, Phase 1, Building B1 Ground Floor, Hinjewadi Rajiv Gandhi Infotech Park, Pune, Maharashtra 411057

Documents to Carry:
- 2 sets of your updated CV (hard copy)
- Any 2 photo identity proofs (PAN card mandatory; driving license/voter ID card/passport)

About the Job:
We're seeking a skilled Azure Data Engineer to join our dynamic team and contribute to our data management and analytics initiatives.

Job Role: Azure Data Engineer
Job Location: Pune
Experience: 5+ years
Skills: SQL + ETL + Azure + Python + PySpark + Databricks

Job Description:
As an Azure Data Engineer, you will play a crucial role in designing, implementing, and maintaining our data infrastructure on the Azure platform. You will collaborate with cross-functional teams to develop robust data pipelines, optimize data workflows, and ensure data integrity and reliability.

Responsibilities:
- Design, develop, and deploy data solutions on Azure, leveraging SQL Azure, Azure Data Factory, and Databricks
- Build and maintain scalable data pipelines to ingest, transform, and load data from various sources into Azure data repositories
- Implement data security and compliance measures to safeguard sensitive information
- Collaborate with data scientists and analysts to support their data requirements and enable advanced analytics and machine learning initiatives
- Optimize and tune data workflows for performance and efficiency
- Troubleshoot data-related issues and provide timely resolution
- Stay updated with the latest Azure data services and technologies and recommend best practices for data engineering

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field
- Proven experience as a data engineer, preferably in a cloud environment
- Strong proficiency in SQL Azure for database design, querying, and optimization
- Hands-on experience with Azure Data Factory for ETL/ELT workflows
- Familiarity with Azure Databricks for big data processing and analytics
- Experience with other Azure data services such as Azure Synapse Analytics, Azure Cosmos DB, and Azure Data Lake Storage is a plus
- Solid understanding of data warehousing concepts, data modeling, and dimensional modeling
- Excellent problem-solving and communication skills

Regards,
Infosys BPM
Posted 1 week ago
5.0 - 10.0 years
10 - 20 Lacs
Bengaluru
Hybrid
• Strong experience as an AWS/Azure/GCP Data Engineer; AWS/Azure/GCP Databricks experience is a must
• Expert proficiency in Spark (Scala), Python, ADF, and SQL
• Design and develop applications on Databricks

Notice period: immediate
Email: sachin@assertivebs.com
Posted 1 week ago
4.0 - 8.0 years
5 - 15 Lacs
Pune
Work from Office
• 4+ years of diverse experience in designing and managing innovative solutions; experience in the Financial/FinTech industry is a plus
• Proven experience in Python and web service integration
• Strong focus on TDD and/or BDD (see the sketch below)
• Hands-on experience with web/application development (HTML, CSS, REST APIs, JSON)
• Good experience with relational DBs like Postgres, Oracle SQL, MS SQL, or MySQL; non-relational DB experience is a plus
• Knowledge of the data engineering space: distributed big data processing, data profiling, ETL/ELT workflows; experience with Apache Spark or Databricks is recommended
• Knowledge of data management standards and data governance practices
• Proven experience in agile development methodology
• Azure cloud-based technology is a plus
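As referenced above, a minimal TDD-style sketch in Python: the test encodes the desired behaviour first, and the function is written to satisfy it. The tick-rounding rule is a hypothetical example, not a client requirement.

```python
# Minimal TDD-style sketch: the test below expresses the desired behaviour;
# the implementation exists only to make it pass. Run with `pytest`.
def round_to_tick(price: float, tick: float = 0.05) -> float:
    """Round a price to the nearest exchange tick size (hypothetical rule)."""
    return round(round(price / tick) * tick, 10)

def test_round_to_tick():
    assert round_to_tick(10.02) == 10.00
    assert round_to_tick(10.03) == 10.05
    assert round_to_tick(10.07, tick=0.10) == 10.10
```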
Posted 1 week ago
10.0 - 18.0 years
15 - 30 Lacs
Pune, Bengaluru
Work from Office
Role & responsibilities
AWS with Databricks infrastructure lead:
- Experienced in setting up Unity Catalog
- Setting out how the group is to consume model-serving processes
- Developing MLflow routines
- Experienced with ML models
- Has used Gen AI features with guardrails, experimentation, and monitoring
Posted 1 week ago
3.0 - 5.0 years
15 - 30 Lacs
Bengaluru
Work from Office
Position summary:
We are seeking a Senior Software Development Engineer (Data Engineering) with 3-5 years of experience to design, develop, and optimize data pipelines and analytics workflows using Snowflake, Databricks, and Apache Spark. The ideal candidate will have a strong background in big data processing, cloud data platforms, and performance optimization to enable scalable data-driven solutions.

Key Responsibilities:
- Work with cloud-based data solutions (Azure, AWS, GCP)
- Implement data modeling and warehousing solutions
- Develop and maintain data pipelines for efficient data extraction, transformation, and loading (ETL) processes
- Design and optimize data storage solutions, including data warehouses and data lakes
- Ensure data quality and integrity through data validation, cleansing, and error handling
- Collaborate with data analysts, data architects, and software engineers to understand data requirements and deliver relevant data sets (e.g., for business intelligence)
- Implement data security measures and access controls to protect sensitive information
- Monitor and troubleshoot issues in data pipelines, notebooks, and SQL queries to ensure seamless data processing
- Develop and maintain Power BI dashboards and reports; work with DAX and Power Query to manipulate and transform data

Basic Qualifications:
- Bachelor's or Master's degree in Computer Science or Data Science
- 3-5 years of experience in data engineering, big data processing, and cloud-based data platforms
- Proficient in SQL and Python or Scala for data manipulation and processing
- Proficient in developing data pipelines using Azure Synapse, Azure Data Factory, and Microsoft Fabric
- Experience with Apache Spark, Databricks, and Snowflake is highly beneficial for handling big data and cloud-based analytics solutions

Preferred Qualifications:
- Knowledge of streaming data processing (Apache Kafka, Flink, Kinesis, Pub/Sub)
- Experience with BI and analytics tools (Tableau, Power BI, Looker)
- Familiarity with data observability tools (Monte Carlo, Great Expectations)
- Contributions to open-source data engineering projects
Posted 1 week ago
6.0 - 11.0 years
18 - 33 Lacs
Pune, Bengaluru
Work from Office
Role: Data Engineer
Experience: 6-8 years (6+ years relevant data engineering experience)
Notice period: immediate joiners only
Job location: Pune and Bangalore

Mandatory skills: strong PySpark (programming) and Databricks

Technical and professional skills:
We are looking for a flexible, fast-learning, technically strong Data Engineer. Expertise is required in the following areas:
- Proficient in Azure cloud services
- Architect and implement ETL and data movement solutions
- Design and implement data solutions using the medallion architecture, ensuring effective organization and flow of data through the bronze, silver, and gold layers (a sketch follows below)
- Optimize data storage and processing strategies to enhance performance and data accessibility across the stages of the medallion architecture
- Collaborate with data engineers and analysts to define data access patterns and establish efficient data pipelines
- Develop and oversee data flow strategies to ensure seamless data movement and transformation across environments and stages of the data lifecycle
- Migrate data from traditional database systems to the cloud
- Strong hands-on experience working with streaming datasets
- Build complex notebooks in Databricks to implement business transformations
- Hands-on expertise in data refinement using PySpark and Spark SQL
- Familiarity with building datasets using Scala
- Familiarity with tools such as Jira and GitHub
- Experience leading agile scrum, sprint planning, and review sessions
- Good communication and interpersonal skills

Reach us: If you are interested in this position and meet the above qualifications, please reach out directly at swati@cielhr.com and share your updated resume highlighting your relevant experience.
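As referenced above, a minimal PySpark/Delta sketch of the bronze-silver-gold medallion flow; the paths and table names are illustrative.

```python
# Sketch of the medallion flow: raw ingest to bronze, cleansed silver,
# aggregated gold. Paths and table names are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: land source data as-is, with ingestion metadata.
bronze = (spark.read.json("/landing/orders/")
          .withColumn("_ingested_at", F.current_timestamp()))
bronze.write.format("delta").mode("append").saveAsTable("bronze.orders")

# Silver: enforce types, deduplicate, drop bad records.
silver = (spark.table("bronze.orders")
          .dropDuplicates(["order_id"])
          .filter(F.col("order_id").isNotNull())
          .withColumn("amount", F.col("amount").cast("decimal(18,2)")))
silver.write.format("delta").mode("overwrite").saveAsTable("silver.orders")

# Gold: business-level aggregate for reporting.
gold = (spark.table("silver.orders")
        .groupBy("customer_id")
        .agg(F.sum("amount").alias("lifetime_value")))
gold.write.format("delta").mode("overwrite").saveAsTable("gold.customer_value")
```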
Posted 1 week ago