4.0 - 5.0 years
1 - 2 Lacs
Noida, Kolkata, Bengaluru
Work from Office
Azure Data Engineer: PySpark, Python, Azure Databricks, Azure Data Factory, SQL
Posted 2 weeks ago
10.0 - 15.0 years
9 - 10 Lacs
Hyderabad
Remote
Job Title: Senior Azure Health Data Services (AHDS) Engineer
Location: Remote
Duration: 5 months (700+ hours)

Project Overview
We are seeking a highly skilled Senior Azure Health Data Services (AHDS) Engineer to support a client engagement focused on migrating and scaling healthcare data infrastructure on Azure. This contract spans multiple phases totaling 700+ hours and involves a complex FHIR migration, implementation of payer/provider APIs, and compliance with healthcare data standards.

Phase 1 Responsibilities (Approx. 400 hours)
- Migrate existing FHIR services to Azure Health Data Services (AHDS)
- Apply security layers and manage platform performance tuning
- Plan and execute regression and conformance testing
- Coordinate and manage sprint releases
- Finalize client requirements and align the consent model
- Maintain and update living specification documentation

Phase 2 Responsibilities (Approx. 300 hours)
- Implement Payer-to-Payer, Provider, and Prior Authorization APIs on AHDS
- Optimize the platform for scalability and performance
- Conduct system and release testing of new APIs
- Manage the deployment calendar
- Update technical documentation and ensure traceability for new APIs
- Gather and document change requests

Required Skills & Experience
- 10+ years of hands-on experience with Microsoft Azure solutions
- Proven experience with Azure Health Data Services, including FHIR APIs (see the sketch below), DICOM services, and HL7v2 ingestion and transformation
- Strong understanding of healthcare data standards, compliance, and security (HIPAA, HITRUST, GDPR)
- Proficiency in Azure Functions, API Management, Logic Apps, Data Factory, Azure Key Vault, Event Hubs, and Event Grid
- Experience with testing frameworks, conformance testing, and API validation

Preferred Qualifications
- Microsoft certifications (e.g., Azure Solutions Architect, Azure Developer, Health Data Services)
- Experience building and scaling payer/provider data exchange APIs
- Familiarity with consent models, interoperability standards, and healthcare lifecycle integration

Additional Information
- Work is fully remote (offshore and onshore candidates are eligible)
- Contract is fully approved; ready to onboard
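For orientation, here is a minimal sketch of the kind of FHIR API call the role above involves, using Python. The workspace URL is a placeholder, and the token scope pattern is an assumption about how the target AHDS instance is configured; a real engagement would follow the client's auth setup.

```python
# Hypothetical sketch: querying an Azure Health Data Services FHIR endpoint.
# The workspace/service URL below is a placeholder, not a real endpoint.
import requests
from azure.identity import DefaultAzureCredential

FHIR_URL = "https://myworkspace-myfhir.fhir.azurehealthcareapis.com"  # placeholder

# AHDS FHIR services accept Azure AD tokens scoped to the service URL.
credential = DefaultAzureCredential()
token = credential.get_token(f"{FHIR_URL}/.default").token

headers = {"Authorization": f"Bearer {token}", "Accept": "application/fhir+json"}

# Standard FHIR search: fetch Patient resources updated since a given date.
resp = requests.get(
    f"{FHIR_URL}/Patient",
    params={"_lastUpdated": "ge2024-01-01", "_count": 50},
    headers=headers,
    timeout=30,
)
resp.raise_for_status()
bundle = resp.json()  # a FHIR Bundle resource
print(bundle["resourceType"], bundle.get("total"))
```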
Posted 2 weeks ago
4.0 - 5.0 years
4 - 8 Lacs
Hyderabad
Hybrid
Role & responsibilities:
- Write and understand complex SQL queries, including joins and CTEs, to extract and manipulate data (see the sketch below).
- Manage and monitor data pipelines in Azure Data Factory and Databricks.
- Understand and manage the integration with Veeva CRM and Salesforce.
- Ensure data quality checks and business-level checks in ADF and Databricks data pipelines.
- Monitor and troubleshoot data and pipeline issues, perform root cause analysis, and implement corrective actions.
- Monitor system performance and reliability, troubleshoot issues, and ensure data delivery within the committed SLA.
- Strong understanding of Azure DevOps and the release process (Dev-to-Ops handover).

Preferred candidate profile:
- Good communication skills and the ability to work effectively in a team environment.
- Collaborate with business stakeholders and other technical members to provide operational services.
- Strong documentation skills.

Note: The work shift will be either 11:30 am to 8:30 pm IST or 4:30 pm to 2:30 am IST.
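As a flavor of the CTE-based SQL work this role describes, here is a minimal sketch of a data-quality check run via PySpark on Databricks. The schema, table, and column names are placeholders invented for illustration.

```python
# Hypothetical sketch: a CTE-based freshness/SLA check of the kind described
# above, run on Databricks. Table and column names are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("dq-check").getOrCreate()

result = spark.sql("""
    WITH latest_loads AS (                -- most recent load per account
        SELECT account_id,
               MAX(load_ts) AS last_load_ts
        FROM   crm.account_activity
        GROUP  BY account_id
    ),
    stale AS (                            -- accounts not refreshed in 24h
        SELECT l.account_id
        FROM   latest_loads l
        WHERE  l.last_load_ts < current_timestamp() - INTERVAL 24 HOURS
    )
    SELECT COUNT(*) AS stale_accounts FROM stale
""")

stale_count = result.collect()[0]["stale_accounts"]
if stale_count > 0:
    raise ValueError(f"SLA risk: {stale_count} accounts missed the daily refresh")
```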
Posted 2 weeks ago
3.0 - 8.0 years
3 - 7 Lacs
Lucknow
Work from Office
Azure Data Factory:
- Develop Azure Data Factory objects: ADF pipelines, configuration, parameters, variables, integration runtimes
- Hands-on knowledge of ADF activities (such as Copy, Stored Procedure, Lookup, etc.) and Data Flows
- ADF data ingestion and integration with other services

Azure Databricks:
- Experience with Big Data components such as Kafka, Spark SQL, DataFrames, Hive, etc. implemented using Azure Databricks is preferred
- Azure Databricks integration with other services
- Read and write data in Azure Databricks (see the sketch below)
- Best practices in Azure Databricks

Synapse Analytics:
- Import data into Azure Synapse Analytics with and without using PolyBase
- Implement a data warehouse with Azure Synapse Analytics
- Query data in Azure Synapse Analytics
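A minimal sketch of the Databricks read/write pattern mentioned above: reading raw files from ADLS Gen2 and landing a Delta table. The storage account, container paths, and key column are placeholders, and cluster authentication to the storage account is assumed to be configured already.

```python
# Hypothetical sketch: reading raw CSVs from ADLS Gen2 and writing a Delta
# table from Azure Databricks. Paths and column names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

raw_path = "abfss://raw@mystorageacct.dfs.core.windows.net/sales/2024/"  # placeholder

# Read the raw zone; header inference kept simple for the sketch.
df = (spark.read
      .option("header", "true")
      .csv(raw_path))

# Light transformation before landing in the curated zone.
curated = (df
           .withColumn("ingest_ts", F.current_timestamp())
           .dropDuplicates(["order_id"]))  # placeholder key column

(curated.write
    .format("delta")
    .mode("append")
    .save("abfss://curated@mystorageacct.dfs.core.windows.net/sales/"))
```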
Posted 2 weeks ago
4.0 - 9.0 years
6 - 10 Lacs
Kolkata
Work from Office
- Building and operationalizing large-scale enterprise data solutions and applications using one or more Azure data and analytics services in combination with custom solutions: Azure Synapse/Azure SQL DWH, Azure Data Lake, Azure Blob Storage, Spark, HDInsight, Databricks, Cosmos DB, Event Hub/IoT Hub.
- Experience migrating on-premises data warehouses to data platforms on the Azure cloud.
- Designing and implementing data engineering, ingestion, and transformation functions using Azure Synapse or Azure SQL Data Warehouse; Spark on Azure is available in HDInsight and Databricks.
- Good customer communication.
- Good analytical skills.
Posted 2 weeks ago
7.0 - 12.0 years
14 - 24 Lacs
Visakhapatnam
Work from Office
Skills: SQL data structures, Databricks, Azure Data Lake, PySpark, Snowflake, Hadoop, Spark
Experience: 8+ years
Areas of expertise: SQL Server, Azure Data Factory, PySpark, Microsoft Fabric, data modeling, Azure DevOps

Responsibilities:
- Develop metadata-driven pipelines in Azure Data Factory to promote reusability, scalability, and parameterization.
- Design and implement ETL solutions to extract, transform, and load data from SQL Server into Azure and Microsoft Fabric environments (Lakehouse/Warehouse).
- Build and manage incremental-load, full-load, and CDC-based pipelines using ADF or PySpark (see the sketch below).
- Design and implement a medallion-architecture data platform.
- Integrate ADF with Azure DevOps for version control, CI/CD automation, and release management workflows.

Requirements:
- Strong understanding of SQL data structures.
- Proficiency with Azure Data Factory (ADF) for data integration and orchestration.
- Experience building scalable and efficient data warehouses and marts.
- Understanding of star and snowflake schemas for DW models.
- Familiarity with Microsoft Fabric and its applications in data architecture.
- Experience with other Azure data services such as Azure Synapse Analytics, Azure SQL Database, and Azure Data Lake Storage.
- Experience with big data technologies like Hadoop, Spark, and Databricks.
- Excellent problem-solving skills and attention to detail.
- Strong communication and interpersonal skills.
- Ability to work collaboratively in a fast-paced, team-oriented environment.
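A minimal sketch of the incremental/CDC pipeline pattern this posting describes: a watermark-driven bronze-to-silver merge on Delta Lake. The control table, source/target table names, and the watermark column are placeholders, not a prescribed design.

```python
# Hypothetical sketch: a watermark-driven incremental load from a bronze to a
# silver Delta table, in the spirit of the medallion pipelines described
# above. Table names and the watermark column are placeholders.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Last processed watermark, persisted in a small control table.
last_wm = (spark.table("ops.watermarks")
           .filter(F.col("table_name") == "silver.orders")
           .agg(F.max("watermark")).collect()[0][0])

incoming = spark.table("bronze.orders").filter(F.col("modified_ts") > last_wm)

# Upsert changed rows into the silver table (CDC-style merge).
(DeltaTable.forName(spark, "silver.orders").alias("t")
    .merge(incoming.alias("s"), "t.order_id = s.order_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute())

# Advance the watermark for the next run.
new_wm = incoming.agg(F.max("modified_ts")).collect()[0][0]
if new_wm is not None:
    spark.sql(f"UPDATE ops.watermarks SET watermark = '{new_wm}' "
              "WHERE table_name = 'silver.orders'")
```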
Posted 2 weeks ago
4.0 - 6.0 years
15 - 25 Lacs
Pune, Bengaluru, Mumbai (All Areas)
Hybrid
Data Engineer with 4+ years of experience, strong in any cloud platform, ETL tools, and scripting languages, with at least one basic cloud certification. Contact\Whatsapp: +919985831110 \ prashanth@livecjobs.com *JOB IN BANGALORE, PUNE, MUMBAI* Required candidate profile: experience in any of AWS, Azure, or GCP; SQL; any ETL tool; Python or UNIX shell scripting. Certification: any basic cloud (AWS, Azure, GCP) certification.
Posted 2 weeks ago
2.0 - 4.0 years
7 - 11 Lacs
Hyderabad
Work from Office
Overview
Seeking an Associate Manager, Data Operations, to support our growing data organization. In this role, you will assist in maintaining data pipelines and the corresponding platforms (on-prem and cloud) while working closely with global teams on DataOps initiatives.
- Support the day-to-day operations of data pipelines, ensuring data governance, reliability, and performance optimization on Microsoft Azure. Hands-on experience with Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and real-time streaming architectures is preferred.
- Assist in ensuring the availability, scalability, automation, and governance of enterprise data pipelines supporting analytics, AI/ML, and business intelligence.
- Contribute to DataOps programs, aligning with business objectives, data governance standards, and enterprise data strategy.
- Help implement real-time data observability, monitoring, and automation frameworks to improve data reliability, quality, and operational efficiency.
- Support the development of governance models and execution roadmaps to enhance efficiency across Azure, AWS, GCP, and on-prem environments.
- Work on CI/CD integration, data pipeline automation, and self-healing capabilities to improve enterprise-wide DataOps processes.
- Collaborate with cross-functional teams to support and maintain next-generation Data & Analytics platforms while promoting an agile and high-performing DataOps culture.
- Assist in the adoption of Data & Analytics technology transformations, ensuring automation for proactive issue identification and resolution.
- Partner with cross-functional teams to support process improvements, best practices, and operational efficiencies within DataOps.

Responsibilities
- Assist in the implementation and optimization of enterprise-scale data pipelines using Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and Azure Stream Analytics.
- Support data ingestion, transformation, orchestration, and storage workflows, ensuring data reliability, integrity, and availability.
- Help ensure seamless batch, real-time, and streaming data processing, focusing on high availability and fault tolerance.
- Contribute to DataOps automation efforts, including CI/CD for data pipelines, automated testing, and version control using Azure DevOps and Terraform.
- Collaborate with Data Engineering, Analytics, AI/ML, CloudOps, and Business Intelligence teams to support data-driven decision-making.
- Assist in aligning DataOps practices with regulatory and security requirements by working with IT, data stewards, and compliance teams.
- Support data operations and sustainment activities, including testing and monitoring processes for global products and projects.
- Participate in data capture, storage, integration, governance, and analytics efforts, working alongside cross-functional teams.
- Assist in managing day-to-day DataOps activities, ensuring adherence to service-level agreements (SLAs) and business requirements.
- Engage with SMEs and business stakeholders to ensure data platform capabilities align with business needs.
- Contribute to Agile work intake and execution processes, helping to maintain efficiency in data platform teams.
- Help troubleshoot and resolve issues related to cloud infrastructure and data services in collaboration with technical teams.
- Support the development and automation of operational policies and procedures, improving efficiency and resilience.
- Assist in incident response and root cause analysis, contributing to self-healing mechanisms and mitigation strategies.
- Foster a customer-centric approach, advocating for operational excellence and continuous improvement in service delivery.
- Help build a collaborative, high-performing team culture, promoting automation and efficiency within DataOps.
- Adapt to shifting priorities and support cross-functional teams in maintaining productivity and achieving business goals.
- Utilize technical expertise in cloud and data operations to support service reliability and scalability.

Qualifications
- 5+ years of technology work experience in a large-scale global organization, CPG industry experience preferred.
- 5+ years of experience in Data & Analytics roles, with hands-on expertise in data operations and governance.
- 2+ years of experience working within a cross-functional IT organization, collaborating with multiple teams.
- Experience in a lead or senior support role, with a focus on DataOps execution and delivery.
- Strong communication skills, with the ability to collaborate with stakeholders and articulate technical concepts to non-technical audiences.
- Analytical and problem-solving abilities, with a focus on prioritizing customer needs and operational improvements.
- Customer-focused mindset, ensuring high-quality service delivery and operational efficiency.
- Growth mindset, with a willingness to learn and adapt to new technologies and methodologies in a fast-paced environment.
- Experience supporting data operations in a Microsoft Azure environment, including data pipeline automation.
- Familiarity with Site Reliability Engineering (SRE) principles, such as monitoring, automated issue remediation, and scalability improvements.
- Understanding of operational excellence in complex, high-availability data environments.
- Ability to collaborate across teams, building strong relationships with business and IT stakeholders.
- Basic understanding of data management concepts, including master data management, data governance, and analytics.
- Knowledge of data acquisition, data catalogs, data standards, and data management tools.
- Strong execution and organizational skills, with the ability to follow through on operational plans and drive measurable results.
- Adaptability in a dynamic, fast-paced environment, with the ability to shift priorities while maintaining productivity.
Posted 2 weeks ago
9.0 - 14.0 years
25 - 40 Lacs
Noida, Bengaluru
Hybrid
Role & responsibilities
We are seeking an experienced and visionary Technical Expert (Architect) with deep expertise in Microsoft technologies and a strong focus on Microsoft analytics solutions. The ideal candidate will design, implement, and optimize end-to-end analytics architectures, enabling organizations to derive actionable insights from their data. This role requires a blend of technical prowess, strategic thinking, and leadership capabilities to guide teams and stakeholders toward innovative solutions.

Key Responsibilities
- Architectural Design: Lead the design and development of scalable and secure data analytics architectures using Microsoft technologies (e.g., Power BI, Azure Synapse Analytics, SQL Server). Define the data architecture, integration strategies, and frameworks to meet organizational goals.
- Technical Leadership: Serve as the technical authority on Microsoft analytics solutions, ensuring best practices in performance, scalability, and reliability. Guide cross-functional teams in implementing analytics platforms and solutions.
- Solution Development: Oversee the development of data models, dashboards, and reports using Power BI and Azure Data Services. Implement data pipelines leveraging Azure Data Factory, Data Lake, and other Microsoft technologies.
- Stakeholder Engagement: Collaborate with business leaders to understand requirements and translate them into robust technical solutions. Present architectural designs, roadmaps, and innovations to technical and non-technical audiences.
- Continuous Optimization: Monitor and optimize analytics solutions for performance and cost-effectiveness. Stay updated on the latest Microsoft technologies and analytics trends to ensure the organization remains competitive.
- Mentorship and Training: Mentor junior team members and provide technical guidance on analytics projects. Conduct training sessions to enhance the technical capabilities of internal teams.

Required Skills and Qualifications
- Experience: 9+ years of experience working with Microsoft analytics and related technologies; proven track record of designing and implementing analytics architectures.
- Technical Expertise: Deep knowledge of Power BI, Azure Synapse Analytics, Azure Data Factory, SQL Server, Azure Data Lake, and Fabric. Proficiency in data modeling, ETL processes, and performance tuning.
- Soft Skills: Strong problem-solving and analytical abilities; excellent communication and interpersonal skills for stakeholder management.
- Certifications (Preferred): Microsoft Certified: Azure Solutions Architect Expert; Microsoft Certified: Data Analyst Associate; Microsoft Certified: Azure Data Engineer Associate.
Posted 2 weeks ago
15.0 - 20.0 years
10 - 14 Lacs
Hyderabad
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: Microsoft Azure Databricks
Good-to-have skills: Microsoft Azure Architecture
Minimum experience: 7.5 years
Educational Qualification: 15 years of full-time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure that application development aligns with business objectives, overseeing project timelines, and facilitating communication among stakeholders to drive project success. You will also engage in problem-solving activities, ensuring that the applications meet the required standards and specifications while fostering a collaborative environment for your team.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge.
- Continuously assess and improve application performance and user experience.

Professional & Technical Skills:
- Must-have: Proficiency in Microsoft Azure Databricks.
- Good-to-have: Experience with Microsoft Azure Architecture.
- Strong understanding of cloud computing principles and practices.
- Experience in application design and development methodologies.
- Familiarity with DevOps practices and tools for continuous integration and deployment.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Microsoft Azure Databricks.
- This position is based at our Hyderabad office.
- 15 years of full-time education is required.
Posted 2 weeks ago
3.0 - 8.0 years
9 - 13 Lacs
Chennai
Work from Office
Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: Microsoft Power Business Intelligence (BI), Microsoft Azure Databricks
Minimum experience: 3 years
Educational Qualification: 15 years of full-time education

Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, encompassing the relevant data platform components. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models, while also engaging in discussions to refine and enhance the data architecture. You will analyze data requirements and translate them into effective solutions, ensuring that the data platform meets the needs of various stakeholders. Additionally, you will participate in team meetings to share insights and contribute to the overall strategy of the data platform.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation/contribution in team discussions is required.
- Contribute to providing solutions for work-related problems.
- Engage in continuous learning to stay updated with the latest trends and technologies in data platforms.
- Assist in the documentation of data architecture and integration processes.

Professional & Technical Skills:
- Must-have: Proficiency in Databricks Unified Data Analytics Platform.
- Good-to-have: Experience with Microsoft Power Business Intelligence (BI) and Microsoft Azure Databricks.
- Strong understanding of data integration techniques and methodologies.
- Experience with data modeling and database design principles.
- Familiarity with cloud-based data solutions and architectures.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Chennai office.
- 15 years of full-time education is required.
Posted 2 weeks ago
15.0 - 20.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, load) processes to migrate and deploy data across systems.
Must-have skills: PySpark, Microsoft Azure Databricks, Microsoft Azure Analytics Services, Microsoft Azure Data Services
Good-to-have skills: NA
Minimum experience: 5 years
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge.
- Continuously evaluate and improve data processes to enhance efficiency.

Professional & Technical Skills:
- Must-have: Proficiency in PySpark, Microsoft Azure Databricks, Microsoft Azure Data Services, and Microsoft Azure Analytics Services.
- Strong experience in designing and implementing data pipelines.
- Proficient in data modeling and database design.
- Familiarity with data warehousing concepts and technologies.
- Experience with data quality and data governance practices.

Additional Information:
- The candidate should have a minimum of 5 years of experience in PySpark.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.
Posted 2 weeks ago
8.0 - 11.0 years
30 - 40 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Job Summary:
We are looking for an experienced and visionary Lead Data Engineer to architect and drive the development of scalable, secure, and high-performance data solutions. This role requires deep technical expertise in Python, Apache Spark, Delta Lake, and orchestration tools like Databricks Workflows or Azure Data Factory. The ideal candidate will also bring a strong understanding of data governance, metadata management, and regulatory compliance in the insurance and financial services domains.
- Proficient in developing Python applications and Spark-based workflows, leveraging Delta Lake, and orchestrating jobs using Databricks Workflows or Azure Data Factory.
- Able to translate retention metadata, business rules, and data governance policies into reusable pipelines (see the sketch below).
- Strong understanding of data privacy, security, and regulatory needs in the insurance and financial domains.

Key Responsibilities:
- Lead the design and architecture of end-to-end data engineering solutions across cloud platforms.
- Develop and oversee robust data pipelines and ETL workflows using Python and Apache Spark.
- Architect and implement scalable Delta Lake solutions for structured and semi-structured data.
- Orchestrate complex workflows using Databricks Workflows or Azure Data Factory.
- Translate business rules, retention metadata, and data governance policies into reusable, modular, and scalable pipeline components.
- Ensure adherence to data privacy, security, and compliance standards (e.g., GDPR, HIPAA).
- Mentor and guide junior data engineers, fostering best practices in coding, testing, and deployment.
- Collaborate with cross-functional teams, including data architects, analysts, and business stakeholders, to align data solutions with business goals.
- Drive performance optimization, cost-efficiency, and innovation in data engineering practices.

Required Skills & Qualifications:
- 8+ years of experience in data engineering, with at least 2 years in a lead or architect role.
- Expert-level proficiency in Python, Apache Spark, and Delta Lake.
- Strong experience with Databricks Workflows and/or Azure Data Factory.
- Deep understanding of data governance, metadata management, and business rule integration.
- Proven track record of implementing data privacy, security, and regulatory compliance in insurance or financial domains.
- Strong leadership, communication, and stakeholder management skills.
- Experience with cloud platforms such as Azure, AWS, or GCP.

Preferred Qualifications:
- Experience with CI/CD pipelines and DevOps practices in data engineering.
- Familiarity with data cataloging and data quality tools.
- Certifications in Azure Data Engineering or related technologies.
- Exposure to enterprise data architecture and modern data stack tools.
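As a flavor of turning retention metadata into reusable pipeline components, here is a minimal sketch under stated assumptions: the governance rules table, its columns, and the function shape are all hypothetical; a real implementation would source policies from the organization's governance catalog.

```python
# Hypothetical sketch: driving a retention-policy step from metadata, as the
# role above describes. The rules table and column names are placeholders.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

def apply_retention(table_name: str, ts_column: str, retain_days: int) -> None:
    """Delete rows older than the configured retention window."""
    (DeltaTable.forName(spark, table_name)
        .delete(F.col(ts_column) < F.date_sub(F.current_date(), retain_days)))

# Drive the same step from metadata instead of hard-coding it per table.
rules = spark.table("governance.retention_rules").collect()  # placeholder
for rule in rules:
    apply_retention(rule["table_name"], rule["ts_column"], rule["retain_days"])
```

The design point is that the retention logic lives in one reusable function, and per-table policy lives in data, so adding a table to the policy requires no code change.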
Posted 2 weeks ago
4.0 - 9.0 years
0 - 0 Lacs
Pune, Chennai, Bengaluru
Work from Office
Hexaware Technologies
Location: Chennai, Pune, Bengaluru, Mumbai, Coimbatore

Role & responsibilities: Test Lead, Data ETL Testing with Cloud (Azure)
ETL Tester with 3.6+ years of cloud data testing experience
• Experience in data migration and ETL transformation testing
• Exposure to Azure Data Lake and Databricks is a must
• Develop technical specifications for data loading rules
• Mentor and provide technical guidance for Software Engineers
• Adhere to and produce artifacts (technical specifications, architectural specifications, data models, installation and qualification guides, and release notes) for the software development process
• Provide application support for existing products
• Help with release management, using versioning tools such as SVN, and executing live deployments
• Demonstrated ability to liaise with multiple stakeholders
• Experience working in an Agile software development methodology, authoring technical documents/specifications

Preferred candidate profile: Any
Posted 2 weeks ago
10.0 - 15.0 years
15 - 30 Lacs
Pallavaram
Work from Office
Data Engineering Lead
Company Name: Blackstraw.ai
Office Location: Chennai (Work from Office)
Job Type: Full-time
Experience: 10 - 15 Years
Candidates who can join immediately will be preferred.

Job Description: As a lead data engineer, you will oversee data architecture, ETL processes, and analytics pipelines, ensuring efficiency, scalability, and quality.

Key Responsibilities:
- Work with clients to understand their data; based on that understanding, build the data structures and pipelines.
- Work on the application end to end, collaborating with UI and other development teams.
- Work with various cloud providers such as Azure and AWS.
- Engineer data using the Hadoop/Spark ecosystem.
- Design, build, optimize, and support new and existing data pipelines.
- Orchestrate jobs using tools such as Oozie and Airflow (see the sketch below).
- Develop programs for cleaning and processing data.
- Build the data pipelines to migrate and load data into HDFS, either on-premises or in the cloud.
- Develop data ingestion/processing/integration pipelines effectively.
- Create Hive data structures and metadata, and load data into data lake / big data warehouse environments.
- Performance-tune data pipelines to minimize cost.
- Keep code version control and the Git repository up to date.
- Be able to explain the data pipeline to internal and external stakeholders.
- Build and maintain CI/CD for the data pipelines.
- Manage unit testing of all data pipelines.

Tech Stack:
- Minimum of 5+ years of working experience with Spark and Hadoop ecosystems.
- Minimum of 4+ years of working experience designing data streaming pipelines.
- Expertise in Python, Scala, or Java.
- Experience in data ingestion and integration into a data lake using Hadoop-ecosystem tools such as Sqoop, Spark, SQL, Hive, Airflow, etc.
- Experience performance-tuning data pipelines.
- Minimum of 3+ years of experience with NoSQL and Spark Streaming.
- Knowledge of Kubernetes and Docker is a plus.
- Experience with cloud services, either Azure or AWS.
- Experience with on-prem distributions such as Cloudera, Hortonworks, or MapR.
- Basic understanding of CI/CD pipelines.
- Basic knowledge of the Linux environment and commands.

Preferred Qualifications:
- Bachelor's degree in computer science or a related field.
- Proven experience with big data ecosystem tools such as Sqoop, Spark, SQL, APIs, Hive, Oozie, Airflow, etc.
- Solid experience in all phases of the SDLC, with 10+ years of experience (plan, design, develop, test, release, maintain, and support).
- Hands-on experience using Azure's data engineering stack.
- Experience implementing projects in programming languages such as Scala or Python.
- Working experience with complex SQL data-merging techniques such as windowing functions.
- Hands-on experience with on-prem distribution tools such as Cloudera, Hortonworks, or MapR.
- Excellent communication, presentation, and problem-solving skills.

Key Traits:
- Excellent communication skills.
- Self-motivated and willing to work as part of a team.
- Able to collaborate and coordinate with onshore and offshore teams.
- A proactive problem solver who tackles the challenges that come their way.
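A minimal sketch of the Airflow orchestration mentioned above: a daily DAG that submits a PySpark ingestion job. The DAG id, connection id, script path, and Spark configuration are placeholders, and the `schedule` keyword assumes a recent Airflow 2.x release.

```python
# Hypothetical sketch: orchestrating a daily Spark ingestion job with Airflow.
# DAG id, connection id, and script path are placeholders.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

with DAG(
    dag_id="daily_ingest_to_hive",              # placeholder
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=10)},
) as dag:
    ingest = SparkSubmitOperator(
        task_id="ingest_orders",
        application="/jobs/ingest_orders.py",   # placeholder PySpark script
        conn_id="spark_default",
        conf={"spark.sql.shuffle.partitions": "200"},
    )
```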
Posted 2 weeks ago
12.0 - 20.0 years
35 - 50 Lacs
Gurugram, Chennai, Bengaluru
Work from Office
Role & responsibilities
You will:
- Own the roadmap, timelines, and delivery of data engineering and data science work streams by building end-to-end schedules and managing cross-team and cross-functional project timelines in collaboration with engineering management, product management, and business stakeholders.
- Lead multiple data-solution programs covering data pipelines, visualizations, data alerts, advanced analytics, and machine learning methods, translating raw data into strategic insights and recommendations for leadership and business teams.
- Lead the technical delivery, implementation, and business adoption of new scalable and reliable data analytics and business intelligence solutions for cross-functional teams.
- Act as the custodian of agile and scrum processes: conduct retrospectives, understand best practices, drive process improvements, and find new ways of operating with a focus on engineering efficiency and simplicity of processes.
- Ensure that the team adheres to estimates, schedules, and the agreed quality parameters of their tasks.
- Be proficient in creating quarterly and sprint-wise plans and sprint delivery reports, and drive improvements on any deviations from set goals.
- Manage risks and issues to closure; manage and track all action items with the respective stakeholders and bring them to closure.
- Collaborate across teams and with technology vendors to enable financial plans, operating plans, vendor onboarding, and continuous monitoring of performance and cost.
- Create presentations based on multiple sources of data, bring out insights from the data, and recommend actions and plans for their execution.

Preferred candidate profile
You have:
- Experience as a software developer and as a team lead in the data engineering space; engineering manager experience would be an added advantage.
- At least 2 years of experience developing data solutions using any data engineering methods.
- Working knowledge of SQL.
- Worked in a startup or fast product-development environment with frugality and some degree of ambiguity.
- B.Tech (must-have); an MBA would be good to have.
- A proven track record of delivering enterprise-level ETL / data-warehouse products and projects.
- At least 2 years of experience running AWS, GCP, or Azure data projects; Databricks knowledge would be an added advantage.
- 12 to 19 years of experience in the software industry.

Mandatory Skills: Project Management, Data Warehouse, Data Lake, Analytics, Cloud Platform
Desirable Skills: Programming, Architecture, Solutioning, Design, and DevOps. Candidates with a healthcare projects background preferred.

Directly send me your resume with the below details to netra.prakash@tredence.com:
Current CTC-
Expected CTC-
Notice period-
Posted 2 weeks ago
6.0 - 7.0 years
14 - 18 Lacs
Bengaluru
Work from Office
As an Associate Software Developer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include:
- Implementing and validating predictive models, and creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques.
- Designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements.
- Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours.
- Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Total experience: 6-7 years (relevant: 4-5 years)
- Mandatory skills: Azure Databricks, Python/PySpark, SQL, GitHub, Azure DevOps, Azure Blob
- Ability to use programming languages like Java, Python, Scala, etc. to build pipelines that extract and transform data from a repository to a data consumer
- Ability to use Extract, Transform, and Load (ETL) tools and/or data integration or federation tools to prepare and transform data as needed
- Ability to use leading-edge tools such as Linux, SQL, Python, Spark, Hadoop, and Java

Preferred technical and professional experience:
- You thrive on teamwork and have excellent verbal and written communication skills.
- Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions
- Ability to communicate results to technical and non-technical audiences
Posted 2 weeks ago
2.0 - 5.0 years
4 - 8 Lacs
Pune
Work from Office
- The ability to be a team player
- The ability and skill to train other people in procedural and technical topics
- Strong communication and collaboration skills

Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise: Able to write complex SQL queries; experience with Azure Databricks
Preferred technical and professional experience: Excellent communication and stakeholder management skills
Posted 2 weeks ago
3.0 - 8.0 years
9 - 19 Lacs
Bengaluru
Hybrid
Data engineers: Help optimize workflows that handle millions of images at minimum cost.
• Every bit of optimization, when scaled to millions of images, saves a lot of money.
• You must be extremely good at programming cloud workflows, with a strong eye on optimization at scale.

Competencies:
• First, a belief that "the right cost is everything": optimized but fast on the cloud.
• An attitude of making data flow through cloud workflows as efficiently as possible.
• The right cost can even mean finding the best storage solution for cheap, fast retrieval.
• Extremely good at software development: not bragging about language proficiency, but willing to learn even assembly language if required; for now, mostly Python.
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
pune, maharashtra
On-site
As a Microsoft Fabric Professional at YASH Technologies, you will work with cutting-edge technologies to bring about real positive changes in an increasingly virtual world. You will contribute to business transformation by leveraging your experience in Azure Fabric, Azure Data Factory, Azure Databricks, Azure Synapse, Azure Storage Services, Azure SQL, ETL, Azure Cosmos DB, Event Hub, Azure Data Catalog, Azure Functions, and Azure Purview.

With 5-8 years of experience in Microsoft Cloud solutions, you will create pipelines, datasets, dataflows, and integration runtimes, and monitor pipelines. Your role will also entail extracting, transforming, and loading data from source systems using Azure Databricks, as well as preparing DB design documents based on client requirements. Collaborating with the development team, you will create database structures, queries, and triggers, and work on SQL scripts and Synapse pipelines for data migration to Azure SQL.

Your responsibilities will include building data migration pipelines to the Azure cloud, migrating databases from on-premises SQL Server to the Azure dev environment, and implementing data governance in Azure. You will use the Azure Data Catalog and draw on experience with big data batch processing, interactive processing, and real-time processing solutions. Mandatory certifications are required for this role.

At YASH Technologies, you will have the opportunity to create a career path tailored to your aspirations within an inclusive team environment. Our Hyperlearning workplace is built on principles of flexible work arrangements, free spirit and emotional positivity, agile self-determination, trust, transparency, open collaboration, support for realizing business goals, stable employment, and an ethical corporate culture. Join us to embark on a journey of continuous learning, unlearning, and relearning in a dynamic and evolving technology landscape.
Posted 2 weeks ago
2.0 - 6.0 years
0 Lacs
hyderabad, telangana
On-site
You are a skilled Architect specializing in AIOps & MLOps Operations, responsible for supporting and enhancing the automation, scalability, and reliability of AI/ML operations across the enterprise. Your role involves deploying AI/ML models, ensuring continuous monitoring, and implementing self-healing automation to enhance system performance, minimize downtime, and improve decision-making with real-time AI-driven insights.

Supporting and maintaining AIOps and MLOps programs is a key responsibility, ensuring alignment with business objectives, data governance standards, and enterprise data strategy. You will assist in implementing real-time data observability, monitoring, and automation frameworks to enhance data reliability, quality, and operational efficiency. You will also contribute to the development of governance models and execution roadmaps, driving efficiency across data platforms such as Azure, AWS, GCP, and on-prem environments, and ensuring seamless integration of CI/CD pipelines, data pipeline automation, and self-healing capabilities across the enterprise.

Collaboration with cross-functional teams to support the development and enhancement of next-generation Data & Analytics (D&A) platforms will be part of your responsibilities, as will managing the people, processes, and technology involved in sustaining Data & Analytics platforms, driving operational excellence and continuous improvement. You will support Data & Analytics technology transformations by ensuring proactive issue identification and the automation of self-healing capabilities across the PepsiCo Data Estate.

You will implement AIOps strategies for automating IT operations using Azure Monitor, Azure Log Analytics, and AI-driven alerting, and deploy Azure-based observability solutions to enhance real-time system performance monitoring and enable AI-driven anomaly detection and root cause analysis. Contributing to self-healing and auto-remediation mechanisms using Azure Logic Apps, Azure Functions, and Power Automate will be part of your responsibilities, along with supporting ML lifecycle automation using Azure ML, Azure DevOps, and Azure Pipelines for CI/CD of ML models. You will assist in deploying scalable ML models with Azure Kubernetes Service, Azure Machine Learning Compute, and Azure Container Instances while automating feature engineering, model versioning, and drift detection.

Close collaboration with various teams to align AIOps/MLOps strategies with enterprise IT goals is an important aspect of the role. You will work closely with business stakeholders and IT leadership to implement AI-driven insights and automation that enhance operational decision-making. Tracking and reporting AI/ML operational KPIs and ensuring adherence to Azure Information Protection and data security policies will also be part of your responsibilities.

In summary, your role as an Architect - AIOps & MLOps Operations will involve supporting, enhancing, and automating AI/ML operations across the enterprise, ensuring operational excellence and continuous improvement.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
hyderabad, telangana
On-site
As an Engineer, IT Data at American Airlines, you will be part of a diverse, high-performing team dedicated to technical excellence. Your primary focus will be on delivering unrivaled digital products that drive a more reliable and profitable airline. The Data Domain you will work in encompasses managing and leveraging data as a strategic asset, including data management, storage, integration, and governance, along with Machine Learning, AI, Data Science, and Business Intelligence.

In this role, you will collaborate closely with source data application teams and product owners to design, implement, and support analytics solutions that provide insights for better decisions. You will implement data migration and data engineering solutions using Azure products and services such as Azure Data Lake Storage, Azure Data Factory, Azure Functions, Event Hub, Azure Stream Analytics, and Azure Databricks, as well as traditional data warehouse tools. Your tasks will span multiple aspects of the development lifecycle, including design, cloud engineering (infrastructure, network, security, and administration), ingestion, preparation, data modeling, testing, CI/CD pipelines, performance tuning, deployments, consumption, BI, alerting, and production support.

You will provide technical leadership within a team environment and also work independently. As part of a DevOps team, you will fully own and support the product, implementing batch and streaming data pipelines using cloud technologies (a streaming sketch follows below). Your responsibilities will also include leading the development of coding standards, best practices, and privacy and security guidelines, as well as mentoring others on technical and domain skills to create multi-functional teams.

For success in this role, you will need a Bachelor's degree in Computer Science, Computer Engineering, Technology, Information Systems (CIS/MIS), Engineering, or a related technical discipline, or equivalent experience/training. You should have at least 3 years of software solution development experience using agile and DevOps, operating in a product model, as well as 3 years of data analytics experience using SQL. A minimum of 3 years of cloud development and data lake experience, preferably in Microsoft Azure, is also required.

Preferred qualifications include 5+ years of software solution development experience using agile, DevOps, and a product model, and 5+ years of data analytics experience using SQL. Experience in full-stack development, preferably in Azure, and familiarity with Teradata Vantage development and administration are also preferred. Airline industry experience is a plus.

In terms of skills, licenses, and certifications, you should have expertise with the Azure technology stack for data management, ingestion, capture, processing, curation, and creating consumption layers. An Azure Development Track certification and a Spark certification are preferred. Proficiency in several tools/platforms such as Python, Spark, Unix, SQL, Teradata, Cassandra, MongoDB, Oracle, SQL Server, ADLS, and Snowflake is required. Experience with Azure cloud technologies, CI/CD tools, a BI analytics tool stack, and data governance and privacy tools is also beneficial for this role.
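A minimal sketch of the streaming pipeline pattern mentioned above: consuming an Azure Event Hub from Databricks through its Kafka-compatible endpoint with Structured Streaming. The namespace, hub name, connection string, and paths are placeholders, and in practice the secret would come from a vault rather than a literal.

```python
# Hypothetical sketch: reading an Azure Event Hub via its Kafka-compatible
# endpoint with Spark Structured Streaming. All names/secrets are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

BOOTSTRAP = "mynamespace.servicebus.windows.net:9093"   # placeholder
CONN_STR = "Endpoint=sb://..."                          # placeholder secret

stream = (spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", BOOTSTRAP)
    .option("subscribe", "telemetry")                   # event hub name
    .option("kafka.security.protocol", "SASL_SSL")
    .option("kafka.sasl.mechanism", "PLAIN")
    .option("kafka.sasl.jaas.config",
            'org.apache.kafka.common.security.plain.PlainLoginModule required '
            f'username="$ConnectionString" password="{CONN_STR}";')
    .load())

# Persist raw events to a Delta table, with checkpointing for fault tolerance.
query = (stream.selectExpr("CAST(value AS STRING) AS body",
                           "timestamp AS event_ts")
    .writeStream
    .format("delta")
    .option("checkpointLocation", "/chk/telemetry")     # placeholder path
    .toTable("bronze.telemetry"))
```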
Posted 2 weeks ago
1.0 - 5.0 years
0 Lacs
karnataka
On-site
At PwC, our audit and assurance team focuses on providing independent and objective assessments of financial statements, internal controls, and other assurable information to enhance credibility and reliability with stakeholders. We evaluate compliance with regulations and assess governance, risk management processes, and related controls. Those in data, analytics, and technology solutions assist clients in developing solutions that build trust, drive improvement, and detect, monitor, and predict risk. Your work involves using advanced analytics, data wrangling technology, and automation tools to leverage data and establish processes that let clients make efficient decisions based on accurate information.

In a fast-paced environment, you are expected to adapt to working with various clients and team members, each presenting unique challenges. Every experience is an opportunity to learn and grow, taking ownership to consistently deliver quality work that drives value for clients and success as a team. By navigating through the Firm, you build a brand for yourself, opening doors to more opportunities.

To lead and deliver value at this level, you need skills, knowledge, and experience such as applying a learning mindset, appreciating diverse perspectives, sustaining high performance, active listening, seeking feedback, analyzing facts, and developing commercial awareness. You are also expected to learn and apply professional and technical standards and uphold the Firm's code of conduct and independence requirements.

PricewaterhouseCoopers Acceleration Centre (Kolkata) Private Limited is a joint venture in India among PwC Network members, leveraging scale and capabilities. As a Revenue Automation Associate, you will play a critical role in supporting clients by ensuring compliance with accounting standards, implementing revenue recognition systems, optimizing processes, and driving collaboration to achieve business objectives. Working as part of a team of problem solvers, you will help clients solve complex business issues.

Preferred skills include good knowledge of revenue recognition principles and accounting standards, an understanding of business processes related to revenue recognition, strong analytical and communication skills, and experience with data management and analytics. Proficiency in MS-SQL, ACL, Microsoft Excel, and PowerPoint, along with experience with revenue management systems, Alteryx, SQL, and Microsoft Visio, is essential.

Education requirements include a Bachelor's degree in Accounting and Information Systems or a related field, with 1+ years of experience in relevant roles focusing on revenue recognition, preferably in a public accounting firm or a large corporation. Additional certifications such as CPA are beneficial.
Posted 2 weeks ago
6.0 - 10.0 years
0 Lacs
hyderabad, telangana
On-site
YASH Technologies is a leading technology integrator specializing in helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation. At YASH, you will be part of a team of highly skilled professionals working with cutting-edge technologies. Our mission is centered around making real positive changes in an increasingly virtual world, transcending generational gaps and future disruptions.

We are currently seeking Azure Databricks professionals with 6-8 years of experience. The ideal candidate should possess hands-on experience with Azure services and Databricks, along with a strong understanding of medallion architecture. Proficiency in Python and PySpark is required for this role.

Working at YASH means having the opportunity to build a career that aligns with your goals, within a collaborative and inclusive team environment. We prioritize career-oriented skilling models and utilize technology for continuous learning, unlearning, and relearning at a rapid pace and scale. Our workplace culture is built upon the following principles:
- Flexible work arrangements, free spirit, and emotional positivity
- Agile self-determination, trust, transparency, and open collaboration
- Comprehensive support for achieving business goals
- Stable employment with a positive atmosphere and ethical corporate culture

Join YASH Technologies to be part of a dynamic team driving innovation and transformation in the technology industry.
Posted 2 weeks ago
7.0 - 11.0 years
0 Lacs
coimbatore, tamil nadu
On-site
As a Data Engineer specializing in supply chain applications at NovintiX in Coimbatore, India, you will play a crucial role in enhancing our Supply Chain Analytics team. Your primary focus will be on developing intelligent data solutions that drive real-world logistics, procurement, and demand planning.

Your responsibilities will include:
- Creating and optimizing scalable data pipelines for inventory, shipping, and procurement data
- Integrating data from ERP, PLM, and external sources through the development of APIs
- Designing, building, and maintaining enterprise-grade data warehouses and data lakes while ensuring data quality, integrity, and security
- Collaborating with stakeholders to develop reporting dashboards using tools like Power BI, Tableau, or QlikSense
- Supporting supply chain decision-making with data-driven insights
- Constructing data models and algorithms for demand forecasting and logistics optimization, utilizing ML libraries and concepts
- Coordinating with supply chain, logistics, and IT teams to translate technical solutions into understandable business language
- Implementing robust data governance frameworks and ensuring compliance and audit readiness

To qualify for this role, you should have:
- 7+ years of experience in data engineering
- A Bachelor's degree in Computer Science, IT, or a related field
- Proficiency in Python, Java, SQL, Spark SQL, Hadoop, PySpark, NoSQL, Power BI, Tableau, QlikSense, Azure Data Factory, Azure Databricks, and AWS
- Strong collaboration and communication skills
- Exposure to fast-paced, agile environments

If you are passionate about leveraging data to drive supply chain efficiencies and meet business objectives, we encourage you to apply for this full-time position. Please send your resume to shanmathi.saravanan@novintix.com before the application deadline on 13/07/2025. Please note that the ability to commute or relocate to Coimbatore, Tamil Nadu, is preferred for this role, as it requires in-person work.
Posted 2 weeks ago