4.0 - 6.0 years
18 - 22 Lacs
Pune
Work from Office
We are looking for a GenAI/ML Engineer to design, develop, and deploy cutting-edge AI/ML models and Generative AI applications. This role involves working on large-scale enterprise use cases, implementing Large Language Models (LLMs), building Agentic AI systems, and developing data ingestion pipelines. The ideal candidate should have hands-on experience with AI/ML development and Generative AI applications, and a strong foundation in deep learning, NLP, and MLOps practices.

Key Responsibilities
- Design, develop, and deploy AI/ML models and Generative AI applications for various enterprise use cases.
- Implement and integrate Large Language Models (LLMs) using frameworks such as LangChain, LlamaIndex, and RAG pipelines.
- Develop Agentic AI systems capable of multi-step reasoning and autonomous decision-making.
- Create secure and scalable data ingestion pipelines for structured and unstructured data, enabling indexing, vector search, and advanced retrieval techniques.
- Collaborate with cross-functional teams (Data Engineers, Product Managers, Architects) to deploy AI solutions and enhance the AI stack.
- Build CI/CD pipelines for ML/GenAI workflows and support end-to-end MLOps practices.
- Leverage Azure and Databricks for training, serving, and monitoring AI models at scale.

Required Qualifications & Skills (Mandatory)
- 4+ years of hands-on experience in AI/ML development, including Generative AI applications.
- Expertise in RAG, LLMs, and Agentic AI implementations.
- Strong experience with LangChain, LlamaIndex, or similar LLM orchestration frameworks.
- Proficiency in Python and key ML/DL libraries: TensorFlow, PyTorch, Scikit-learn.
- Solid foundation in deep learning, Natural Language Processing (NLP), and Transformer-based architectures.
- Experience in building data ingestion, indexing, and retrieval pipelines for real-world enterprise use cases.
- Hands-on experience with Azure cloud services and Databricks.
- Proven track record in designing CI/CD pipelines and using MLOps tools such as MLflow, DVC, or Kubeflow.

Soft Skills
- Strong problem-solving and critical-thinking ability.
- Excellent communication skills, with the ability to explain complex AI concepts to non-technical stakeholders.
- Ability to collaborate effectively in agile, cross-functional teams.
- A growth mindset, eager to explore and learn emerging technologies.

Preferred Qualifications
- Familiarity with vector databases such as FAISS, Pinecone, or Weaviate.
- Experience with AutoGPT, CrewAI, or similar agent frameworks.
- Exposure to Azure OpenAI, Cognitive Search, or Databricks ML tools.
- Understanding of AI security, responsible AI, and model governance.

Role Dimensions
- Design and implement innovative GenAI applications to address complex business problems.
- Work on large-scale, complex AI solutions in collaboration with cross-functional teams.
- Take ownership of the end-to-end AI pipeline, from model development to deployment and monitoring.

Success Measures (KPIs)
- Successful deployment of AI and Generative AI applications.
- Optimization of data pipelines and model performance at scale.
- Contribution to the successful adoption of AI-driven solutions within enterprise use cases.
- Effective collaboration with cross-functional teams, ensuring smooth deployment of AI workflows.

Competency Alignment
- AI/ML Development: expertise in building and deploying scalable and efficient AI models.
- Generative AI: strong hands-on experience in Generative AI, LLMs, and RAG frameworks.
- MLOps: proficiency in designing and maintaining CI/CD pipelines and implementing MLOps practices.
- Cloud Platforms: experience with Azure and Databricks for AI model training and serving.
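The RAG pipelines this role centers on all share one core step: embed documents, embed the query, and return the closest matches as context for the LLM. As a toy illustration (not part of the posting; hand-rolled bag-of-words vectors stand in for a real embedding model and a vector database such as FAISS), that retrieval step can be sketched in plain Python:

```python
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': a bag-of-words term-frequency vector.
    A real RAG pipeline would call an embedding model instead."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Databricks supports training and serving ML models",
    "Kafka streams events between services",
    "LangChain orchestrates LLM calls and retrieval",
]
print(retrieve("serving ML models at scale", docs, k=1))
# ['Databricks supports training and serving ML models']
```

Frameworks like LangChain or LlamaIndex wrap exactly this loop with real embeddings, vector stores, and prompt assembly around the retrieved context.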
Posted 1 week ago
3.0 - 8.0 years
5 - 10 Lacs
Pune
Work from Office
Roles and Responsibility
- Design, develop, and implement scalable Kafka infrastructure solutions.
- Collaborate with cross-functional teams to identify and prioritize project requirements.
- Develop and maintain technical documentation for Kafka infrastructure projects.
- Troubleshoot and resolve complex issues related to Kafka infrastructure.
- Ensure compliance with industry standards and best practices for Kafka infrastructure.
- Participate in code reviews and contribute to the improvement of overall code quality.

Job Requirements
- Strong understanding of Kafka architecture and design principles.
- Experience with Kafka tools such as Streams, KSQL, and SCADA.
- Proficiency in programming languages such as Java, Python, or Scala.
- Excellent problem-solving skills and attention to detail.
- Ability to work collaboratively in a team environment.
- Strong communication and interpersonal skills.
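For a sense of what "Kafka infrastructure" work looks like day to day, a baseline consumer configuration is a typical artifact. The fragment below is purely illustrative (broker addresses and group name are invented, not from the posting); the keys are standard Kafka consumer properties:

```properties
# Illustrative Kafka consumer configuration (hostnames and group.id are assumptions)
bootstrap.servers=broker1:9092,broker2:9092
group.id=infra-monitoring
key.deserializer=org.apache.kafka.common.serialization.StringDeserializer
value.deserializer=org.apache.kafka.common.serialization.StringDeserializer
# Commit offsets manually after processing, to avoid losing records on failure
enable.auto.commit=false
# Start from the earliest retained offset when no committed offset exists
auto.offset.reset=earliest
```

Choices like manual offset commits and `auto.offset.reset` are exactly the kind of design decisions the troubleshooting and best-practices bullets above refer to.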
Posted 1 week ago
3.0 - 6.0 years
8 - 13 Lacs
Bengaluru
Work from Office
Share resume to: sowmya.v@acesoftlabs.com

Hiring Now: Azure Data Engineer (3-6 Years) - Immediate Opportunity!
Location: Bangalore
Experience: 3-6 years

Key Responsibilities:
- Design and build scalable data solutions using Azure Data Factory, Azure Databricks, and Azure Synapse.
- Develop ETL/ELT pipelines for structured and unstructured data from various sources and formats.
- Write efficient SQL queries, stored procedures, functions, and dynamic SQL for data transformation and analysis.
- Implement data lake architecture and optimize data ingestion using Apache Spark with Scala or Python.
- Handle and maintain Delta tables within the Databricks environment.
- Collaborate with the DevOps team to support CI/CD pipelines using Azure DevOps.
- Ensure data quality, performance, and availability across enterprise platforms.

Skills & Qualifications:
- Strong hands-on experience in: Azure Data Lake, Azure Data Factory, Azure Synapse Analytics, Azure Databricks, Apache Spark (mandatory), Python
- SQL: joins, stored procedures, dynamic queries, etc.
- Ingesting and processing data from diverse sources (e.g., JSON, CSV, Parquet)
- Good understanding of the Azure ecosystem and services
- Knowledge of Azure DevOps, pipelines, and build/release processes (good to have)
- Exposure to Snowflake is an added advantage
- Excellent communication skills and strong learning aptitude
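Ingesting "diverse sources" as described above usually means normalizing each format into a common record shape before loading. A minimal stand-in sketch in plain Python (stdlib only, with invented sample data; a real pipeline would run in Spark or ADF, and Parquet would additionally need a library such as pyarrow):

```python
import csv
import io
import json

def from_json(text):
    """Parse a JSON array of objects into a list of dicts."""
    return json.loads(text)

def from_csv(text):
    """Parse CSV text (with a header row) into a list of dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def normalize(records, columns):
    """Project every record onto a fixed column set, filling gaps with None."""
    return [{c: r.get(c) for c in columns} for r in records]

json_src = '[{"id": "1", "city": "Pune"}]'
csv_src = "id,city\n2,Bengaluru\n"
rows = normalize(from_json(json_src) + from_csv(csv_src), ["id", "city"])
print(rows)  # [{'id': '1', 'city': 'Pune'}, {'id': '2', 'city': 'Bengaluru'}]
```

The fixed-column projection is the same idea as enforcing a schema on write into a Delta table: downstream SQL can rely on one shape regardless of the source format.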
Posted 1 week ago
8.0 - 13.0 years
15 - 25 Lacs
Bengaluru
Hybrid
Company Description
Krish is committed to enabling customers to achieve their technological goals by delivering solutions that combine the right technology, people, and costs. Our approach emphasizes building long-term relationships while ensuring customer success through tailored solutions, leveraging the expertise and integrity of our consultants and robust delivery processes.

Position Summary:
We are seeking a highly skilled and experienced Senior Power BI Developer to join our Data & Analytics team. The ideal candidate will be responsible for designing, developing, and deploying interactive, visually appealing, and user-friendly business intelligence reports and dashboards using Power BI. This role involves close collaboration with business stakeholders, data engineers, and other technical teams to transform complex data into actionable insights that drive business decision-making.

Key Responsibilities:
1. Power BI Development:
- Design, develop, and deploy Power BI dashboards and reports that meet business requirements.
- Implement row-level security (RLS), bookmarks, drill-through, and advanced visualization features in Power BI.
- Optimize Power BI reports for performance, responsiveness, and usability.
- Provide training, documentation, and support to end-users and team members on Power BI functionalities and best practices.
2. Data Modelling and ETL:
- Develop data models using Power BI Desktop, including relationships, DAX measures, and calculated columns.
- Work with data engineers to design and integrate data pipelines that feed into Power BI from various sources (SQL, Azure, Excel, etc.).
- Ensure data accuracy, integrity, and quality in reports.
- Optimize data models for performance, scalability, and maintainability, considering best practices in schema design (e.g., star schema, relationships, cardinality).
3. Requirements Gathering & Stakeholder Management:
- Collaborate with business users to gather requirements and translate them into effective Power BI solutions.
- Provide training and support to business users on Power BI usage and best practices.
- Communicate project status, risks, and issues to management and stakeholders.
4. Advanced Analytics and Visualization:
- Implement advanced DAX calculations and measures to support complex business logic.
- Develop custom visuals and integrate R/Python scripts in Power BI for enhanced analytics if needed.
- Create interactive dashboards and reports that drive actionable business insights.
5. Governance and Best Practices:
- Establish and enforce Power BI development standards, data governance, and best practices.
- Document dashboards, data models, and processes for maintainability and knowledge sharing.
- Stay updated with the latest Power BI features, releases, and trends to continuously improve solutions.

Required Qualifications & Skills:
Education:
- Bachelor's degree in Computer Science, Information Systems, Data Analytics, or a related field. Master's degree preferred.
Experience:
- Minimum 5 years of experience in BI development, with at least 3 years of hands-on experience in Power BI.
- Proven track record of delivering complex BI solutions in a fast-paced environment.
Technical Skills:
- Strong proficiency in Power BI Desktop and Power BI Service.
- Deep understanding of DAX, Power Query (M), and data modelling principles.
- Strong understanding of data modelling concepts, relationships, and best practices (e.g., star schema, normalization, cardinality).
- Solid experience in SQL and relational database systems (e.g., MS SQL Server including SSIS and SSRS, Azure SQL, Oracle).
- Knowledge of integrating Power BI with different data sources, including Azure Data Lake, data warehouses, Excel, and APIs.
- Familiarity with Git or other version control systems is a plus.
- Knowledge of Azure (Data Factory, Synapse, Databricks) is a plus.
Soft Skills:
- Excellent communication, presentation, and interpersonal skills.
- Strong analytical and problem-solving abilities.
- Ability to work independently and collaboratively in cross-functional teams.
- Attention to detail and a passion for delivering high-quality BI solutions.

Nice to Have:
- Experience with Power Platform (Power Apps, Power Automate).
- Knowledge of data warehousing concepts and star/snowflake schema modelling.
- Certification in Power BI or Microsoft Azure Data certifications.
- Experience with data warehousing concepts and tools.
- Familiarity with programming languages such as Python or R for data manipulation.
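The star-schema modeling this role calls for keeps one fact table keyed to small dimension tables, with measures computed over the relationship between them. As a language-neutral sketch of the idea (toy data invented for illustration; in Power BI this would be a DAX `SUM` measure sliced by a modeled dimension attribute, not Python):

```python
# Toy star schema: a sales fact table keyed to a region dimension table.
fact_sales = [
    {"region_id": 1, "amount": 120.0},
    {"region_id": 1, "amount": 80.0},
    {"region_id": 2, "amount": 50.0},
]
dim_region = {1: "West", 2: "East"}

def total_sales_by_region(fact, dim):
    """Equivalent in spirit to a SUM measure grouped by a dimension attribute:
    follow the key from each fact row to its dimension row, then aggregate."""
    totals = {}
    for row in fact:
        name = dim[row["region_id"]]
        totals[name] = totals.get(name, 0.0) + row["amount"]
    return totals

print(total_sales_by_region(fact_sales, dim_region))
# {'West': 200.0, 'East': 50.0}
```

Keeping descriptive attributes in the dimension and only keys and numbers in the fact is what makes such models fast and maintainable, which is why the posting stresses star schema and cardinality.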
Posted 1 week ago
5.0 - 10.0 years
15 - 22 Lacs
Pune
Remote
Role & responsibilities:
Looking for a skilled Data Engineer with expertise in Python and Azure Databricks for building scalable data pipelines. Must have strong SQL skills for designing, querying, and optimizing relational databases. Responsible for data ingestion, transformation, and orchestration across cloud platforms. Experience with coding best practices and performance tuning is required.
Posted 1 week ago
4.0 - 5.0 years
2 - 3 Lacs
Noida, Kolkata, Bengaluru
Work from Office
Azure Data Engineer: PySpark, Python, Azure Data Factory, Azure Databricks, SQL
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Bengaluru
Work from Office
Data Engineer (3 to 6 yrs)
1. Excellent programming skills in Python with object-oriented design
2. Strong programming skills with PySpark and Spark SQL
3. Hands-on experience working with relational databases such as PostgreSQL
4. Hands-on experience developing solutions on big data clusters such as MapR, or on cloud platforms such as Azure
5. Experience working with Azure Databricks and Azure Data Factory is an added advantage
6. Deployment experience with Docker and Kubernetes is an added advantage
7. Excellent logical, analytical, and problem-solving skills
8. Strong communication skills
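Points 1 and 3 above pair object-oriented Python with relational-database work. A minimal hedged sketch of that combination, using the stdlib sqlite3 module as a stand-in for PostgreSQL (the table and class names are invented for illustration):

```python
import sqlite3

class EventStore:
    """Small object-oriented wrapper around one relational table."""

    def __init__(self, conn):
        self.conn = conn
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS events (id INTEGER PRIMARY KEY, kind TEXT)"
        )

    def add(self, kind):
        # Parameterized query: never interpolate values into SQL strings
        self.conn.execute("INSERT INTO events (kind) VALUES (?)", (kind,))

    def count(self, kind):
        row = self.conn.execute(
            "SELECT COUNT(*) FROM events WHERE kind = ?", (kind,)
        ).fetchone()
        return row[0]

store = EventStore(sqlite3.connect(":memory:"))
store.add("ingest")
store.add("ingest")
store.add("transform")
print(store.count("ingest"))  # 2
```

With PostgreSQL the shape is the same; only the driver (e.g., psycopg) and its `%s` parameter placeholder differ.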
Posted 1 week ago
8.0 - 10.0 years
20 - 30 Lacs
Gurugram
Work from Office
Seeking an experienced Azure DBA with expertise in Azure SQL, Data Factory, and Databricks to manage and optimize databases, design pipelines, and support cloud-native compliance platforms.

Required candidate profile: a DBA with expertise in Microsoft Azure Data Factory and Databricks to manage, optimize, and scale our data infrastructure.
Posted 1 week ago
7.0 - 10.0 years
0 - 0 Lacs
Bengaluru
Work from Office
Job Title: Senior Data Engineer
Location: Bangalore, India
Experience Required: 7 to 10 years
Open Positions: 3
Notice Period: Immediate joiners only

Job Description:
We are looking for a Senior Data Engineer with strong hands-on experience in AWS and working knowledge of Azure. The ideal candidate should have deep expertise in building scalable data pipelines, working with cloud platforms, and handling large datasets.

Must-Have Skills:
- Cloud platforms: strong experience in AWS and working knowledge of Azure
- Data engineering: data warehousing and any ETL tool
- Programming languages: Python, PySpark, SQL
- Databases: Snowflake or Databricks, and any RDBMS
- Scripting/OS: Unix shell scripting

Good to Have:
- Kafka
- MongoDB
- Cloudera Hadoop

Interview Process:
- Round 1: Virtual
- Round 2: Face-to-face (mandatory)

Important Notes:
Work location: Bangalore (on-site). Apply only if you:
- Are currently in Bangalore
- Can attend a face-to-face interview
- Have 7-10 years of experience
- Have strong AWS experience and Azure knowledge
- Can join immediately

Send profiles to narasimha@nam-it.com

Thanks & regards,
Narasimha.B
Staffing Executive
NAM Info Pvt Ltd, 29/2B-01, 1st Floor, K.R. Road, Banashankari 2nd Stage, Bangalore - 560070.
+91 9182480146 (India)
Email - narasimha@nam-it.com
Posted 1 week ago
7.0 - 12.0 years
20 - 35 Lacs
Bengaluru
Work from Office
Greetings from Mastek! We have an exciting opportunity for you. We are seeking a Senior Data Engineer with solid experience in Snowflake, Microsoft Azure Data Factory, and Kafka. This is a hybrid role and the job location is Bangalore.

Experience: 7+ years
Job Location: Bangalore
Mode: Hybrid

Qualifications
- BS in Computer Science or a related field
- 5+ years of data engineering experience
- Good understanding of modern data platforms, including data lakes and data warehouses, with good knowledge of the underlying architecture, preferably in Snowflake.
- Proven experience in assembling large, complex sets of data that meet non-functional and functional business requirements.
- Experience in identifying, designing, and implementing integration, modelling, and orchestration of complex Finance data, while also looking for process improvements, optimizing data delivery, and automating manual processes.
- Working experience with scripting, data science, and analytics (SQL, Python, PowerShell, JavaScript).
- Working experience with performance tuning and optimization, bottleneck analysis, and technical troubleshooting in a sometimes ambiguous environment.
- Working experience with cloud-based systems (Azure experience preferred).

Desired Qualifications:
- Experience working with cloud-based systems: Azure and Snowflake data warehouses.
- Expertise in designing data table structures, reports, and queries.
- Working knowledge of CI/CD.
- Working knowledge of building data integrity checks as part of application delivery.
- Experience working with Kafka technologies.
- Technical expertise to build code that is performant as well as secure.
- Technical depth and vision to perform POCs and evaluate different technologies.
- Experience with real-time analytics and real-time messaging.
- Working experience with microservices is desirable.
- Design, implement, and monitor best practices for the dev framework.
- Experience working with large-volume data; retail experience strongly desired.
- MS degree in Computer Science or a related technical field.
- Possesses an entrepreneurial spirit and continuously innovates to achieve great results.
- Communicates with honesty and kindness and creates the space for others to do the same.
- Fosters connection by putting people first and building trusting relationships.
- Integrates fun and joy as a way of being and working (aka doesn't take themselves too seriously).

Preferred Tools: Snowflake, Microsoft Azure Data Factory, Kafka, Oracle Exadata, Power BI, SSAS, Oracle Data Integrator

You can apply here.
Posted 1 week ago
5.0 - 8.0 years
8 - 17 Lacs
Hyderabad, Bengaluru
Work from Office
Interested candidates can share your resume to aweaz.pasha@wisseninfotech.com

JD: Azure Data Engineer
- Experience in Azure Data Factory, Databricks, Azure Data Lake, and Azure SQL Server.
- Developed ETL/ELT processes using SSIS and/or Azure Data Factory.
- Build complex pipelines and dataflows using Azure Data Factory.
- Design and implement data pipelines in Azure Data Factory (ADF).
- Improve the functionality and performance of existing data pipelines.
- Performance-tune processes dealing with very large data sets.
- Configuration and deployment of ADF packages.
- Proficient in the usage of ARM templates, Key Vault, and integration runtimes.
- Adaptable to work with ETL frameworks and standards.
- Strong analytical and troubleshooting skills to root-cause issues and find solutions.
- Propose innovative, feasible, and best-fit solutions for business requirements.
- Knowledge of Azure technologies/services such as Blob Storage, ADLS, Logic Apps, Azure SQL, and WebJobs.
- Experienced with ServiceNow, incident management, and JIRA.
- Should have exposure to agile methodology.
- Expert in understanding and building Power BI reports using the latest methodologies.
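For context on what "building pipelines in ADF" means concretely: pipelines are authored as JSON documents of activities. The fragment below is a heavily simplified, illustrative sketch of a single Copy activity (pipeline, activity, and dataset names are all invented; real pipelines carry many more properties):

```json
{
  "name": "CopySalesData",
  "properties": {
    "activities": [
      {
        "name": "CopyFromBlobToSql",
        "type": "Copy",
        "inputs": [
          { "referenceName": "BlobSalesDataset", "type": "DatasetReference" }
        ],
        "outputs": [
          { "referenceName": "SqlSalesDataset", "type": "DatasetReference" }
        ],
        "typeProperties": {
          "source": { "type": "DelimitedTextSource" },
          "sink": { "type": "AzureSqlSink" }
        }
      }
    ]
  }
}
```

Deploying such definitions through ARM templates, with secrets resolved from Key Vault and execution dispatched by an integration runtime, is exactly the configuration-and-deployment work the bullets above describe.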
Posted 1 week ago
1.0 - 5.0 years
0 Lacs
Hyderabad, Telangana
On-site
As a member of the Providence team, you will be part of one of the largest not-for-profit healthcare systems in the US, dedicated to providing high-quality and compassionate healthcare to all individuals. Our vision, "Health for a better world," drives us to ensure that health is a fundamental human right. With a network of 51 hospitals, 1,000+ care clinics, senior services, supportive housing, and various health and educational services, we aim to offer affordable quality care and services to everyone.

Providence India is spearheading a transformative shift in the healthcare ecosystem towards Health 2.0. With a focus on healthcare technology and innovation, our India center will play a crucial role in driving the digital transformation of health systems to enhance patient outcomes, caregiver efficiency, and the overall business operations of Providence on a large scale.

Joining our Technology Engineering and Ops (TEO) team, you will contribute to the foundational infrastructure that enables our caregivers, patients, physicians, and community technology partners to fulfill our mission. Your role will involve working on Azure services such as Azure Data Factory, Azure Databricks, and Log Analytics to integrate structured and unstructured data from multiple systems, develop reporting solutions for TEO platforms, build visualizations with enterprise-level datasets, and collaborate with data engineers and service line owners to manage report curation and data modeling.

In addition, you will contribute to the creation of a centralized enterprise warehouse for all infrastructure and networking data, implement API governance and standards for Power BI reporting, and build SharePoint intake forms with Power Automate and bi-directional ADO integration. You will work closely with service teams to determine project engagement and priority, create scripts to support enhancements, collaborate with external vendors on API integrations, and ensure data analysis and integrity with the PBI DWH and Reporting teams.

To excel in this role, we are looking for individuals with a minimum of 2+ years of experience in BI development and data engineering, along with 1+ years of experience in Azure cloud technologies. Proficiency in Power BI, Azure Data Factory, Azure Databricks, Synapse, and complex SQL, cloud-native deployment with CI/CD pipelines, troubleshooting skills, effective communication, and a Bachelor's degree in Computer Science, Business Management, or IS are essential. If you thrive in an Agile environment, possess excellent interpersonal skills, and are ready to contribute to the future of healthcare, we encourage you to apply and be part of our mission towards "Health for a better world."
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Cloud Architect with expertise in Azure and Snowflake, you will be responsible for designing and implementing secure, scalable, and highly available cloud-based solutions on AWS and Azure. Your role will involve applying your experience in Azure Databricks, ADF, Azure Synapse, PySpark, and Snowflake services, and you will participate in pre-sales activities, including RFP and proposal writing.

Your experience with integrating various data sources with data warehouses and data lakes will be crucial for this role. You will also be expected to create data warehouses and data lakes for reporting, AI, and machine learning purposes, while having a solid understanding of data modelling and data architecture concepts. Collaborating with clients to understand their business requirements and translating them into technical solutions that leverage Snowflake and the Azure cloud platform will be a key aspect of your responsibilities.

Furthermore, you will be required to clearly articulate the advantages and disadvantages of different technologies and platforms, and to participate in proposal and capability presentations. Defining and implementing cloud governance and best practices, identifying and implementing automation opportunities for increased operational efficiency, and conducting knowledge-sharing and training sessions to educate clients and internal teams on cloud technologies are additional duties associated with this role. Your expertise will play a vital role in ensuring the success of cloud projects and the satisfaction of clients.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Indore, Madhya Pradesh
On-site
As a Lead Data Engineer at our company in Indore, Madhya Pradesh, you will be an integral part of our dynamic team, bringing extensive experience and expertise in PySpark, Azure Databricks, Azure Data Factory, SQL, and Power BI. Your primary responsibility will be designing, constructing, and managing scalable data solutions that are crucial for our projects, encompassing data architecture, pipeline development, and data governance across different initiatives.

Your key responsibilities will include leading the creation and enhancement of robust data pipelines using Azure Data Factory and PySpark, building scalable data solutions and workflows within Azure Databricks, writing efficient SQL queries for data manipulation, and developing and maintaining dashboards and reports using Power BI. You will also be tasked with upholding best practices in data security, governance, and compliance, collaborating with diverse teams, mentoring junior engineers, and monitoring data pipelines for optimal performance.

To excel in this role, you must have a minimum of 5 years of hands-on experience in PySpark, Azure Data Factory, Azure Databricks, and SQL, along with at least 2 years of experience in Power BI. You should also have a solid understanding of ETL/ELT processes and modern data architecture, and expertise in handling large-scale structured and semi-structured datasets. Exposure to Microsoft Fabric for integrated analytics and automation, knowledge of data security, governance, and privacy frameworks, and familiarity with DevOps practices in data engineering environments are considered advantageous.

Joining our team will give you the opportunity to work on impactful projects with cutting-edge technologies in a fast-paced, collaborative, and learning-driven environment. You can look forward to a competitive salary and a growth path within our tech-driven organization.
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
Kochi, Kerala
On-site
You should have 8-12 years of experience in a Data Engineer role, with at least 3 years as an Azure data engineer. A bachelor's degree in Computer Science, Information Technology, Engineering, or a related field is required. You must be proficient in Python, SQL, and have a deep understanding of PySpark. Additionally, expertise in Databricks or similar big data solutions is necessary. Strong knowledge of ETL/ELT frameworks, data structures, and software architecture is expected. You should have proven experience in designing and deploying high-performance data processing systems and extensive experience with Azure cloud data platforms. As a Data Engineer, your responsibilities will include designing, constructing, installing, testing, and maintaining highly scalable and robust data management systems. You will apply data warehousing concepts to design and implement data warehouse tables in line with business requirements. Building complex ETL/ELT processes for large-scale data migration and transformation across platforms and Enterprise systems such as Oracle ERP, ERP Fusion, and Salesforce is essential. You must have the ability and expertise to extract data from various sources like APIs, JSONs, and Databases. Utilizing PySpark and Databricks within the Azure ecosystem to manipulate large datasets, improve performance, and enhance scalability of data operations will be a part of your role. Developing and implementing Azure-based data architectures consistent across multiple projects while adhering to best practices and standards is required. Leading initiatives for data integrity and normalization within Azure data storage and processing environments is expected. You will evaluate and optimize Azure-based database systems for performance efficiency, reusability, reliability, and scalability. Additionally, troubleshooting complex data-related issues within Azure and providing expert guidance and support to the team is necessary. 
Ensuring that all data processes adhere to governance, data security, and privacy regulations is also a critical part of the role.
Posted 1 week ago
1.0 - 5.0 years
0 Lacs
Bihar
On-site
Microsoft Silicon, Cloud Hardware, and Infrastructure Engineering (SCHIE) is the team responsible for powering Microsoft's expanding cloud infrastructure and driving the Intelligent Cloud mission. SCHIE plays a crucial role in delivering core infrastructure and foundational technologies for over 200 online businesses globally, including Bing, MSN, Office 365, Xbox Live, Teams, OneDrive, and the Microsoft Azure platform. As part of our team, you will contribute to server and data center infrastructure, security and compliance, operations, globalization, and manageability solutions. We are committed to smart growth, high efficiency, and providing a trusted experience to our customers and partners worldwide. We are seeking passionate and high-energy engineers to join us on this mission.

We are currently looking for a motivated software engineer with a strong interest in cloud-scale distributed systems to build and maintain cloud services and software stacks. The primary focus will be on monitoring and managing cloud hardware and ensuring the health, performance, and availability of the cloud infrastructure. This role offers the opportunity to be part of a dynamic team at the forefront of innovation within Microsoft, contributing to the development of cutting-edge hardware solutions that power Azure and enhance our cloud infrastructure.

Responsibilities:
- Design, develop, and operate large-scale, low-latency, high-throughput cloud services.
- Monitor, diagnose, and repair service health and performance in production environments.
- Conduct A/B testing and analysis: establish baseline metrics, set incremental targets, and validate against those targets continuously.
- Utilize AI/copilot tooling for development and operational efficiency, driving improvements to meet individual and team-level goals.
- Perform data analysis using various analytical tools and interpret results to provide actionable recommendations.
- Define and measure the impact of requested analytics and reporting features through quantitative measures.
- Collaborate with internal peer teams and external partners to ensure highly available, secure, accurate, and actionable results based on hardware health signals, policies, and predictive analytics.

Qualifications:
- Academic qualifications: B.S. in Computer Science, M.S. in Computer Science, or Ph.D. in Computer Science or Electrical Engineering, with at least 1 year of development experience.
- Proficiency in programming languages such as C# or other object-oriented languages, C, Python, and scripting languages.
- Strong understanding of Computer Science fundamentals, including algorithms, data structures, object-oriented design, multi-threading, and distributed systems.
- Excellent problem-solving and design skills, with a focus on quality, performance, and service excellence.
- Effective communication skills for collaboration and customer/partner correspondence.
- Experience with Azure services and database query languages such as SQL/Kusto is desired but optional.
- Familiarity with AI copilot tooling and basic knowledge of LLM models and RAG is desired but optional.

Join us in shaping the future of cloud infrastructure and be part of an exciting and innovative team at Microsoft SCHIE. #azurehwjobs #CHIE #HHS
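The A/B-testing responsibility above boils down to comparing a treatment's rate against a baseline and asking whether the difference is more than noise. As an illustrative aside (the scenario and numbers are invented, not from the posting), a two-proportion z-test is a standard way to do this, and needs only the stdlib:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z statistic for the difference between two proportions,
    using the pooled estimate for the standard error."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical hardware-health check: baseline passes 480/1000 probes,
# treatment passes 520/1000.
z = two_proportion_z(520, 1000, 480, 1000)
print(round(z, 2))  # 1.79
```

At the usual 5% level the cutoff is |z| > 1.96, so this hypothetical difference would not yet be significant; the "set incremental targets and validate continuously" bullet is about collecting enough data to make such calls reliably.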
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
Maharashtra
On-site
As a Cloud & AI Solution Engineer at Microsoft, you will be part of a dynamic team that is at the forefront of innovation in the realm of databases and analytics. Your role will involve working on cutting-edge projects that leverage the latest technologies to drive meaningful impact for commercial customers. If you are insatiably curious and deeply passionate about tackling complex challenges in the era of AI, this is the perfect opportunity for you. In this role, you will play a pivotal role in helping enterprises unlock the full potential of Microsoft's cloud database and analytics stack. You will collaborate closely with engineering leaders and platform teams to accelerate the Fabric Data Platform, including Azure Databases and Analytics. Your responsibilities will include hands-on engagements such as Proof of Concepts, hackathons, and architecture workshops to guide customers through secure, scalable solution design and accelerate database and analytics migration into their deployment workflows. To excel in this position, you should have at least 10+ years of technical pre-sales or technical consulting experience, or a Bachelor's/Master's Degree in Computer Science or related field with 4+ years of technical pre-sales experience. You should be an expert on Azure Databases (SQL DB, Cosmos DB, PostgreSQL) and Azure Analytics (Fabric, Azure Databricks, Purview), as well as competitors in the data warehouse, data lake, big data, and analytics space. Additionally, you should have experience with cloud and hybrid infrastructure, architecture designs, migrations, and technology management. As a trusted technical advisor, you will guide customers through solution design, influence technical decisions, and help them modernize their data platform to realize the full value of Microsoft's platform. 
You will drive technical sales, lead hands-on engagements, build trusted relationships with platform leads, and maintain deep expertise in Microsoft's Analytics Portfolio and Azure Databases. By joining our team, you will have the opportunity to accelerate your career growth, develop deep business acumen, and hone your technical skills. You will be part of a collaborative and creative team that thrives on continuous learning and flexible work opportunities. If you are ready to take on this exciting challenge and be part of a team that is shaping the future of cloud Database & Analytics, we invite you to apply and join us on this journey.
Posted 1 week ago
1.0 - 5.0 years
0 Lacs
Hyderabad, Telangana
On-site
As an integral part of American Airlines Tech Hub in Hyderabad, India, you will have the opportunity to contribute to the innovative and tech-driven environment that shapes the future of travel. Your role will involve collaborating with source data application teams and product owners to develop and support analytics solutions that provide valuable insights for informed decision-making. By leveraging Azure products and services such as Azure Data Lake Storage, Azure Data Factory, and Azure Databricks, you will be responsible for implementing data migration and engineering solutions to enhance the airline's digital capabilities.

Your responsibilities will encompass various aspects of the development lifecycle, including design, cloud engineering, data modeling, testing, performance tuning, and deployment. Working within a DevOps team, you will have the chance to take ownership of your product and contribute to the development of batch and streaming data pipelines using cloud technologies. Adherence to coding standards, best practices, and security guidelines will be crucial as you collaborate with a multidisciplinary team to deliver technical solutions effectively.

To excel in this role, you should have a Bachelor's degree in a relevant technical discipline or equivalent experience, along with a minimum of 1 year of software solution development experience using agile methodologies. Proficiency in SQL for data analytics and prior experience with cloud development, particularly in Microsoft Azure, will be advantageous. Preferred qualifications include additional years of software development and data analytics experience, as well as familiarity with tools such as Azure EventHub, Azure Power BI, and Teradata Vantage. Your success in this position will be further enhanced by expertise in the Azure Technology stack, practical knowledge of Azure cloud services, and relevant certifications such as Azure Development Track and Spark Certification.
A combination of development, administration, and support experience in various tools and platforms, including scripting languages, data platforms, and BI analytics tools, will be beneficial for your role in driving data management and governance initiatives within the organization. Effective communication skills, both verbal and written, will be essential for engaging with stakeholders across different levels of the organization. Additionally, your physical abilities should enable you to perform the essential functions of the role safely and successfully, with or without reasonable accommodations as required by law.

At American Airlines, diversity and inclusion are integral to our workforce, fostering an inclusive environment where employees can thrive and contribute to the airline's success. Join us at American Airlines and embark on a journey where your technical expertise and innovative spirit will play a pivotal role in shaping the future of travel. Feel free to be yourself as you contribute to the seamless operation of the world's largest airline, caring for people on life's journey.
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
Kolkata, West Bengal
On-site
You must have knowledge of Azure Data Lake, Azure Functions, Azure Databricks, Azure Data Factory, and PostgreSQL. Working knowledge of Azure DevOps and Git flow would be an added advantage. Alternatively, you should have working knowledge of AWS Kinesis, AWS EMR, AWS Glue, AWS RDS, AWS Athena, and AWS Redshift. Demonstrable expertise in working with timeseries data is essential. Experience in delivering data engineering/data science projects in Industry 4.0 is an added advantage. Knowledge of Palantir is required.

You must possess strong problem-solving skills with a focus on sustainable and reusable development. Proficiency in statistical computing languages such as Python/PySpark, with Pandas, NumPy, and seaborn/matplotlib, is necessary. Knowledge of Streamlit is a plus. Familiarity with Scala, GoLang, Java, and big data tools such as Hadoop, Spark, and Kafka is beneficial. Experience with relational databases such as Microsoft SQL Server, MySQL, PostgreSQL, and Oracle, and with NoSQL databases including Hadoop, Cassandra, and MongoDB, is expected. Proficiency in data pipeline and workflow management tools such as Azkaban, Luigi, and Airflow is required. Experience in building and optimizing big data pipelines, architectures, and data sets is crucial, as are strong analytical skills for working with unstructured datasets. You will provide innovative solutions to data engineering problems, document technology choices and integration patterns, apply best practices for project delivery with clean code, and demonstrate innovation and proactiveness in meeting project requirements.

Reporting to: Director, Intelligent Insights and Data Strategy
Travel: Must be willing to be deployed at client locations worldwide for long and short terms, and flexible for shorter durations within India and abroad.
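The timeseries expertise this listing asks for often reduces to windowed aggregation. A minimal standard-library sketch, with invented sensor readings and a simple hourly window (field layout and values are illustrative, not from the posting):

```python
from datetime import datetime
from itertools import groupby
from statistics import mean

def hourly_means(readings):
    """Aggregate (ISO-timestamp, value) pairs into per-hour mean values."""
    parsed = sorted((datetime.fromisoformat(ts), v) for ts, v in readings)
    return {
        hour.isoformat(): mean(v for _, v in group)
        for hour, group in groupby(
            parsed,
            key=lambda r: r[0].replace(minute=0, second=0, microsecond=0),
        )
    }

readings = [
    ("2024-01-01T10:05:00", 2.0),
    ("2024-01-01T10:45:00", 4.0),
    ("2024-01-01T11:10:00", 6.0),
]
print(hourly_means(readings))
# {'2024-01-01T10:00:00': 3.0, '2024-01-01T11:00:00': 6.0}
```

In a Spark or Pandas pipeline the same idea is expressed with `groupBy` over a truncated timestamp or `resample("1h")`; the grouping-by-window logic is identical.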
Posted 1 week ago
5.0 - 8.0 years
18 - 30 Lacs
Bengaluru
Work from Office
Build and learn analytics solutions and platforms: data lakes, Fabric solutions, and Big Data/cloud data ingestion pipelines. Conceptualize and implement data processing and harmonization. Schedule, orchestrate, and validate pipelines.

Required Candidate profile: similar work experience, ability to work in a fast-paced environment, and keenness to learn new technologies.
Posted 1 week ago
3.0 - 6.0 years
8 - 13 Lacs
Bengaluru
Work from Office
Share resume to: sowmya.v@acesoftlabs.com

Hiring: Data Engineer (3-6 Years Experience), External Position (Z2 Grade)
Location: Bangalore
Experience: 3-6 years
Key Technologies: Azure Databricks, Snowflake, DBT, Azure Data Factory, Hadoop, Spark, SQL

Role Overview
We are looking for a skilled Data Engineer to design, build, and maintain scalable data pipelines and cloud data infrastructure. You'll play a key role in enabling data accessibility, quality, and insights across the organization using tools like Azure Databricks, Snowflake, and DBT.

Key Responsibilities
- Develop and optimize scalable data pipelines using Azure Databricks, Snowflake, and DBT.
- Build and manage efficient data models and architecture within Snowflake.
- Implement transformation logic in DBT to standardize data for analytics.
- Monitor and troubleshoot data pipeline issues; ensure performance and reliability.
- Collaborate with analysts, data scientists, and stakeholders on data-driven projects.
- Work with orchestration tools like Azure Data Factory and CI/CD tools like Azure DevOps, GitHub, or Jenkins.

Required Qualifications
- Bachelor's in Computer Science, Data Engineering, or a related field.
- Strong experience in Azure Databricks, Snowflake, DBT, and SQL.
- Proficiency in data modeling and transformation best practices.
- Solid knowledge of Big Data frameworks: Hadoop, Spark, Hive.
- Hands-on experience with Azure Data Factory and scripting in Scala/Python.
- Understanding of Hadoop architecture and data lake solutions.
- Exposure to CI/CD pipelines using tools like Azure DevOps, GitHub, or Jenkins is a plus.
Posted 1 week ago
5.0 - 10.0 years
6 - 8 Lacs
Hyderabad
Remote
Job Title: Senior-Level Data Engineer, Healthcare Domain
Location: Remote Option
Experience: 5+ Years
Employment Type: Full-Time

About the Role
We are looking for a Senior Data Engineer with extensive experience in healthcare data ecosystems and Databricks-based pipelines. The ideal candidate brings deep technical expertise in building large-scale data platforms, optimizing performance, and ensuring compliance with healthcare data standards (e.g., HIPAA, EDI, HCC). This role requires the ability to lead data initiatives, mentor junior engineers, and work cross-functionally with product, analytics, and compliance teams.

Key Responsibilities
- Architect, develop, and manage large-scale, secure, and high-performance data pipelines on Databricks using Spark, Delta Lake, and cloud-native tools.
- Design and implement healthcare-specific data models to support analytics, AI/ML, and operational reporting.
- Ingest and transform complex data types such as 837/835 claims, EHR/EMR records, provider/member files, lab results, and clinical notes.
- Lead data governance, quality, and security initiatives, ensuring compliance with HIPAA, HITECH, and organizational policies.
- Collaborate with cross-functional stakeholders to understand data needs and provide robust, scalable solutions.
- Mentor junior and mid-level engineers through code reviews and technical guidance.
- Identify performance bottlenecks and implement optimizations in Spark jobs and SQL transformations.
- Own and evolve best practices for CI/CD, version control, and deployment automation.
- Stay up to date with industry standards (e.g., FHIR, HL7, OMOP) and evaluate new tools and technologies.

Required Qualifications
- 5+ years of experience in data engineering, with 3+ years in the healthcare or life sciences domain.
- Deep expertise with Databricks, Scala, Apache Spark (preferably PySpark), and Delta Lake.
- Proficiency in SQL, Python, and data modeling (dimensional/star schema, normalized models).
- Strong command of 837/835 EDI formats, CPT/ICD-10/DRG/HCC coding, and data regulatory frameworks.
- Experience with cloud platforms such as Azure, AWS, or GCP and cloud-native data services (e.g., S3, ADLS, Glue, Data Factory).
- Familiarity with orchestration tools like Airflow, dbt, or Azure Data Factory.
- Proven ability to work in agile environments, manage stakeholder expectations, and deliver end-to-end data products.
- Experience implementing monitoring, observability, and alerting for data pipelines.
- Strong written and verbal communication skills for both technical and non-technical audiences.

Education
Bachelor's degree in Business Administration, Healthcare Informatics, Information Systems, or a related field.
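Transforming 835 remittance data of the kind described above commonly involves keeping only the latest adjudication per claim before loading it downstream. A minimal pure-Python sketch, using hypothetical field names (`claim_id`, `adjudicated_on`, `paid_amount`) rather than actual EDI segment identifiers:

```python
def latest_adjudication(remits):
    """Keep the most recent remittance record per claim.

    `remits` is a list of dicts; field names here are placeholders
    invented for illustration, not real 835 segment names.
    """
    latest = {}
    for r in remits:
        cur = latest.get(r["claim_id"])
        # Later adjudication dates supersede earlier ones for the same claim.
        if cur is None or r["adjudicated_on"] > cur["adjudicated_on"]:
            latest[r["claim_id"]] = r
    return latest

remits = [
    {"claim_id": "C1", "adjudicated_on": "2024-03-01", "paid_amount": 100.0},
    {"claim_id": "C1", "adjudicated_on": "2024-03-15", "paid_amount": 120.0},
    {"claim_id": "C2", "adjudicated_on": "2024-03-02", "paid_amount": 50.0},
]
print(latest_adjudication(remits)["C1"]["paid_amount"])  # 120.0
```

On Databricks the same dedup is typically a Spark window function (`row_number()` over `claim_id` ordered by adjudication date descending), but the keep-latest-per-key logic is the same.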
Posted 1 week ago
5.0 - 9.0 years
15 - 25 Lacs
Pune, Chennai, Bengaluru
Hybrid
Databricks Developer
Primary Skills: Azure Data Factory, Azure Databricks
Secondary Skills: SQL, Sqoop, Hadoop
Experience: 5 to 9 years
Location: Chennai, Bangalore, Pune, Coimbatore

Requirements:
- Cloud certified in one of these categories: Azure Data Engineer, Azure Data Factory, Azure Databricks
- Spark (PySpark or Scala), SQL, data ingestion, curation
- Semantic modelling / optimization of the data model to work within Rahona
- Experience in Azure ingestion from on-prem sources, e.g. mainframe, SQL Server, Oracle
- Experience in Sqoop/Hadoop
- Microsoft Excel (for metadata files with requirements for ingestion)
- Any other certificate in Azure/AWS/GCP and hands-on data engineering experience in the cloud
- Strong programming skills in at least one of Python, Scala, or Java
Posted 1 week ago
6.0 - 11.0 years
14 - 19 Lacs
Bengaluru
Remote
Role: Azure Specialist, CDM Smith
Location: Bangalore
Mode: Remote

Key Responsibilities:
- Databricks Platform: Act as a subject matter expert for the Databricks platform within the Digital Capital team; provide technical guidance, best practices, and innovative solutions.
- Databricks Workflows and Orchestration: Design and implement complex data pipelines using Azure Data Factory or Qlik Replicate.
- End-to-End Data Pipeline Development: Design, develop, and implement highly scalable and efficient ETL/ELT processes using Databricks notebooks (Python/Spark or SQL) and other Databricks-native tools.
- Delta Lake Expertise: Utilize Delta Lake for building reliable data lake architecture, implementing ACID transactions, schema enforcement, and time travel, and optimizing data storage for performance.
- Spark Optimization: Optimize Spark jobs and queries for performance and cost efficiency within the Databricks environment. Demonstrate a deep understanding of Spark architecture, partitioning, caching, and shuffle operations.
- Data Governance and Security: Implement and enforce data governance policies, access controls, and security measures within the Databricks environment using Unity Catalog and other Databricks security features.
- Collaborative Development: Work closely with data scientists, data analysts, and business stakeholders to understand data requirements and translate them into Databricks-based data solutions.
- Monitoring and Troubleshooting: Establish and maintain monitoring, alerting, and logging for Databricks jobs and clusters, proactively identifying and resolving data pipeline issues.
- Code Quality and Best Practices: Champion best practices for Databricks development, including version control (Git), code reviews, testing frameworks, and documentation.
- Performance Tuning: Continuously identify and implement performance improvements for existing Databricks data pipelines and data models.
- Cloud Integration: Integrate Databricks with other cloud services (e.g., Azure Data Lake Storage Gen2, Azure Synapse Analytics, Azure Key Vault) for a seamless data ecosystem.
- Traditional Data Warehousing & SQL: Design, develop, and maintain schemas and ETL processes for traditional enterprise data warehouses. Demonstrate expert-level proficiency in SQL for complex data manipulation, querying, and optimization within relational database systems.

Mandatory Skills:
- Experience with Databricks and Databricks workflows and orchestration
- Python: hands-on experience in automation and scripting
- Azure: strong knowledge of data lakes, data warehouses, and cloud architecture
- Solution Architecture: experience designing web applications and data engineering solutions
- DevOps Basics: familiarity with Jenkins and CI/CD pipelines
- Communication: excellent verbal and written communication skills
- Fast Learner: ability to quickly grasp new technologies and adapt to changing requirements
- Extensive experience with Spark (PySpark, Spark SQL) for large-scale data processing

Additional Information:
Qualifications: BE, MS, M.Tech, or MCA.
Certifications: Databricks Certified Associate
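Much of the Delta Lake reliability work described in this listing rests on idempotent upserts: re-running a load must update existing rows rather than duplicate them (Delta Lake's MERGE INTO serves this purpose). The same idea can be sketched with SQLite from the standard library standing in for Databricks; the table and column names are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE dim_customer (id TEXT PRIMARY KEY, name TEXT, updated TEXT)"
)

def upsert(rows):
    """Insert new keys, update existing ones: safe to re-run on the same batch."""
    conn.executemany(
        """INSERT INTO dim_customer (id, name, updated) VALUES (?, ?, ?)
           ON CONFLICT(id) DO UPDATE
           SET name = excluded.name, updated = excluded.updated""",
        rows,
    )

upsert([("c1", "Asha", "2024-01-01"), ("c2", "Ravi", "2024-01-01")])
upsert([("c1", "Asha K", "2024-02-01")])  # re-delivery updates c1, adds nothing
print(conn.execute("SELECT name FROM dim_customer WHERE id = 'c1'").fetchone()[0])
# Asha K
```

Delta Lake adds what SQLite cannot: ACID guarantees over distributed object storage, schema enforcement on write, and time travel to prior table versions, but the merge semantics the listing asks for are the same shape.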
Posted 1 week ago
5.0 - 10.0 years
14 - 19 Lacs
Bengaluru
Remote
Role: Azure Specialist, DBT
Location: Bangalore
Mode: Remote

Education and Work Experience Requirements:
- Overall 5 to 9 years of experience in the IT industry, with a minimum of 6 years working on data engineering
- Translate complex business requirements into analytical SQL views using DBT
- Support data ingestion pipelines using Airflow and Data Factory
- Develop DBT macros to enable scalable and reusable code automation

Mandatory Skills:
- Strong experience with DBT (Data Build Tool), or strong SQL/relational DWH knowledge (must have)
- Proficiency in SQL and a strong understanding of relational data warehouse concepts
- Hands-on experience with Databricks, primarily Databricks SQL (good to have)
- Familiarity with Apache Airflow and Azure Data Factory (nice to have)
- Experience working with Snowflake (nice to have)

Additional Information:
Qualifications: BE, MS, M.Tech, or MCA.
Certifications: Azure Big Data, Databricks Certified Associate
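"Translating business requirements into analytical SQL views" can be illustrated with plain SQL; in DBT the SELECT below would live in a model file and be materialized as a view. The orders schema is invented for illustration, and SQLite from the standard library stands in for the warehouse:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (id INTEGER, region TEXT, amount REAL);
INSERT INTO orders VALUES
    (1, 'south', 100.0),
    (2, 'south', 50.0),
    (3, 'north', 70.0);

-- Business requirement: "revenue and order volume per region",
-- expressed as a reusable analytical view.
CREATE VIEW revenue_by_region AS
    SELECT region, SUM(amount) AS revenue, COUNT(*) AS order_count
    FROM orders
    GROUP BY region;
""")
print(conn.execute("SELECT * FROM revenue_by_region ORDER BY region").fetchall())
# [('north', 70.0, 1), ('south', 150.0, 2)]
```

DBT's contribution on top of raw SQL is the surrounding tooling: dependency-ordered builds, Jinja macros for reusable logic, and tests on the resulting views.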
Posted 1 week ago