7.0 - 12.0 years
30 - 40 Lacs
bengaluru
Remote
Role & responsibilities: This role is responsible for the design, development, and maintenance of data integration and reporting solutions. The ideal candidate will possess expertise in Databricks, strong skills in SQL Server, SSIS, and SSRS, and experience with other modern data engineering tools such as Azure Data Factory. The position requires a proactive, results-oriented individual with a passion for data and a strong understanding of data warehousing principles. It also involves leading a team and working with a large US client, building new data engineering solutions from the ground up.
Posted 19 hours ago
3.0 - 6.0 years
0 Lacs
thiruvananthapuram, kerala, india
On-site
While technology is the heart of our business, a global and diverse culture is the heart of our success. We love our people and take pride in providing them a culture built on transparency, diversity, integrity, learning, and growth. If working in an environment that encourages you to innovate and excel, in both your professional and personal life, interests you, you would enjoy a career with Quantiphi!

Role: Senior Data Engineer (Databricks)
Experience Level: 3 to 6 Years
Work location: Mumbai, Bengaluru, Trivandrum

Role & Responsibilities:
- Design and implement data solutions (ETL pipelines, processing, and transformation logic) using Snowflake as a key platform.
- Design virtual warehouses optimized for performance and cost (auto-scaling, idle time, etc.).
- Write SQL and stored procedures to implement business logic and other critical requirements such as encryption and data masking.
- Implement solutions with features like Snowpipe, tasks, streams, and dynamic tables.
- Design solutions by leveraging key concepts of Snowflake's architecture, its data distribution and partitioning mechanisms, data caching, etc.
- Identify long-running queries, inspect the logs, and provide solutions to optimize them.

Must-have skills:
- 3+ years of hands-on experience in data structures, distributed systems, Spark, SQL, PySpark, and NoSQL databases.
- Strong software development skills in at least one of: Python, PySpark, or Scala.
- Develop and maintain scalable, modular data pipelines using Databricks and Apache Spark (a minimal sketch follows this posting).
- Experience building and supporting large-scale systems in a production environment.
- Experience with at least one of the cloud storages: AWS S3, Azure Data Lake, GCS buckets.
- Exposure to the ELT tools offered by cloud platforms, such as ADF, AWS Glue, and Google Dataflow.
- Integrate Databricks with other cloud services such as AWS, Azure, or Google Cloud.
- Implement and optimize Spark jobs, data transformations, and data processing workflows in Databricks.
- Manage and configure Databricks environments, including clusters and notebooks.
- Experience working with different file formats: Parquet, Databricks Delta Lake.
- Experience with data governance on Databricks, implementing Unity Catalog and Delta Sharing.
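For illustration, a minimal sketch of the modular Databricks/PySpark pipeline this posting describes; the paths, table name, and filter rule are hypothetical placeholders, not part of the posting.

```python
# A minimal sketch of a modular Databricks/PySpark pipeline of the kind the
# posting above asks for. Paths, table name, and business rule are hypothetical.
from pyspark.sql import SparkSession, DataFrame
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-pipeline").getOrCreate()

def extract(path: str) -> DataFrame:
    # Read raw files from cloud storage (S3 / ADLS / GCS via their URI schemes)
    return spark.read.format("json").load(path)

def transform(df: DataFrame) -> DataFrame:
    # Business logic lives in a pure function so it is easy to unit test
    return (df.filter(F.col("status") == "COMPLETE")
              .withColumn("order_date", F.to_date("order_ts")))

def load(df: DataFrame, table: str) -> None:
    # Persist as a Delta table; date partitioning is one common layout choice
    df.write.format("delta").mode("append").partitionBy("order_date").saveAsTable(table)

load(transform(extract("s3://landing/orders/")), "analytics.orders_clean")
```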
Posted 3 days ago
5.0 - 9.0 years
0 Lacs
haryana
On-site
As a Lead Azure Data Engineer at Syneos Health, you will play a crucial role in utilizing your expertise to drive successful outcomes in the biopharmaceutical solutions industry. Here is what you can expect in this role:

**Role Overview:**
- You will lead a small team of Azure Data Engineers, bringing your 8+ years of overall IT experience to the table.
- Your responsibilities will include database development, data integration initiatives, and building ETL/ELT solutions using Azure data integration tools such as Azure Data Factory, Azure Functions, Logic Apps, and Databricks.
- You should be highly experienced with DLT, Unity Catalog, Databricks SQL, and Workflows, with hands-on experience in Azure Databricks and PySpark to develop ETL pipelines (a hedged sketch of a parameterized notebook follows this posting).
- Strong experience in reading and writing queries in Azure or any relational database is required, along with prior knowledge of best practices in SDLC and use of source control management tools, preferably Azure DevOps to GitHub integration.
- You will work in a production environment for multiple clients in a compliance industry, translating customer needs into technical requirements and contributing to data management framework implementation and adoption.
- Good verbal and written communication skills, excellent interpersonal skills, troubleshooting abilities, and a knack for problem-solving will be key traits for success in this role.

**Key Responsibilities:**
- Lead a small team of Azure Data Engineers
- Develop ETL/ELT solutions using Azure data integration tools
- Utilize hands-on experience in Azure Databricks and PySpark for ETL pipelines
- Read and write queries in Azure or any relational database
- Implement best practices in SDLC and use source control management tools
- Translate customer needs into technical requirements
- Contribute to data management framework implementation and adoption

**Qualifications Required:**
- 8+ years of overall IT experience, with a minimum of 5 years leading a small team of Azure Data Engineers
- 5+ years of experience in database development and data integration initiatives
- 5+ years of experience building ETL/ELT solutions using Azure data integration tools
- Hands-on experience in Azure Databricks and PySpark
- Strong experience in reading and writing queries in Azure or any relational database
- Prior knowledge of best practices in SDLC and use of source control management tools
- Experience working in a production environment for multiple clients in a compliance industry
- Ability to work in a team and translate customer needs into technical requirements
- Good verbal and written communication skills, excellent interpersonal skills, and problem-solving abilities

Please note that tasks, duties, and responsibilities may vary, and the Company may assign other responsibilities as needed. The Company values diversity and inclusion to create a workplace where everyone feels they belong.
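A hedged sketch of the ADF-driven, parameterized Databricks notebook pattern this posting implies; `dbutils` and `spark` are ambient objects inside a Databricks notebook, and the widget name, tables, and column are hypothetical.

```python
# Hedged sketch: a Databricks notebook parameterized for an ADF-triggered load.
from pyspark.sql import functions as F

dbutils.widgets.text("load_date", "")            # ADF passes this as a base parameter
load_date = dbutils.widgets.get("load_date")

df = (spark.read.table("raw.transactions")
           .filter(F.col("ingest_date") == load_date))

(df.write.format("delta").mode("overwrite")
   .option("replaceWhere", f"ingest_date = '{load_date}'")   # idempotent partition reload
   .saveAsTable("curated.transactions"))
```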
Posted 5 days ago
11.0 - 19.0 years
30 - 45 Lacs
hyderabad, pune, bengaluru
Hybrid
Dear Candidate,

One of the top MNCs is immediately hiring for an Azure Architect role.

Role: Azure Architect
Skills: Databricks, PySpark, and architecture solutions
Experience: 11-19 Years
Location: Hyderabad, Pune, Coimbatore, Bangalore
Employment Type: Full-time
Work Mode: Hybrid
Notice Period: Immediate to 1 week

Roles and Responsibilities:
- Lead migration from native Spark to Databricks.
- Architect the migration to Databricks.
- Build data governance solutions using tools like Unity Catalog and Starburst.
- Develop orchestration workflows using Databricks and ADF.
- Implement CI/CD pipelines for Databricks in Azure DevOps.
- Process near-real-time data through Auto Loader and DLT pipelines (a minimal sketch follows this posting).
- Design secure, scalable infrastructure for Delta Lake and Spark SQL.
- Optimize Databricks environments for cost-effectiveness.
- Extract complex business logic from on-prem solutions like SSIS, Informatica, Vertica, etc., into PySpark.
- Build analytical dashboards on top of the Databricks Lakehouse.

Interested candidates can reach out to balasree.j@kanarystaffing.com
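A minimal sketch of the Auto Loader (cloudFiles) ingestion named in the posting; `spark` is the ambient Databricks session, and the paths and table names are hypothetical.

```python
# Hedged sketch of near-real-time ingestion with Auto Loader (cloudFiles).
stream = (spark.readStream.format("cloudFiles")
          .option("cloudFiles.format", "json")
          .option("cloudFiles.schemaLocation", "/mnt/checkpoints/orders_schema")
          .load("abfss://landing@myaccount.dfs.core.windows.net/orders/"))

(stream.writeStream
       .option("checkpointLocation", "/mnt/checkpoints/orders")
       .trigger(availableNow=True)       # incremental, batch-style processing
       .toTable("bronze.orders"))
```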
Posted 5 days ago
4.0 - 10.0 years
0 Lacs
haryana
On-site
As a Data Engineer at Capgemini, you will have the opportunity to design, build, and optimize modern data solutions using your expertise in SQL, Spark SQL, Databricks, Unity Catalog, and PySpark. With a focus on data warehousing concepts and best practices, you will contribute to the development of scalable, high-performance data platforms. You will bring 4-10 years of experience in data engineering/ETL development, showcasing your strong skills in SQL, Spark SQL, and Databricks on Azure or AWS. Your proficiency in PySpark for data processing and knowledge of Unity Catalog for data governance will be key to the success of data solutions (a hedged sketch of Unity Catalog grants follows this posting). Additionally, your understanding of data warehousing principles and dimensional modeling, and familiarity with Azure Data Services or AWS Data Services, will be advantageous. Your role will involve collaborating with teams globally, demonstrating strong customer orientation, decision-making, problem-solving, communication, and presentation skills. You will have the opportunity to shape compelling solutions, interact with multicultural teams, and exhibit leadership qualities to achieve common goals through collaboration. Capgemini, a global business and technology transformation partner, values diversity and innovation in service of a sustainable world. With a legacy of over 55 years, Capgemini is known for delivering end-to-end services and solutions in AI, cloud, data, and more. Join us in unlocking the value of technology and contributing to a more inclusive world.
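A hedged sketch of Unity Catalog-style governance from a notebook, as referenced above; the catalog, schema, and group names are hypothetical, and the privilege names follow standard Unity Catalog SQL but should be verified against your workspace version.

```python
# Hedged sketch: creating governed objects and granting access in Unity Catalog.
spark.sql("CREATE CATALOG IF NOT EXISTS analytics")
spark.sql("CREATE SCHEMA IF NOT EXISTS analytics.sales")
spark.sql("GRANT USE CATALOG ON CATALOG analytics TO `data_engineers`")
spark.sql("GRANT USE SCHEMA, SELECT ON SCHEMA analytics.sales TO `bi_analysts`")
```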
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
bengaluru, karnataka, india
On-site
Role: Assistant Manager - Data Engineering
Experience: 4 to 8 years
Location: Bengaluru, Karnataka, India (BLR)

Job Description:
- Communication and leadership experience, with experience initiating and driving projects.
- Experience with data sets, Hadoop, and data modernisation tools.
- Experience in SQL or similar languages and in any one cloud platform (AWS/Azure/GCP).
- Development experience in at least one object-oriented language (Python, Java, etc.).
- BA/BS in Computer Science, Math, Physics, or other technical fields.

Job Responsibilities:

Data Engineering & Architecture
- Design and implement scalable, optimized data pipelines on Databricks using Delta Lake, PySpark, and SQL.
- Develop ETL/ELT frameworks for batch and streaming data processing.
- Ensure data quality, governance, and observability using Unity Catalog, Great Expectations, or custom validations.
- Optimize Spark jobs for performance, cost, and scalability.

Cloud & Infrastructure (Azure/AWS/GCP)
- Deploy and manage Databricks clusters, workspaces, and jobs.
- Work with Terraform or ARM templates for infrastructure automation.
- Integrate cloud-native services like Azure Data Factory, AWS Glue, or GCP Cloud Composer.

MLOps & CI/CD Automation
- Implement CI/CD pipelines for Databricks notebooks, workflows, and ML models.
- Work with MLflow for model tracking and lifecycle management (a minimal sketch follows this posting).
- Automate data pipelines using Azure DevOps, GitHub Actions, or Jenkins.

Leadership & Collaboration
- Lead a team of data engineers, ensuring best practices and code quality.
- Collaborate with data scientists, analysts, and business teams to understand requirements.
- Conduct performance reviews, technical mentoring, and upskilling sessions.

Skills:
- Strong hands-on experience in Databricks, Apache Spark (PySpark/Scala), and Delta Lake.
- Expertise in SQL, ETL/ELT pipelines, and data modelling.
- Experience with Azure, AWS, or GCP cloud platforms.
- Knowledge of MLOps, MLflow, and CI/CD best practices.
- Experience in workflow orchestration using Databricks Workflows, Airflow, or Prefect.
- Understanding of cost optimization, cluster tuning, and performance monitoring in Databricks.
- Strong leadership, stakeholder management, and mentoring skills.
- Experience with data lakehouse architectures and Unity Catalog.
- Hands-on experience with Terraform, Infrastructure-as-Code (IaC), or Kubernetes.
- Familiarity with data governance, security, and privacy frameworks.
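A minimal sketch of the MLflow model tracking listed under MLOps above; the model and metric are placeholders.

```python
# Hedged sketch of MLflow experiment tracking.
import mlflow
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
with mlflow.start_run(run_name="baseline"):
    model = LogisticRegression(max_iter=200).fit(X, y)
    mlflow.log_param("max_iter", 200)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(model, "model")   # logs the artifact for lifecycle management
```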
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
pune, maharashtra
On-site
As a Senior Databricks Developer at Newscape Consulting, you will play a crucial role on our data engineering team, focusing on building scalable and efficient data pipelines using Databricks, Apache Spark, Delta Lake, and cloud-native services (Azure/AWS/GCP). Your responsibilities will include collaborating closely with data architects, data scientists, and business stakeholders to deliver high-performance, production-grade solutions that enhance user experience and productivity in the healthcare industry. Key skills include strong hands-on experience with Databricks, including Workspace, Jobs, DLT, Repos, and Unity Catalog. Proficiency in PySpark and Spark SQL, and optionally Scala, is essential. You should also have a solid understanding of Delta Lake, Lakehouse architecture, and the medallion architecture (a hedged DLT sketch follows this posting). Additionally, proficiency in at least one cloud platform, such as Azure, AWS, or GCP, is required. Experience in CI/CD for Databricks using tools like Azure DevOps or GitHub Actions, strong SQL skills, and familiarity with data warehousing concepts are essential for this role. Knowledge of data governance, lineage, and catalog tools like Unity Catalog or Purview will be beneficial, as will familiarity with orchestration tools like Airflow, Azure Data Factory, or Databricks Workflows. This position is based in Pune, India, and is a full-time role with the option to work from the office. Strong communication, problem-solving, and stakeholder management skills are key attributes we are looking for in the ideal candidate.
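A hedged sketch of a Delta Live Tables medallion flow (bronze to silver) as mentioned in the posting; DLT code runs only inside a Databricks DLT pipeline, and the paths, table names, and expectation are hypothetical.

```python
# Hedged sketch of a DLT medallion pipeline.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw events landed as-is (bronze)")
def bronze_events():
    return (spark.readStream.format("cloudFiles")
                 .option("cloudFiles.format", "json")
                 .load("/mnt/landing/events/"))

@dlt.table(comment="Cleaned events (silver)")
@dlt.expect_or_drop("valid_id", "event_id IS NOT NULL")   # declarative quality gate
def silver_events():
    return dlt.read_stream("bronze_events").withColumn("event_date", F.to_date("ts"))
```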
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
pune, maharashtra
On-site
As an Azure Databricks Lead, you will be responsible for designing, implementing, and optimizing data solutions using Azure Databricks. Your expertise will be crucial in creating robust data pipelines, ensuring data quality, and improving overall performance. Collaborating with cross-functional teams, you will deliver high-quality solutions that meet business requirements. You will design and develop scalable data processing pipelines using Azure Databricks and PySpark, and implement ETL processes to ingest, transform, and load data from various sources. Collaboration with data engineers and architects will ensure efficient data movement and transformation. You will establish data quality checks and validation rules within Azure Databricks, monitor data quality metrics, and address anomalies promptly, working closely with data governance teams to maintain data accuracy and consistency. Utilizing Azure Databricks Unity Catalog, you will manage metadata, tables, and views, integrating Databricks assets seamlessly with other Azure services and ensuring proper documentation and organization of data assets. You will use Delta Lake features for reliable data storage and versioning (a hedged sketch follows this posting), and handle performance optimization by profiling and analyzing query performance and optimizing SQL queries, Spark jobs, and transformations. Managing compute resources effectively within Azure Databricks clusters, you will scale clusters dynamically based on workload requirements, monitoring resource utilization and cost efficiency. Seamless integration of Azure Databricks with various source systems, handling schema evolution and changes in source data, will be part of your role, as will converting existing stored procedures (e.g., from SQL Server) into Databricks-compatible code and optimizing them for better performance within Databricks. Experience in SSRS conversion is preferred.

Qualifications and Skills:
- Education: Bachelor's degree in Computer Science, Information Technology, or a related field.
- Experience: Minimum 5 years of experience architecting and building data platforms on Azure. Proficiency in Azure Databricks, PySpark, and SQL. Familiarity with Delta Lake concepts.
- Certifications (preferred): Microsoft Certified: Azure Data Engineer Associate or similar. Databricks Certified Associate Developer for Apache Spark.
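A hedged sketch of the Delta Lake versioning and optimization work described above; the table and column names are hypothetical, and `spark` is the ambient Databricks session.

```python
# Hedged sketch: Delta history, time travel, and file compaction.
spark.sql("DESCRIBE HISTORY curated.transactions").show()   # audit trail of versions

# Time travel: read an earlier version for debugging or rollback comparisons
old = (spark.read.format("delta")
            .option("versionAsOf", 12)
            .table("curated.transactions"))

# Compact small files and co-locate a frequently filtered column
spark.sql("OPTIMIZE curated.transactions ZORDER BY (customer_id)")
```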
Posted 1 week ago
14.0 - 20.0 years
27 - 37 Lacs
pune, chennai, bengaluru
Hybrid
If interested, please share the below details along with your resume to PriyaM4@hexaware.com: Total Exp, CTC, ECTC, NP, Loc. Required: strong proficiency in Databricks (Unity Catalog, tagging, KPI anonymizing extraction), with Solution Architect experience.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
You should have 5+ years of experience with skills in Databricks, Delta Lake, PySpark or Scala Spark, and Unity Catalog. Azure/AWS cloud skills are good to have. You will be responsible for ingesting and transforming batch and streaming data on the Databricks Lakehouse Platform. Excellent communication skills are required. Your responsibilities will include overseeing and supporting processes by reviewing daily transactions and performance dashboards, supporting the team in improving performance parameters, recording and tracking all queries received, ensuring standard processes are followed, resolving client queries within defined SLAs, developing an understanding of processes/products for team members, analyzing call logs, identifying red flags, and escalating serious client issues. You will also handle technical escalations through effective diagnosis and troubleshooting, manage and resolve technical roadblocks/escalations, escalate issues when necessary, provide product support and resolution to clients, troubleshoot client queries in a user-friendly manner, offer alternative solutions to retain customers, communicate effectively with clients, and follow up with customers to record feedback. In addition, you will be responsible for building people capability to ensure operational excellence, mentoring and guiding Production Specialists, collating and conducting trainings to bridge skill gaps, staying current with product features, enrolling in trainings as per client requirements, identifying common problems and recommending resolutions, and updating job knowledge through self-learning opportunities and personal networks. Your performance will be measured on process metrics such as the number of cases resolved per day, compliance with process and quality standards, meeting SLAs, Pulse score, and customer feedback, and on team management metrics such as productivity, efficiency, and absenteeism. Capability development will be measured by completed triages and technical test performance. Join Wipro, a company focused on digital transformation and reinvention. We are looking for individuals who are inspired by reinvention and want to evolve their careers and skills. Wipro is a place that empowers you to design your own reinvention and realize your ambitions. Applications from people with disabilities are explicitly welcome.
Posted 2 weeks ago
9.0 - 13.0 years
0 Lacs
haryana
On-site
About Markovate: At Markovate, you don't just follow trends, we drive them. We transform businesses through innovative AI and digital solutions that turn vision into reality. Our team harnesses breakthrough technologies to craft bespoke strategies that align seamlessly with our clients' ambitions. From AI consulting and Gen AI development to pioneering AI agents and agentic AI, we empower our partners to lead their industries with forward-thinking precision and unmatched expertise.

Overview: We are seeking a highly experienced and innovative Senior Data Engineer with a strong background in hybrid cloud data integration, pipeline orchestration, and AI-driven data modelling.

Requirements:
- 9+ years of experience in data engineering and data architecture.
- Excellent communication and interpersonal skills, with the ability to engage with teams.
- Strong problem-solving, decision-making, and conflict-resolution abilities.
- Proven ability to work independently and lead cross-functional teams.
- Ability to work in a fast-paced, dynamic environment and handle sensitive issues with discretion and professionalism.
- Ability to maintain confidentiality and handle sensitive information with attention to detail and discretion.
- Strong work ethic and trustworthiness.
- Must be highly collaborative and team-oriented.

Responsibilities:
- Design and develop hybrid ETL/ELT pipelines using AWS Glue and Azure Data Factory (ADF).
- Process files from AWS S3 and Azure Data Lake Gen2, including schema validation and data profiling (a hedged sketch follows this posting).
- Implement event-based orchestration using AWS Step Functions and Apache Airflow (Astronomer).
- Develop and maintain bronze, silver, and gold data layers using DBT or Coalesce.
- Create scalable ingestion workflows using Airbyte, AWS Transfer Family, and Rivery.
- Integrate with metadata and lineage tools like Unity Catalog and OpenMetadata.
- Build reusable components for schema enforcement, EDA, and alerting (e.g., MS Teams).
- Work closely with QA teams to integrate test automation and ensure data quality.
- Collaborate with cross-functional teams, including data scientists and business stakeholders, to align solutions with AI/ML use cases.
- Document architectures, pipelines, and workflows for internal stakeholders.

Experience with:
- Cloud platforms such as AWS (Glue, Step Functions, Lambda, S3, CloudWatch, SNS, Transfer Family) and Azure (ADF, ADLS Gen2, Azure Functions, Event Grid).
- Transformation and ELT tools like Databricks (PySpark), DBT, Coalesce, and Python.
- Data ingestion methods including Airbyte, Rivery, SFTP/Excel files, and SQL Server extracts.
- Data modeling techniques including CEDM, Data Vault 2.0, and dimensional modelling.
- Orchestration tools such as AWS Step Functions, Airflow (Astronomer), and ADF triggers.
- Monitoring and logging tools like CloudWatch, AWS Glue metrics, MS Teams alerts, and Azure Data Explorer (ADX).
- Data governance and lineage tools: Unity Catalog, OpenMetadata, and schema drift detection.
- Version control and CI/CD using GitHub, Azure DevOps, CloudFormation, Terraform, and ARM templates.
- Cloud data platforms, ETL tools, AI/Generative AI concepts and frameworks, data warehousing solutions, big data technologies, SQL, and at least one programming language.

Great to have:
- Experience with cloud data platforms (e.g., AWS, Azure, GCP) and their data and AI services.
- Knowledge of ETL tools and frameworks (e.g., Apache NiFi, Talend, Informatica).
- Deep understanding of AI/Generative AI concepts and frameworks (e.g., TensorFlow, PyTorch, Hugging Face, OpenAI APIs).
- Experience with data modeling, data structures, and database design.
- Proficiency with data warehousing solutions (e.g., Redshift, BigQuery, Snowflake).
- Hands-on experience with big data technologies (e.g., Hadoop, Spark, Kafka).
- Proficiency in SQL and at least one programming language.

What it's like to be at Markovate:
- At Markovate, we thrive on collaboration and embrace every innovative idea.
- We invest in continuous learning to keep our team ahead in the AI/ML landscape.
- Transparent communication is key; every voice at Markovate is valued.
- Our agile, data-driven approach transforms challenges into opportunities.
- We offer flexible work arrangements that empower creativity and balance.
- Recognition is part of our DNA; your achievements drive our success.
- Markovate is committed to sustainable practices and positive community impact.
- Our people-first culture means your growth and well-being are central to our mission.
- Location: hybrid model, 2 days onsite.
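A hedged sketch of the schema-validation step in the ingestion workflows described above; the contract schema and path are hypothetical.

```python
# Hedged sketch: enforce a schema contract at ingest time.
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

expected = StructType([
    StructField("account_id", StringType(), True),
    StructField("balance", DoubleType(), True),
])

# FAILFAST raises on any row that does not parse against the contract, surfacing
# schema drift at ingest; wrap in try/except to route bad batches to quarantine.
df = (spark.read.schema(expected)
           .option("mode", "FAILFAST")
           .csv("s3://landing/accounts/", header=True))
df.count()   # forces a full parse of the files
```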
Posted 2 weeks ago
0.0 years
0 Lacs
noida, uttar pradesh, india
On-site
Why Join Us: Are you inspired to grow your career at one of India's Top 25 Best Workplaces in the IT industry? Do you want to do the best work of your life at one of the fastest growing IT services companies? Do you aspire to thrive in an award-winning work culture that values your talent and career aspirations? It's happening right here at Iris Software.

About Iris Software: At Iris Software, our vision is to be our clients' most trusted technology partner, and the first choice for the industry's top professionals to realize their full potential. With over 4,300 associates across India, U.S.A., and Canada, we help our enterprise clients thrive with technology-enabled transformation across financial services, healthcare, transportation & logistics, and professional services. Our work covers complex, mission-critical applications built with the latest technologies, such as high-value complex Application & Product Engineering, Data & Analytics, Cloud, DevOps, Data & MLOps, Quality Engineering, and Business Automation.

Working at Iris: Be valued, be inspired, be your best. At Iris Software, we invest in and create a culture where colleagues feel valued, can explore their potential, and have opportunities to grow. Our employee value proposition (EVP) is about Being Your Best - as a professional and person. It is about being challenged by work that inspires us, being empowered to excel and grow in your career, and being part of a culture where talent is valued. We're a place where everyone can discover and be their best version.

Job Description:
- Architect and manage Databricks workspaces, clusters, and jobs for scalable data processing and ML workloads (a hedged SDK sketch follows this posting).
- Implement secure data lakehouse architectures with Delta Lake, Unity Catalog, and workspace isolation.
- Optimize performance and cost-efficiency of Databricks deployments across cloud providers.
- Implement centralized logging, monitoring, and alerting for Databricks and cloud resources.
- Ensure auditability and traceability of data access and transformations.
- Support governance frameworks with metadata management and data cataloging.

Mandatory Competencies:
- Cloud - Azure: Azure Data Factory (ADF), Azure Databricks, Azure Data Lake Storage, Event Hubs, HDInsight
- Data Science and Machine Learning: Databricks
- Cloud - GCP: Cloud Functions
- Cloud - Azure: Azure DevOps, Azure Pipelines, Azure CLI
- Behavioural: Communication and collaboration
- Cloud - AWS: Amazon IAM, AWS Secrets Manager, AWS KMS, AWS Cognito

Perks and Benefits for Irisians: At Iris Software, we offer world-class benefits designed to support the financial, health, and well-being needs of our associates and to help achieve harmony between their professional and personal growth. From comprehensive health insurance and competitive salaries to flexible work arrangements and ongoing learning opportunities, we're committed to providing a supportive and rewarding work environment. Join us and experience the difference of working at a company that values its employees' success and happiness.
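A hedged sketch of programmatic workspace administration with the Databricks SDK for Python (databricks-sdk); authentication comes from the environment or a configured profile, and feeding the output into alerting is left as a placeholder.

```python
# Hedged sketch: enumerate clusters for centralized monitoring.
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()
for cluster in w.clusters.list():
    # e.g., push name/state into your monitoring system instead of printing
    print(cluster.cluster_name, cluster.state)
```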
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
hyderabad, telangana
On-site
You will be working as a full-time, on-site Databricks Developer in Hyderabad. Your responsibilities will include designing, developing, and maintaining highly scalable, efficient data pipelines using Databricks, PySpark, and related technologies to process large-scale datasets, and collaborating with cross-functional teams to design and implement data engineering and analytics solutions. To excel in this role, you should have expertise in using Unity Catalog and the metastore, and in optimizing Databricks notebooks, Delta Lake, and DLT pipelines. You should also be experienced in using Databricks SQL, implementing highly configurable data processing solutions, and building solutions for data quality and reconciliation requirements (a hedged reconciliation sketch follows this posting). A solid understanding of data governance frameworks, policies, and best practices for data management, security, and compliance is essential, along with strong knowledge of data modelling techniques and proficiency in PySpark or Spark SQL.
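A hedged sketch of a source-vs-target reconciliation check of the kind the posting calls out; the table and column names are hypothetical.

```python
# Hedged sketch: row-count and checksum reconciliation between two tables.
from pyspark.sql import functions as F

src = spark.read.table("staging.payments")
tgt = spark.read.table("curated.payments")

def checksum(df):
    # Order-independent checksum over the business key and amount
    return df.select(F.sum(F.xxhash64("payment_id", "amount")).alias("cs")).first()["cs"]

assert src.count() == tgt.count(), "row counts diverge"
assert checksum(src) == checksum(tgt), "content checksums diverge"
```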
Posted 2 weeks ago
10.0 - 12.0 years
0 Lacs
pune, maharashtra, india
On-site
Key Result Areas and Activities:
- Design and implement Lakehouse architectures using Databricks, Delta Lake, and Apache Spark.
- Lead the development of data pipelines, ETL/ELT processes, and data integration strategies.
- Collaborate with business and technical teams to define data architecture standards, governance, and security models.
- Optimize performance and cost-efficiency of Databricks clusters and jobs.
- Provide technical leadership and mentorship to data engineers and developers.
- Integrate Databricks with cloud platforms (Azure, AWS, or GCP) and enterprise systems.
- Evaluate and recommend tools and technologies to enhance the data ecosystem.
- Ensure compliance with data privacy and regulatory requirements.
- Contribute to proposal and presales activities.

Must-have Skills:
- Expertise in data engineering, data architecture, or analytics.
- Hands-on experience with Databricks and Apache Spark.
- Hands-on experience with Snowflake.
- Strong proficiency in Python, SQL, and PySpark.
- Deep understanding of Delta Lake, Lakehouse architecture, and data mesh principles.
- Deep understanding of data governance and Unity Catalog.
- Experience with cloud platforms (Azure preferred; AWS or GCP acceptable).

Good-to-have Skills:
- Good understanding of CI/CD pipelines.
- Working experience with GitHub.
- Experience providing data engineering solutions while balancing architecture requirements, required effort, and customer-specific needs.

Qualifications:
- Bachelor's degree in computer science, engineering, or a related field.
- Demonstrated continued learning through one or more technical certifications or related methods.
- 10+ years of relevant experience in ETL tools.
- Relevant experience in the Retail domain.

Qualities:
- Proven problem-solving and troubleshooting abilities, with a high degree of adaptability; well-versed in the latest trends in the data engineering field.
- Ability to handle multiple tasks effectively, maintain a professional attitude, and work well in a team.
- Excellent interpersonal and communication skills, with a customer-focused approach and keen attention to detail.
- Ability to translate technical information into clear, business-friendly language.
- Able to work with teams and clients in different time zones and in rotational shifts.
- Research-focused mindset.
- Champion efforts to ensure appropriate development best practices, and assist with strategy and roadmap development.
Posted 2 weeks ago
6.0 - 8.0 years
8 - 10 Lacs
noida, pune, bengaluru
Hybrid
Work Mode: Hybrid (3 days WFO)
Locations: Bangalore, Noida, Pune, Mumbai, Hyderabad (candidates must be in Accion cities to collect assets and attend in-person meetings as required).

Key Requirements - Technical Skills:

Databricks Expertise:
- 5+ years of hands-on experience in data engineering/ETL using Databricks on AWS/Azure cloud infrastructure.
- Proficiency in Delta Lake, Unity Catalog, Delta Sharing, Delta Live Tables (DLT), MLflow, and Databricks SQL.
- Experience with Databricks CI/CD tools (e.g., Bitbucket, GitHub Actions, Databricks CLI).

Data Warehousing & Engineering:
- Strong understanding of data warehousing concepts (Dimensional, SCD2, Data Vault, OBT, etc.); a hedged SCD2 sketch follows this posting.
- Proven ability to implement highly performant data ingestion pipelines from multiple sources.
- Experience integrating end-to-end Databricks pipelines to ensure data quality and consistency.

Programming:
- Strong proficiency in Python and SQL.
- Basic working knowledge of API- or stream-based data extraction processes (e.g., Salesforce API, Bulk API).

Cloud Technologies:
- Preferred experience with AWS services (e.g., S3, Athena, Glue, Lambda).

Power BI:
- 3+ years of experience in Power BI and data warehousing for root cause analysis and business improvement opportunities.

Additional Skills:
- Working knowledge of data management principles (quality, governance, security, privacy, lifecycle management, cataloging).
- Nice to have: Databricks certifications and AWS Solution Architect certification.
- Nice to have: Experience building data pipelines from business applications like Salesforce, Marketo, NetSuite, Workday, etc.

Responsibilities:
- Develop, implement, and maintain highly efficient ETL pipelines on Databricks.
- Perform root cause analysis and identify opportunities for data-driven business improvements.
- Ensure quality, consistency, and governance of all data pipelines and repositories.
- Work in an Agile/DevOps environment to deliver iterative solutions.
- Collaborate with cross-functional teams to meet business requirements.
- Stay updated on the latest Databricks and AWS features, tools, and best practices.

Work Schedule: Regular: 11:00 AM to 8:00 PM. Flexibility is required for project-based overlap.

Interested candidates should share their resumes with the following details: Current CTC, Expected CTC, Preferred Location (Bangalore, Noida, Pune, Mumbai, Hyderabad), Notice Period. Contact Information:
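A hedged sketch of the SCD2 pattern listed under the data warehousing concepts above; `dim_customer` and the staged view `updates` are hypothetical. Step 1 closes the current row when a tracked attribute changes; step 2 inserts the new version.

```python
# Hedged sketch: SCD2 upsert with Delta Lake MERGE.
spark.sql("""
    MERGE INTO dim_customer t
    USING updates s
      ON t.customer_id = s.customer_id AND t.is_current = true
    WHEN MATCHED AND t.address <> s.address THEN
      UPDATE SET is_current = false, end_date = current_date()
""")
spark.sql("""
    INSERT INTO dim_customer
    SELECT s.customer_id, s.address, true, current_date(), NULL
    FROM updates s
    LEFT ANTI JOIN dim_customer t
      ON t.customer_id = s.customer_id AND t.is_current = true
""")
```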
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
hyderabad, telangana
On-site
As a Senior Azure Data Engineer, you will be responsible for designing, building, and optimizing scalable data pipelines for large-scale data processing using Python and Apache Spark. Your role will involve leveraging Azure Databricks for big data processing, working with PySpark to analyze large datasets, and utilizing Azure Data Factory for seamless data integration across cloud environments. Additionally, you will implement and maintain solutions using Azure Data Lake Storage for big data analytics and transformation. Your key responsibilities will include writing efficient and reusable Python code, working with core Azure data services like Azure SQL Database and Azure Synapse Analytics, and designing and maintaining complex SQL queries for managing relational databases. You will also collaborate with cross-functional teams to understand data requirements, ensure data management integrity and security, and provide mentorship to junior team members. To excel in this role, you should have proficiency in Python programming, hands-on experience with libraries such as Pandas and NumPy, and strong skills in SQL for querying and managing relational databases. In-depth knowledge of Azure technologies, including Azure Data Factory, Azure Cosmos DB, and Azure Blob Storage, is essential. Your problem-solving skills, analytical mindset, and excellent communication abilities will be valuable assets in this position. The ideal candidate will hold a B.Tech degree in any branch or specialization, with 3-7 years of relevant experience. Architectural experience and familiarity with infrastructure-as-code tools like Terraform or Bicep will be considered a plus. If you are passionate about data engineering, cloud platforms, and working with complex data challenges, we encourage you to apply for this exciting opportunity.
Posted 2 weeks ago
10.0 - 14.0 years
0 Lacs
karnataka
On-site
You should have over ten years of experience in software development, including recent years in a leadership position with increasing responsibilities. Your expertise should cover areas such as C# and .NET; designing and developing cloud-native solutions on platforms like Azure, AWS, and Google Cloud Platform; using Infrastructure as Code tools like Terraform and Pulumi; employing configuration management tools like Ansible and Chef; working with containerization and orchestration technologies like Docker and Kubernetes; and integrating with native and third-party Databricks solutions. Additionally, you should possess extensive experience specifically in Azure. A background in designing and implementing data security and governance platforms that comply with standards such as HIPAA and SOC 2 would be advantageous. Initially, this role requires working from the office (WFO) in Bangalore (Whitefield) for the three-month probation period; afterwards, the work mode shifts to a hybrid model.
Posted 2 weeks ago
7.0 - 12.0 years
6 - 16 Lacs
pune, bengaluru
Work from Office
Job Description: Python, SQL, ADF, and Databricks (PySpark, DLT, Unity Catalog, performance tuning, cost optimization), along with leadership experience. Databricks Associate or Professional certification.
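A hedged sketch of common Spark tuning levers behind the "performance tuning, cost optimization" line above; the values are illustrative, not recommendations, and `spark` is the ambient Databricks session.

```python
# Hedged sketch: typical tuning knobs and plan inspection.
spark.conf.set("spark.sql.adaptive.enabled", "true")                     # AQE re-plans at runtime
spark.conf.set("spark.sql.adaptive.coalescePartitions.enabled", "true")  # merge tiny shuffle partitions
spark.conf.set("spark.sql.shuffle.partitions", "256")                    # right-size shuffles for the job

df = spark.read.table("curated.transactions")
df.cache()                     # only worthwhile when reused across several actions
df.explain(mode="formatted")   # inspect the physical plan before tuning further
```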
Posted 2 weeks ago
7.0 - 11.0 years
0 Lacs
karnataka
On-site
NTT DATA strives to hire exceptional, innovative, and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking an AI Solution Architect to join our team in Bangalore, Karnataka (IN-KA), India.

Role: AI Solution Architect
Experience: 7+ Years
Notice Period: 30-60 Days

Project Overview: We are seeking a seasoned AI Architect with strong experience in Generative AI and Large Language Models (LLMs), including OpenAI, Claude, and Gemini, to lead the design, orchestration, and deployment of intelligent solutions across complex use cases. You will architect conversational systems, feedback loops, and LLM pipelines with robust data governance, leveraging the Databricks platform and Unity Catalog for enterprise-scale scalability, lineage, and compliance.

Role Scope / Deliverables:
- Architect end-to-end LLM solutions for chatbot applications, semantic search, summarization, and domain-specific assistants.
- Design modular, scalable LLM workflows including prompt orchestration, RAG (retrieval-augmented generation), vector store integration, and real-time inference pipelines (a hedged retrieval sketch follows this posting).
- Leverage Databricks Unity Catalog for: centralized governance of AI training and inference datasets; managing metadata, lineage, access controls, and audit trails; cataloging feature tables, vector embeddings, and model artifacts.
- Collaborate with data engineers and platform teams to ingest, transform, and catalog datasets used for fine-tuning and prompt optimization.
- Integrate feedback loop systems (e.g., user input, signal-driven reinforcement, RLHF) to continuously refine LLM performance.
- Optimize model performance, latency, and cost using a combination of fine-tuning, prompt engineering, model selection, and token usage management.
- Oversee secure deployment of models in production, including access control, auditability, and compliance alignment via Unity Catalog.
- Guide teams on data quality, discoverability, and responsible AI practices in LLM usage.

Key Skills:
- 7+ years in AI/ML solution architecture, with 2+ years focused on LLMs and Generative AI.
- Strong experience working with OpenAI (GPT-4/o), Claude, and Gemini, and integrating LLM APIs into enterprise systems.
- Proficiency in Databricks, including Unity Catalog, Delta Lake, MLflow, and cluster orchestration.
- Deep understanding of data governance, metadata management, and data lineage in large-scale environments.
- Hands-on experience with chatbot frameworks, LLM orchestration tools (LangChain, LlamaIndex), and vector databases (e.g., FAISS, Weaviate, Pinecone).
- Strong Python development skills, including notebooks, REST APIs, and LLM orchestration pipelines.
- Ability to map business problems to AI solutions, with strong architectural thinking and stakeholder communication.
- Familiarity with feedback loops and continuous learning patterns (e.g., RLHF, user scoring, prompt iteration).
- Experience deploying models in cloud-native and hybrid environments (AWS, Azure, or GCP).

Preferred Qualifications:
- Experience fine-tuning or optimizing open-source LLMs (e.g., LLaMA, Mistral) with tools like LoRA/QLoRA.
- Knowledge of compliance requirements (HIPAA, GDPR, SOC2) in AI systems.
- Prior work building secure, governed LLM applications in highly regulated industries.
- Background in data cataloging, enterprise metadata management, or ML model registries.
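A hedged sketch of the retrieval step in a RAG workflow like the one this role architects; `embed()` is a toy stand-in for a real embedding model (OpenAI, Gemini, etc.), and the documents are placeholders.

```python
# Hedged sketch: cosine-similarity retrieval over toy embeddings.
import numpy as np

def embed(text: str) -> np.ndarray:
    vec = np.zeros(64)
    for tok in text.lower().split():      # toy hashing "embedding"
        vec[hash(tok) % 64] += 1.0
    return vec

docs = ["refund policy: refunds within 30 days", "shipping: 5-7 business days"]
doc_vecs = np.stack([embed(d) for d in docs])

def retrieve(query: str, k: int = 1) -> list[str]:
    q = embed(query)
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q) + 1e-9)
    return [docs[i] for i in np.argsort(-sims)[:k]]

# The top-k passages are then placed into the LLM prompt as grounding context.
print(retrieve("how long do refunds take?"))
```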
About NTT DATA: NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize, and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation, and management of applications, infrastructure, and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com.
Posted 3 weeks ago
5.0 - 10.0 years
15 - 25 Lacs
hyderabad/secunderabad, bangalore/bengaluru, delhi / ncr
Hybrid
Ready to build the future with AI? At Genpact, we don't just keep up with technology, we set the pace. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's AI Gigafactory, our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, innovation-driven environment, love building and deploying cutting-edge AI solutions, and want to push the boundaries of what's possible, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

Inviting applications for the role of Principal Consultant - Databricks Developer (AWS)! In this role, the Databricks Developer is responsible for solving real-world, cutting-edge problems to meet both functional and non-functional requirements.

Responsibilities:
• Maintain close awareness of new and emerging technologies and their potential application for service offerings and products.
• Work with architects and lead engineers on solutions to meet functional and non-functional requirements.
• Demonstrate knowledge of relevant industry trends and standards.
• Demonstrate strong analytical and technical problem-solving skills.
• Must have experience in the Data Engineering domain.

Qualifications we seek in you!

Minimum qualifications:
• Bachelor's degree or equivalency (CS, CE, CIS, IS, MIS, or an engineering discipline) or equivalent work experience.
• Must have excellent coding skills in either Python or Scala, preferably Python.
• Must have implemented at least 2 projects end-to-end in Databricks.
• Must have experience with the following Databricks components (a hedged Jobs API sketch follows this posting):
  o Delta Lake
  o dbConnect
  o Databricks API 2.0
  o Databricks Workflows orchestration
• Must be well versed with the Databricks Lakehouse concept and its implementation in enterprise environments.
• Must have a good understanding of how to create complex data pipelines.
• Must have good knowledge of data structures and algorithms.
• Must be strong in SQL and Spark SQL.
• Must have strong performance optimization skills to improve efficiency and reduce cost.
• Must have worked on both batch and streaming data pipelines.
• Must have extensive knowledge of the Spark and Hive data processing frameworks.
• Must have worked on at least one cloud (Azure, AWS, GCP) and the most common services, such as ADLS/S3, ADF/Lambda, CosmosDB/DynamoDB, ASB/SQS, and cloud databases.
• Must be strong in writing unit test cases and integration tests.
• Must have strong communication skills and have worked on teams of 5+ people.
• Must have a great attitude towards learning new skills and upskilling existing ones.

Preferred Qualifications:
• Good to have Unity Catalog and basic governance knowledge.
• Good to have an understanding of Databricks SQL endpoints.
• Good to have CI/CD experience to build pipelines for Databricks jobs.
• Good to have worked on a migration project to build a unified data platform.
• Good to have knowledge of DBT.
• Good to have knowledge of Docker and Kubernetes.

Why join Genpact?
- Lead AI-first transformation: build and scale AI solutions that redefine industries.
- Make an impact: drive change for global enterprises and solve business challenges that matter.
- Accelerate your career: gain hands-on experience, world-class training, mentorship, and AI certifications to advance your skills.
- Grow with the best: learn from top engineers, data scientists, and AI experts in a dynamic, fast-moving workplace.
- Committed to ethical AI: work in an environment where governance, transparency, and security are at the core of everything we build.
- Thrive in a values-driven culture: our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress.

Come join the 140,000+ coders, tech shapers, and growth makers at Genpact and take your career in the only direction that matters: up. Let's build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
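A hedged sketch of triggering a job through the Databricks Jobs API 2.0 that the posting names; the host, token, job_id, and parameters are placeholders.

```python
# Hedged sketch: trigger a Databricks job run via the Jobs API 2.0.
import requests

host = "https://<workspace-host>"
resp = requests.post(
    f"{host}/api/2.0/jobs/run-now",
    headers={"Authorization": "Bearer <personal-access-token>"},
    json={"job_id": 123, "notebook_params": {"load_date": "2024-01-01"}},
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["run_id"])   # poll /api/2.0/jobs/runs/get with this id
```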
Posted 3 weeks ago
10.0 - 18.0 years
0 Lacs
bengaluru, karnataka, india
On-site
Role overview:

Experience: 10-18 years. 10+ years of experience in software development, with at least the last few years in a leadership role with progressively increasing responsibilities.

Extensive experience in the following areas:
- Proficient in C#, the .NET framework, and Python
- Knowledgeable in Angular / React / Node
- Designing and building cloud-native solutions (Azure, AWS, Google Cloud Platform)
- Well versed in LLM solutions, RAG, and vectorization
- Containerization and orchestration technologies (Docker, Kubernetes)
- Experience in native and third-party Databricks integrations (Delta Live Tables, Lakeflow, Databricks Workflows / Apache Airflow, Unity Catalog)
- Experience designing and implementing data security and governance platforms adhering to compliance standards (HIPAA, SOC 2) preferred

Specific Job Knowledge, Skill and Ability:
- Demonstrated success in effectively communicating at all levels of an organization
- Deep understanding and knowledge of developing products using Microsoft technologies
- Ability to lead through influence rather than direct authority
- Demonstrated successful time management and organization skills
- Ability to manage and work with a culturally diverse population
- Ability to work well and productively, always projecting a positive outlook, in a fast-paced, deadline-driven environment
- Ability to anticipate roadblocks, diagnose problems, and generate effective solutions
- Knows how to organize a software development team to maximize quality and output
- Will promote and encourage opportunities for personal and professional growth in employees
- Understands how to use metrics to drive process improvements

What would you do here? Duties and Responsibilities:
- Function as a key member of the software engineering team, leading software engineering development initiatives for LogixHealth's internally developed applications
- Provide decisive and effective technical leadership for all development efforts
- Develop solution blueprints for all new features and products, bringing in design patterns and the latest development guidelines
- Lead, mentor, and advise the software engineering team; drive technical debt reduction, design and code reviews, and best-practice development
- Consult with the infra team on architecture and cost optimization for applications
- Lead modernization of the application and data platform, introducing automation and AI use across all platforms and development methodologies
- Lead and contribute to the creation of self-service platforms for software development, infrastructure, and data analytics
- Be hands-on with complex development tasks, prototypes, and troubleshooting
- Collaborate with engineers, product, and business leaders to ensure platforms are integrated with other systems and technologies
- Participate in the development process by identifying potential weak points
- Lead solutions development using technical judgment, input from experts, and the involvement of other systems development partners as appropriate
- Clearly and consistently communicate the product vision to the team, and guide the team to achieve it
Posted 3 weeks ago
5.0 - 9.0 years
0 Lacs
pune, maharashtra
On-site
As a highly skilled Solution Architect with deep expertise in Databricks Lakehouse and proven experience operationalizing unstructured document AI pipelines in regulated industries, you will play a crucial role in designing and leading end-to-end architectures that transform complex, high-volume documents into governed, queryable knowledge graphs inside Databricks. This transformation enables automation, compliance, and downstream AI applications, providing Databricks customers with reliable and trustworthy solutions. Your responsibilities will include leading the design and implementation of in-Lakehouse pipelines for unstructured and structured data, utilizing Delta Live Tables, Unity Catalog, and MLflow. Additionally, you will architect solutions for ingestion, OCR, and LLM-based parsing of scanned PDFs, legal/medical records, and complex forms. Designing confidence-tiered pipelines (a hedged routing sketch follows this posting), translating extracted entities and relationships into graph-friendly Delta Gold tables, defining security, lineage, and classification rules, and integrating Databricks outputs with downstream applications will also be key aspects of your role. Collaboration and mentorship are essential: you will work closely with data engineers, ML engineers, and domain SMEs to translate business requirements into scalable architectures, and mentor teams on Databricks and document AI best practices to ensure successful project execution. To excel in this role, you should have 5+ years of experience in solution or data architecture, with at least 2 years delivering Databricks-based solutions. Hands-on experience with Unity Catalog, Delta Live Tables, Databricks SQL, and Partner Connect integrations is crucial, as are expertise in designing Lakehouse architectures for structured and unstructured data, an understanding of OCR integration patterns, experience with confidence scoring and human-in-the-loop patterns, familiarity with knowledge graph concepts, strong SQL skills, and knowledge of cloud data ecosystems. Preferred qualifications include Databricks certifications, experience delivering document AI pipelines in regulated verticals, familiarity with data mesh or federated governance models, and a background in MLOps and continuous improvement for extraction models. If you are passionate about leveraging technology to drive digital transformation and are eager to contribute to a diverse and empathetic work environment, we welcome your application for this exciting opportunity in Pune, India.
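A hedged sketch of the confidence-tiered routing described above: high-confidence extractions auto-publish, mid-confidence ones are flagged, and the rest go to human review. The thresholds and field names are hypothetical.

```python
# Hedged sketch: route document-AI extractions by confidence tier.
def route(extraction: dict) -> str:
    conf = extraction["confidence"]
    if conf >= 0.95:
        return "gold"            # auto-publish to the governed Delta table
    if conf >= 0.70:
        return "silver_review"   # publish but flag for sampled QA
    return "human_in_loop"       # queue for manual correction

print(route({"entity": "patient_name", "value": "J. Doe", "confidence": 0.82}))
```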
Posted 4 weeks ago
3.0 - 14.0 years
0 Lacs
pune, maharashtra
On-site
As a Director / Associate Director in Cloud Engineering (Azure Stack), you will be responsible for leading multiple client engagements, driving end-to-end project delivery, and managing high-performing engineering teams specializing in Databricks. The role involves hands-on technical expertise, project and people management skills, and a proven track record of delivering large-scale data engineering solutions in cloud-native environments. With at least 14 years of experience in data engineering, including 3+ years in leadership or director-level roles, you will be expected to have a strong background in Databricks, Delta Lake, and cloud data architecture. Your responsibilities will include overseeing and delivering multiple Databricks-based data engineering projects, managing project budgets and staffing, mentoring engineering teams, and collaborating with clients on architecture and strategy. To excel in this role, you must possess strong hands-on experience with Databricks, deep expertise in Azure data services, and a background in building scalable ETL/ELT pipelines. Your qualifications should include full-platform expertise in Databricks, proficiency in Azure cloud environments, and hands-on experience with Spark, PySpark, and Python for scalable data processing. Additionally, you should be adept at designing data warehouses or lakehouses, implementing Unity Catalog, and working with business teams to build dashboards. Experience with Power BI or similar reporting tools will also be beneficial. If you have excellent communication and leadership skills in fast-paced environments, a proven track record of project delivery and client success, and a passion for driving innovation in cloud engineering, we invite you to apply for this challenging and rewarding role.
Posted 4 weeks ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
As a Data Architect in the Utilities domain, you will lead the design and implementation of modern, scalable, and secure enterprise data architectures. Your role involves defining data strategies, creating architecture blueprints, and guiding delivery teams to ensure alignment with business objectives and technical standards. You must possess strong expertise in data modeling, platform architecture, and integration patterns, with the ability to engage both technical and business stakeholders. While your primary focus will be on data architecture, experience in data governance frameworks or utilities analytics will be considered an added advantage. Your responsibilities include defining target-state data architectures and roadmaps aligned with enterprise and utilities-industry best practices, and conducting current-state assessments of data platforms, integration patterns, and data management processes. You will also develop conceptual, logical, and physical data models for key domains such as customer, billing, asset, and metering. Furthermore, you will establish reference architectures for data ingestion, transformation, storage, and consumption in cloud/hybrid environments, collaborating with engineering teams to ensure designs are implementable, performant, and compliant. Partnering with business teams to translate requirements into scalable, governed data solutions and advising on technology selection for data platforms, integration, and analytics are also crucial aspects of the role. In terms of professional and technical skills, you must have a strong background in enterprise data architecture and strategy definition, along with proven experience in the Utilities sector (electricity, gas, or water). Proficiency in conceptual, logical, and physical data modeling; experience with cloud-based data platforms (AWS, Azure, or GCP), data lakes, warehouses, and orchestration tools; and an understanding of cloud-native architectures, data lakehouses, and hybrid integration patterns are essential. Additionally, knowledge of data governance frameworks and tools (Collibra, Purview, Unity Catalog), experience in data product design and federated ownership models, familiarity with analytics in the utilities domain such as asset performance and risk modeling, and awareness of SCADA, OMS, PI Historian, and other operational systems are considered good-to-have skills. The ideal candidate will combine deep Utilities domain knowledge with strong data architecture expertise. This is a client-facing position involving strategic architecture work, with the ability to contribute to delivery in technical projects as required.
Posted 1 month ago
2.0 - 6.0 years
0 Lacs
maharashtra
On-site
You will be responsible for building and maintaining secure, scalable data pipelines using Databricks and Azure. This includes handling ingestion from various sources such as files, APIs, and streaming, performing data transformation, and ensuring quality validation. Collaboration with subsystem data science and product teams will be essential to prepare for machine learning readiness. Your technical expertise should cover working with notebooks (SQL, Python), Delta Lake, Unity Catalog, ADLS/S3, job orchestration, APIs, structured logging, and Infrastructure as Code (IaC) tools like Terraform. Delivery practices such as trunk-based development, test-driven development (TDD), Git, and CI/CD for notebooks and pipelines are expected (a hedged TDD sketch follows this posting). You should also be familiar with integrating different data formats, such as JSON, CSV, XML, and Parquet, and various databases, including SQL, NoSQL, and graph databases. Strong communication skills are vital to justify decisions, document architecture, and align with enabling teams. In this role, you will have the opportunity to engage in Proximity Talks, where you can interact with other designers, engineers, and product experts to enhance your knowledge. Working with a world-class team will enable you to constantly challenge yourself and acquire new skills daily. This is a contract position based in Abu Dhabi; if relocation from India is necessary, the company will cover travel and accommodation expenses on top of your salary. Proximity is a globally recognized technology, design, and consulting partner for leading Sports, Media, and Entertainment companies. Headquartered in San Francisco, with offices in Palo Alto, Dubai, Mumbai, and Bangalore, Proximity has been instrumental in creating and scaling high-impact products used by 370 million daily users, with a combined net worth of $45.7 billion among client companies. As part of the Proximity team, you will work alongside a diverse group of coders, designers, product managers, and experts who solve complex challenges and develop cutting-edge technology at scale. The rapidly growing team of Proxonauts offers you the opportunity to make a significant impact on the company's success, and you will have the chance to collaborate with experienced leaders who have led multiple tech, product, and design teams. To learn more about us, watch our CEO, Hardik Jagda, share insights about Proximity; explore our values and meet our team members; visit our website, blog, and Studio Proximity; and get an insider perspective on Instagram by following @ProxWrks and @H.Jagda.
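A hedged sketch of the TDD practice the posting expects, using pytest against a pure transformation; the function and its rule are hypothetical.

```python
# Hedged sketch: test-first development of a small data-cleaning function.
import pytest

def normalize_amount(raw: str) -> float:
    # Strip currency symbols and thousands separators: "$1,200.50" -> 1200.5
    return float(raw.replace("$", "").replace(",", "").strip())

@pytest.mark.parametrize("raw,expected", [
    ("$1,200.50", 1200.50),
    ("99", 99.0),
])
def test_normalize_amount(raw, expected):
    assert normalize_amount(raw) == expected
```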
Posted 1 month ago