4.0 - 9.0 years
10 - 20 Lacs
Kochi
Work from Office
Greetings from Cognizant! #MegaWalkIn We have an exciting opportunity for the #Databricks Developer role with Cognizant. Join us if you match the criteria below.
Primary Skill: Databricks Developer
Experience: 4-9 years
Job Location: PAN India
Interview Day: 14 Jun 2025 (Saturday)
Interview Location: Kochi
Interview Mode: Walk-in Drive (Cognizant Technology Solutions, Infopark Phase 2, Kakkanad, Kochi, Kerala 682030)
Interested candidates, apply here: https://forms.office.com/r/8SEVaz0XEy
Regards, Vinosha TAG-HR
Posted 1 month ago
3.0 - 6.0 years
9 - 12 Lacs
Pune
Hybrid
Engineer - Investment Data Platform - 3+ Years - Pune. We are hiring a skilled Engineer to join the Investment Data Platform team in Pune, in financial services. If you're passionate about data software and engineering and delivering high-quality software solutions using Azure and .NET technologies, this opportunity is for you.
Location: Pune
Your Future Employer: Our client is a leading financial services firm with a global presence. They are committed to creating an inclusive and diverse workplace where all employees feel valued and have the opportunity to reach their full potential.
Responsibilities: Developing and maintaining software solutions aligned with business outcomes. Collaborating within agile teams to review user stories and implement features. Maintaining existing data platform artefacts and contributing to continuous improvement. Building scalable, robust software adhering to data engineering best practices. Supporting development of data ingestion, modeling, transformation, and deployment pipelines.
Requirements: 3+ years of experience in software engineering and 2+ years in data engineering. Proficiency in C#, .NET Framework, and SQL; exposure to Python, Java, PowerShell, or JavaScript. Experience with Azure Data Factory, CI/CD pipelines, and DevOps principles. Strong interpersonal and communication skills. Bachelor's degree in computer science, engineering, finance, or a related field.
What is in it for you: Join a high-performing team at a global investment leader. Exposure to cutting-edge Azure data platform technologies. Competitive compensation with hybrid work flexibility.
Reach us: If you think this role is aligned with your career, kindly write to me along with your updated CV at aayushi.goyal@crescendogroup.in for a confidential discussion on the role.
Disclaimer: Crescendo Global specializes in senior to C-level niche recruitment.
We are passionate about empowering job seekers and employers with an engaging and memorable job search and leadership hiring experience. Crescendo Global does not discriminate based on race, religion, color, origin, gender, sexual orientation, age, marital status, veteran status, or disability status. Note: We receive a lot of applications daily, so it may not be possible to respond to each one individually. Please assume that your profile has not been shortlisted if you don't hear from us within a week. Thank you for your understanding. Scammers can misuse Crescendo Global's name for fake job offers. We never ask for money, purchases, or system upgrades. Verify all opportunities at www.crescendo-global.com and report fraud immediately. Stay alert! Profile Keywords: Azure Data Engineering, C# Developer, .NET Engineer, SQL Data Engineer, DevOps Data, Databricks Jobs, Data Ingestion Engineer, Financial Services Tech Jobs, Asset Management IT, Financial Services
Posted 1 month ago
8.0 - 13.0 years
20 - 35 Lacs
Hyderabad
Remote
Databricks Administrator (Azure/AWS) | Remote | 6+ Years
Job Description: We are seeking an experienced Databricks Administrator with 6+ years of expertise in managing and optimizing Databricks environments. The ideal candidate should have hands-on experience with Azure/AWS Databricks, cluster management, security configurations, and performance optimization. This role requires close collaboration with data engineering and analytics teams to ensure smooth operations and scalability.
Key Responsibilities: Deploy, configure, and manage Databricks workspaces, clusters, and jobs. Monitor and optimize Databricks performance, auto-scaling, and cost management. Implement security best practices, including role-based access control (RBAC) and encryption. Manage Databricks integration with cloud storage (Azure Data Lake, S3, etc.) and other data services. Automate infrastructure provisioning and management using Terraform, ARM templates, or CloudFormation. Troubleshoot Databricks runtime issues, job failures, and performance bottlenecks. Support CI/CD pipelines for Databricks workloads and notebooks. Collaborate with data engineering teams to enhance ETL pipelines and data processing workflows. Ensure compliance with data governance policies and regulatory requirements. Maintain and upgrade Databricks versions and libraries as needed.
Required Skills & Qualifications: 6+ years of experience as a Databricks Administrator or in a similar role. Strong knowledge of Azure/AWS Databricks and cloud computing platforms. Hands-on experience with Databricks clusters, notebooks, libraries, and job scheduling. Expertise in Spark optimization, data caching, and performance tuning. Proficiency in Python, Scala, or SQL for data processing. Experience with Terraform, ARM templates, or CloudFormation for infrastructure automation. Familiarity with Git, DevOps, and CI/CD pipelines. Strong problem-solving skills and ability to troubleshoot Databricks-related issues.
Excellent communication and stakeholder management skills. Preferred Qualifications: Databricks certifications (e.g., Databricks Certified Associate/Professional). Experience in Delta Lake, Unity Catalog, and MLflow. Knowledge of Kubernetes, Docker, and containerized workloads. Experience with big data ecosystems (Hadoop, Apache Airflow, Kafka, etc.). Email: Hrushikesh.akkala@numerictech.com. Phone/WhatsApp: 9700111702. For immediate response and further opportunities, connect with me on LinkedIn: https://www.linkedin.com/in/hrushikesh-a-74a32126a/
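As a rough illustration of the provisioning automation this role describes, the sketch below assembles a request body for the Databricks Clusters API (POST /api/2.0/clusters/create). The field names follow the documented API, but the runtime label, node type, and sizing values are illustrative assumptions, not values from the posting:

```python
import json

def build_cluster_spec(name, node_type, workers_min, workers_max):
    """Assemble a request body for the Databricks Clusters API
    (POST /api/2.0/clusters/create). Field names follow the public
    API; all values here are placeholders for illustration."""
    return {
        "cluster_name": name,
        "spark_version": "13.3.x-scala2.12",  # assumed runtime label
        "node_type_id": node_type,
        "autoscale": {                        # auto-scaling, per the role
            "min_workers": workers_min,
            "max_workers": workers_max,
        },
        "autotermination_minutes": 30,        # cost control: stop idle clusters
    }

spec = build_cluster_spec("etl-nightly", "Standard_DS3_v2", 2, 8)
print(json.dumps(spec, indent=2))
```

In practice this payload would be sent with an authenticated HTTP client, or managed declaratively through Terraform as the posting suggests.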
Posted 1 month ago
5.0 - 9.0 years
10 - 20 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
We are looking for Azure Data Engineer resources with a minimum of 5 to 9 years of experience. To apply, use the link below: https://career.infosys.com/jobdesc?jobReferenceCode=INFSYS-EXTERNAL-210775&rc=0
Role & responsibilities: A blend of technical expertise (5 to 9 years of experience), analytical problem-solving, and collaboration with cross-functional teams. Design and implement Azure data engineering solutions (ingestion and curation). Create and maintain Azure data solutions including Azure SQL Database, Azure Data Lake, and Azure Blob Storage. Design, implement, and maintain data pipelines for data ingestion, processing, and transformation in Azure. Using Azure Data Factory or comparable technologies, create and maintain ETL (Extract, Transform, Load) operations. Use Azure Data Factory and Databricks to assemble large, complex data sets. Implement data validation and cleansing procedures to ensure the quality, integrity, and dependability of the data. Ensure data quality, security, and compliance. Optimize Azure SQL databases for efficient query performance. Collaborate with data engineers and other stakeholders to understand requirements and translate them into scalable and reliable data platform architectures.
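A minimal sketch of the kind of data validation and cleansing procedure mentioned above, in plain Python (the record shape and field names are hypothetical; in a Databricks pipeline the same logic would typically be expressed in PySpark):

```python
def cleanse(rows, key):
    """Basic validation/cleansing pass: drop records missing the
    business key, strip whitespace, and de-duplicate on the key
    (keeping the first occurrence)."""
    seen, clean = set(), []
    for row in rows:
        value = (row.get(key) or "").strip()
        if not value or value in seen:
            continue  # reject nulls/blanks and duplicates
        seen.add(value)
        clean.append({**row, key: value})
    return clean

raw = [
    {"id": "A1 ", "amount": 10},
    {"id": None,  "amount": 5},   # missing key -> rejected
    {"id": "A1",  "amount": 10},  # duplicate   -> rejected
    {"id": "B2",  "amount": 7},
]
print(cleanse(raw, "id"))  # keeps A1 (trimmed) and B2
```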
Posted 1 month ago
7.0 - 12.0 years
25 - 40 Lacs
Mohali
Work from Office
Overview Greystar is looking for dedicated and hard-working individuals who want to help us continue to be the best at what we do. Today, we are the largest rental housing operator and developer in the US and one of the largest global investment management companies, delivering industry-leading services to investors, clients, and residents. We offer unrivaled professional development and career growth opportunities to our team members and look forward to welcoming you to Greystar, where our people are what make us the Global Leader in Rental Housing. Job Responsibilities About the role: We are seeking a Senior Data Engineer skilled in Databricks, Python, Scala, Azure Synapse and Azure Data Factory to join our team of data engineers within Greystar Information Technology. This team serves Greystar by ingesting data from multiple sources, making it available to internal stakeholders, and by interfacing with and exchanging data between a variety of internal and external systems. You will be responsible for building and enhancing our Enterprise Data Platform (EDP) which is built within the Azure cloud and utilizes modern processes and technologies such as Databricks, Synapse, Azure Data Factory (ADF), ADLS Gen2 Data Lake, Azure DevOps and CI/CD pipelines. You will develop, deploy and troubleshoot complex data ingestion pipelines and processes. Your curious mind and attention to detail will be an asset, as will your extensive knowledge and experience in the data engineering space. 
JOB DESCRIPTION How you will make an impact: Design, develop, optimize, and maintain data architecture and pipelines that adhere to ETL principles and business goals. Collaborate with data engineers, data consumers, and other team members to come up with simple, functional, and elegant solutions that balance data needs across the organization. Solve complex data problems to deliver insights that help the organization achieve its goals. Create data products that will be used throughout the organization. Advise, consult, mentor, and coach other data and analytics professionals on data standards and practices. Foster a culture of sharing, re-use, design for scale, stability, and operational efficiency of data and analytic solutions. Develop and deliver documentation on data engineering capabilities, standards, and processes; participate in coaching, mentoring, design reviews, and code reviews. Partner with business analysts and solutions architects to develop technical architectures for strategic enterprise projects and initiatives.
Deliver awesome code. Technical Qualifications: 7+ years of relevant and progressive data engineering experience. Deep technical knowledge and experience in Databricks, Python, Scala, and the Microsoft Azure architecture and platform, including Synapse, ADF (Azure Data Factory) pipelines, and Synapse stored procedures. Hands-on experience working with data pipelines using a variety of source and target locations (e.g., Databricks, Synapse, SQL Server, Data Lake, file-based, SQL and NoSQL databases). Experience in engineering practices such as development, code refactoring, leveraging design patterns, CI/CD, and building highly scalable data applications and processes. Experience developing batch ETL pipelines; real-time pipelines are a plus. Knowledge of advanced data engineering concepts such as dimensional modeling, ETL, data governance, and data warehousing involving structured and unstructured data. Thorough knowledge of Synapse and SQL Server, including T-SQL and stored procedures. Experience working with and supporting cross-functional teams in a dynamic environment. A successful history of manipulating, processing, and extracting value from large, disconnected datasets. Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement. Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL), and working familiarity with a variety of databases.
Knowledge and understanding of Boomi is a plus. Additional Qualifications and Experience: Excellent problem-solving skills and experience. Effective communication skills. Strong collaboration skills. A "self-starter" attitude and the ability to make decisions with minimal guidance from others. Innovative and passionate about your work and the work of your teammates. Ability to comprehend and analyze operational systems and ask appropriate questions to determine how to improve, migrate, or modify the solution to meet business needs. Experience with data ingestion and engineering, specifically involving large data volumes. Knowledge of CI/CD release pipelines is a plus. Understanding of Python and knowledge of parallel processing frameworks like MapReduce, Spark, and Scala. Knowledge of the Agile development process. Education: Bachelor's degree in computer science, information technology, business management information systems, or equivalent experience.
Posted 1 month ago
2.0 - 7.0 years
6 - 16 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Exciting Azure Developer job opportunity at Infosys! We are looking for skilled Azure Developers to join our dynamic team PAN India. If you have a passion for technology and a minimum of 2 to 9 years of hands-on experience in Azure development, this is your chance to make an impact. At Infosys, we value innovation, collaboration, and diversity. We believe that a diverse workforce drives creativity and fosters a richer company culture. Therefore, we strongly encourage applications from all genders and backgrounds. Ready to take your career to the next level? Join us in shaping the future of technology. Visit our careers page for more details on how to apply.
Posted 1 month ago
8.0 - 13.0 years
15 - 30 Lacs
Hyderabad
Work from Office
Role & responsibilities: Details on tech stack: Databricks, Python, PySpark, Snowflake, SQL.
Minimum requirements for the candidate: Advanced SQL queries, scripts, stored procedures, materialized views, and views. Focus on ELT to load data into the database and perform transformations in-database. Ability to use analytical SQL functions. Snowflake experience. Cloud data warehouse solutions experience (Snowflake, Azure DW, or Redshift); data modeling, analysis, programming. Experience with DevOps models utilizing a CI/CD tool. Work in a hands-on cloud environment on the Azure Cloud Platform (ADLS, Blob). Airflow.
GD Requirements: Good interpersonal skills; comfort and competence in dealing with different teams within the organization. Requires an ability to interface with multiple constituent groups and build sustainable relationships. Strong and effective communication skills (verbal and written). Strong analytical and problem-solving skills. Experience of working in a matrix organization. Proactive problem solver. Ability to prioritize and deliver. Results-oriented, flexible, adaptable. Works well independently and can lead a team. Versatile, creative temperament; ability to think out of the box while defining sound and practical solutions. Ability to master new skills. Familiar with Agile practices and methodologies. Professional data engineering experience focused on batch and real-time data pipelines using Spark, Python, and SQL. Data warehouse (data modeling, programming). Experience working with Snowflake. Experience working in a cloud environment, preferably Microsoft Azure. Cloud data warehouse solutions (Snowflake, Azure DW).
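As a small illustration of the "analytical SQL functions" requirement, the window-function query below runs against an in-memory SQLite table standing in for a warehouse table (the table and column names are made up for the example; warehouses like Snowflake support the same ROW_NUMBER() syntax):

```python
import sqlite3

# In-memory stand-in for a warehouse table; window functions such as
# ROW_NUMBER() are typical "analytical SQL functions".
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (region TEXT, amount INT)")
con.executemany("INSERT INTO orders VALUES (?, ?)",
                [("east", 50), ("east", 90), ("west", 70), ("west", 40)])

# Top order per region: rank within each partition, keep rank 1.
top_per_region = con.execute("""
    SELECT region, amount FROM (
        SELECT region, amount,
               ROW_NUMBER() OVER (PARTITION BY region
                                  ORDER BY amount DESC) AS rn
        FROM orders
    ) WHERE rn = 1
    ORDER BY region
""").fetchall()
print(top_per_region)  # [('east', 90), ('west', 70)]
```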
Posted 1 month ago
10.0 - 15.0 years
11 - 15 Lacs
Hyderabad, Coimbatore
Work from Office
Azure + SQL + ADF + Databricks + design + architecture (mandatory). Total experience of 10+ years in the data management area, with Azure cloud data platform experience. Architect with the Azure stack (ADLS, AALS, Azure Databricks, Azure Stream Analytics, Azure Data Factory, Cosmos DB, and Azure Synapse), with mandatory expertise in Azure Stream Analytics, Databricks, Azure Synapse, and Azure Cosmos DB. Must have worked on a large Azure data platform and dealt with high-volume Azure Stream Analytics. Experience in designing cloud data platform architecture and designing large-scale environments. 5+ years of experience architecting and building cloud data lakes, specifically Azure data analytics technologies and architecture, enterprise analytics solutions, and optimizing real-time 'big data' pipelines, architectures, and data sets is desired.
Posted 1 month ago
8.0 - 13.0 years
15 - 30 Lacs
Hyderabad
Hybrid
Job Description: Advanced SQL queries, scripts, stored procedures, materialized views, and views. Focus on ELT to load data into the database and perform transformations in-database. Ability to use analytical SQL functions. Snowflake experience. Cloud data warehouse solutions experience (Snowflake, Azure DW, or Redshift); data modeling, analysis, programming. Experience with DevOps models utilizing a CI/CD tool. Work in a hands-on cloud environment on the Azure Cloud Platform (ADLS, Blob). Airflow.
Preferred candidate profile: Good interpersonal skills; comfort and competence in dealing with different teams within the organization. Requires an ability to interface with multiple constituent groups and build sustainable relationships. Strong and effective communication skills (verbal and written). Strong analytical and problem-solving skills. Experience of working in a matrix organization. Proactive problem solver. Ability to prioritize and deliver. Results-oriented, flexible, adaptable. Works well independently and can lead a team. Versatile, creative temperament; ability to think out of the box while defining sound and practical solutions. Ability to master new skills. Familiar with Agile practices and methodologies. Professional data engineering experience focused on batch and real-time data pipelines using Spark, Python, and SQL. Data warehouse (data modeling, programming). Experience working with Snowflake. Experience working in a cloud environment, preferably Microsoft Azure. Cloud data warehouse solutions (Snowflake, Azure DW).
Posted 1 month ago
3.0 - 6.0 years
4 - 7 Lacs
Chennai
Work from Office
Skills: Azure Data Factory, Azure Databricks, Azure SQL Database, Synapse Analytics, Logic Apps, Azure Functions, Azure Analysis Services, Active Directory, Azure DevOps, Python, PySpark.
Posted 1 month ago
5.0 - 8.0 years
0 - 20 Lacs
Hyderabad, Bengaluru
Work from Office
Roles and Responsibilities: Design, develop, and maintain large-scale data pipelines using Azure Data Factory (ADF) to extract, transform, and load data from various sources into Azure Databricks. Collaborate with cross-functional teams to understand business requirements and design scalable solutions for big data processing using PySpark on Azure Databricks. Develop complex SQL queries to optimize database performance and troubleshoot issues in Azure SQL databases. Ensure high availability of critical systems by implementing monitoring tools such as Prometheus and Grafana.
Job Requirements: Experience in designing and developing large-scale data pipelines using ADF or similar technologies. Strong proficiency in the Python programming language, with experience working with libraries like Pandas, NumPy, etc. Experience working with the Azure Databricks platform, including creating clusters, managing workloads, and optimizing resource utilization. Proficiency in writing complex SQL queries for querying relational databases.
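The extract-transform-load flow this role centers on can be sketched in plain Python; the functions below are simplified stand-ins for an ADF copy activity, a PySpark transformation, and a write to a curated zone, with hypothetical field names:

```python
def extract(source):
    """Stand-in for an ADF copy activity: pull raw records."""
    return list(source)

def transform(records):
    """Stand-in for a PySpark job: filter bad rows, derive a column."""
    return [
        {**r, "total": r["qty"] * r["unit_price"]}
        for r in records
        if r["qty"] > 0  # basic validation: drop empty orders
    ]

def load(records, sink):
    """Stand-in for writing to a curated zone (e.g. a Delta table)."""
    sink.extend(records)
    return len(records)

source = [{"qty": 2, "unit_price": 5.0}, {"qty": 0, "unit_price": 9.0}]
curated = []
loaded = load(transform(extract(source)), curated)
print(loaded, curated)  # 1 row survives, with derived total = 10.0
```

In a real pipeline each stage would be a separately monitored activity, which is where tools like Prometheus and Grafana come in.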
Posted 1 month ago
10.0 - 14.0 years
20 - 30 Lacs
Noida, Delhi / NCR
Work from Office
Solid understanding of data pipeline architecture, cloud infrastructure, and best practices in data engineering. Excellent problem-solving skills and attention to detail. Ability to work independently and collaborate effectively in a team environment. Skilled in independently analyzing large datasets, identifying discrepancies and inconsistencies, and recommending corrective actions. Demonstrated expertise in working with SQL Server, Oracle, Azure SQL databases, and APIs. Experience with at least one programming language (Python, Java, C#, etc.). Hands-on experience with Azure Data Factory (ADF), Logic Apps, and Runbooks. Familiarity with the Azure cloud platform and PowerShell scripting. Strong problem-solving and analytical skills. Excellent communication and teamwork abilities, with experience engaging stakeholders at all levels. Capable of managing and adjusting to evolving priorities from multiple projects. Mandatory Skills: SQL, Python, Apache Spark, Databricks, Azure Data Factory, SQL Server, Azure SQL Database, ETL, PowerShell scripting.
Posted 1 month ago
4.0 - 9.0 years
7 - 12 Lacs
Bengaluru
Work from Office
Lumen Technologies is a global technology company that delivers innovative communication and network solutions. Our mission is to empower businesses and individuals to connect, grow, and thrive in the digital age. With a focus on customer experience and operational excellence, we strive to provide cutting-edge solutions that meet the evolving needs of our customers. We're looking for a Senior Data Analyst with a strong foundation in Azure-based data engineering and machine learning to design, develop, and optimize robust data pipelines, applications, and analytics infrastructure. This role demands deep technical expertise, cross-functional collaboration, and the ability to align data solutions with dynamic business needs.
Key Responsibilities:
Data Pipeline Development: Design and implement efficient data pipelines using Azure Databricks with PySpark to transform and process large datasets. Optimize data workflows for scalability, reliability, and performance.
Application Integration: Collaborate with cross-functional teams to develop APIs using the .NET Framework for Azure Web Application integration. Ensure smooth data exchange between applications and downstream systems.
Data Warehousing and Analytics: Build and manage data warehousing solutions using Synapse Analytics and Azure Data Factory (ADF). Develop and maintain reusable and scalable data models to support business intelligence needs.
Automation and Orchestration: Utilize Azure Logic Apps, Function Apps, and Azure DevOps to automate workflows and streamline deployments. Implement CI/CD pipelines for efficient code deployment and testing.
Infrastructure Management: Oversee Azure infrastructure management and maintenance, ensuring a secure and optimized environment. Provide support for performance tuning and capacity planning.
Business Alignment: Gain a deep understanding of AMO data sources and their business implications. Work closely with stakeholders to provide customized solutions aligned with business needs.
BAU Support:
Monitor and support data engineering workflows and application functionality in BAU mode. Troubleshoot and resolve production issues promptly to ensure business continuity.
Technical Expertise: Proficiency in Microsoft SQL for complex data queries and database management. Advanced knowledge of Azure Databricks and PySpark for data engineering and ETL processes. Experience with Azure Data Factory (ADF) for orchestrating data workflows. Expertise in Azure Synapse Analytics for data integration and analytics. Proficiency in the .NET Framework for API development and integration.
Cloud and DevOps Skills: Strong experience in Azure infrastructure management and optimization. Hands-on knowledge of Azure Logic Apps, Function Apps, and Azure DevOps for CI/CD automation.
"We are an equal opportunity employer committed to fair and ethical hiring practices. We do not charge any fees or accept any form of payment from candidates at any stage of the recruitment process. If anyone claims to offer employment opportunities in our company in exchange for money or any other benefit, please treat it as fraudulent and report it immediately."
Posted 1 month ago
3.0 - 5.0 years
4 - 8 Lacs
Pune
Work from Office
Capgemini Invent: Capgemini Invent is the digital innovation, consulting, and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science, and creative design to help CxOs envision and build what's next for their businesses.
Your Role: Data pipeline implementation experience with any of these cloud providers: AWS, Azure, GCP. Experience with cloud storage, cloud databases, cloud data warehousing, and data lake solutions like Snowflake, BigQuery, AWS Redshift, ADLS, and S3. Good knowledge of cloud compute services and load balancing. Good knowledge of cloud identity management, authentication, and authorization. Proficiency in using cloud utility functions such as AWS Lambda, AWS Step Functions, Cloud Run, Cloud Functions, and Azure Functions. Experience in using cloud data integration services for structured, semi-structured, and unstructured data, such as Azure Databricks, Azure Data Factory, Azure Synapse Analytics, AWS Glue, AWS EMR, Dataflow, and Dataproc.
Your Profile: Good knowledge of infra capacity sizing and costing of cloud services to drive optimized solution architecture, leading to optimal infra investment vs. performance and scaling. Able to contribute to making architectural choices using various cloud services and solution methodologies. Expertise in programming using Python. Very good knowledge of cloud DevOps practices such as infrastructure as code, CI/CD components, and automated deployments on the cloud. Must understand networking, security, design principles, and best practices in the cloud.
What you will love about working here: We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance. At the heart of our mission is your career growth.
Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as generative AI. About Capgemini: Capgemini is a global business and technology transformation partner, helping organizations accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, cloud, and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.
Posted 1 month ago
5.0 - 10.0 years
5 - 15 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Important points to note before you apply for this job:
1. This is a third-party payroll opportunity: the consultant will be on the payroll of Zentek Infosoft and working for the end client of Infosys. Apply ONLY if you are fine working as a subcontractor on a third-party payroll.
2. This is a LONG-TERM opportunity with a strong chance of being converted to a full-time opportunity with Infosys after a few months.
3. One round of face-to-face interview and a client interview is a MUST for this role. If you cannot attend an F2F interview, please DO NOT apply.
4. This is a work-from-office opportunity, and the person is expected to work from any Infosys development center: Bangalore, Pune, Chennai, Hyderabad, Noida, etc. This is NOT a hybrid or WFH opportunity.
Technology Lead - 6 positions. MUST-HAVE SKILLS: Azure Data Factory + Databricks + SQL. Total experience > 8+ years; relevant > 5 years. Abhishek.Sharma@ZentekInfosoft.com
Posted 1 month ago
7.0 - 12.0 years
0 - 2 Lacs
Chennai
Work from Office
Candidates must have 3 years of relevant experience in the following key skills: Azure Databricks, Azure Data Factory, Python, and SQL. We are looking for immediate joiners only, and first priority is given to candidates from tier-one companies. Only the Chennai location is available. Required Candidate profile: Azure Data Engineer, Azure Solutions Architect, Azure Data Factory
Posted 1 month ago
15.0 - 20.0 years
18 - 22 Lacs
Hyderabad
Work from Office
Project Role: Data Platform Architect. Project Role Description: Architects the data platform blueprint and implements the design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models. Must-have skills: Microsoft Azure Data Services. Good-to-have skills: Microsoft SQL Server, Python (programming language), Microsoft Azure Databricks. Minimum 5 year(s) of experience is required. Educational Qualification: 15 years of full-time education.
Summary: As a Data Platform Architect, you will be responsible for architecting the data platform blueprint and implementing the design, which includes various data platform components. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure seamless integration between systems and data models, while also addressing any challenges that arise during the implementation process. You will engage in discussions with stakeholders to gather requirements and provide insights that drive the overall architecture of the data platform, ensuring it meets the needs of the organization effectively.
Roles & Responsibilities: Expected to be an SME. Collaborate with and manage the team to perform. Responsible for team decisions. Engage with multiple teams and contribute to key decisions. Provide solutions to problems for their immediate team and across multiple teams. Facilitate knowledge-sharing sessions to enhance team capabilities. Develop and maintain documentation related to architecture and design decisions.
Professional & Technical Skills: Must-have skills: Proficiency in Microsoft Azure Data Services. Good-to-have skills: Experience with Microsoft Azure Databricks, Microsoft SQL Server, Python (programming language). Strong understanding of data architecture principles and best practices. Experience with cloud-based data solutions and services. Familiarity with data governance and compliance standards.
Additional Information: The candidate should have a minimum of 5 years of experience in Microsoft Azure Data Services. This position is based in Hyderabad. 15 years of full-time education is required. Qualification: 15 years of full-time education.
Posted 1 month ago
15.0 - 20.0 years
18 - 22 Lacs
Hyderabad
Work from Office
Project Role: Data Platform Architect. Project Role Description: Architects the data platform blueprint and implements the design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models. Must-have skills: Microsoft Azure Data Services. Good-to-have skills: Microsoft SQL Server, Python (programming language), Microsoft Azure Databricks. Minimum 7.5 year(s) of experience is required. Educational Qualification: 15 years of full-time education.
Summary: As a Data Platform Architect, you will be responsible for architecting the data platform blueprint and implementing the design, which includes various data platform components. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure seamless integration between systems and data models, while also addressing any challenges that arise during the implementation process. You will engage in discussions with stakeholders to gather requirements and provide insights that drive the overall architecture of the data platform, ensuring it meets the needs of the organization effectively.
Roles & Responsibilities: Expected to be an SME. Collaborate with and manage the team to perform. Responsible for team decisions. Engage with multiple teams and contribute to key decisions. Provide solutions to problems for their immediate team and across multiple teams. Facilitate knowledge-sharing sessions to enhance team capabilities. Develop and maintain documentation related to data architecture and design.
Professional & Technical Skills: Must-have skills: Proficiency in Microsoft Azure Data Services. Good-to-have skills: Experience with Microsoft Azure Databricks, Python (programming language), Microsoft SQL Server. Strong understanding of data modeling techniques and best practices. Experience with cloud-based data storage solutions and data processing frameworks. Familiarity with data governance and compliance standards.
Additional Information: The candidate should have a minimum of 7.5 years of experience in Microsoft Azure Data Services. This position is based at our Hyderabad office. 15 years of full-time education is required. Qualification: 15 years of full-time education.
Posted 1 month ago
1.0 - 3.0 years
2 - 6 Lacs
Indore, Pune, Bengaluru
Work from Office
Locations: Pune, Bangalore, Indore
Work mode: Work from Office
Skills: Informatica Data Quality (IDQ), Azure Databricks, Azure Data Lake, Azure Data Factory, API integration
Posted 1 month ago
6.0 - 11.0 years
19 - 27 Lacs
Haryana
Work from Office
About Company Job Description
Key responsibilities:
1. Understand, implement, and automate ETL pipelines to industry standards
2. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, designing infrastructure for greater scalability, etc.
3. Develop, integrate, test, and maintain existing and new applications
4. Design and create data pipelines (data lake / data warehouses) for real-world energy analytics solutions
5. Expert-level proficiency in Python (preferred) for automating everyday tasks
6. Strong understanding of and experience with distributed computing frameworks, particularly Spark, Spark SQL, Kafka, Spark Streaming, Hive, Azure Databricks, etc.
7. Limited experience with other leading cloud platforms, preferably Azure
8. Hands-on experience with Azure Data Factory, Logic Apps, Analysis Services, Azure Blob Storage, etc.
9. Ability to work in a team in an agile setting, familiarity with JIRA, and a clear understanding of how Git works
10. Must have 5-7 years of experience
Posted 1 month ago
2.0 - 5.0 years
13 - 17 Lacs
Pune
Work from Office
As an Associate Software Developer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise:
- Strong MS SQL and Azure Databricks experience
- Implement and manage data models in DBT, handling data transformation and alignment with business requirements
- Ingest raw, unstructured data as structured datasets into a cloud object store
- Utilize DBT to convert raw, unstructured data into structured datasets, enabling efficient analysis and reporting
- Write and optimize SQL queries within DBT to enhance data transformation processes and improve overall performance
Preferred technical and professional experience:
- Establish DBT best practices to improve performance, scalability, and reliability
- Design, develop, and maintain scalable data models and transformations using DBT in conjunction with Databricks
- Proven interpersonal skills while contributing to team effort by accomplishing related results as required
Posted 1 month ago
7.0 - 12.0 years
14 - 18 Lacs
Mumbai
Work from Office
Graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.
7+ years of total experience in Data Engineering projects, with 4+ years of relevant experience in Azure technology services and Python.
Azure: Azure Data Factory, ADLS (Azure Data Lake Store), Azure Databricks
Mandatory programming languages: PySpark, PL/SQL, Spark SQL
Database: SQL DB
Experience with Azure ADLS, Databricks, Stream Analytics, SQL DW, Cosmos DB, Analysis Services, Azure Functions, serverless architecture, and ARM templates.
Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
Experience with object-oriented/object-function scripting languages: Python, SQL, Scala, Spark-SQL, etc.
Data warehousing experience with strong domain knowledge.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise:
- Intuitive individual with an ability to manage change and proven time management
- Proven interpersonal skills while contributing to team effort by accomplishing related results as needed
- Up-to-date technical knowledge maintained by attending educational workshops and reviewing publications
Preferred technical and professional experience: Experience with Azure ADLS, Databricks, Stream Analytics, SQL DW, Cosmos DB, Analysis Services, Azure Functions, serverless architecture, and ARM templates; relational SQL and NoSQL databases, including Postgres and Cassandra; and object-oriented/object-function scripting languages (Python, SQL, Scala, Spark-SQL, etc.)
Posted 1 month ago
6.0 - 7.0 years
5 - 9 Lacs
Navi Mumbai
Work from Office
Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets. In this role, your responsibilities may include:
- Implementing and validating predictive models, and creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques
- Designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements
- Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours
- Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modeling results
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise:
- Total experience: 6-7 years (relevant: 4-5 years)
- Mandatory skills: Azure Databricks, Python/PySpark, SQL, GitHub, Azure DevOps, Azure Blob Storage
- Ability to use programming languages such as Java, Python, and Scala to build pipelines that extract and transform data from a repository to a data consumer
- Ability to use Extract, Transform, and Load (ETL) tools and/or data integration or federation tools to prepare and transform data as needed
- Ability to use leading-edge tools such as Linux, SQL, Python, Spark, Hadoop, and Java
Preferred technical and professional experience:
- You thrive on teamwork and have excellent verbal and written communication skills
- Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions
- Ability to communicate results to technical and non-technical audiences
Posted 1 month ago
6.0 - 7.0 years
14 - 18 Lacs
Kochi
Work from Office
As an Associate Software Developer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets. In this role, your responsibilities may include:
- Implementing and validating predictive models, and creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques
- Designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements
- Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviors
- Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modeling results
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise:
- Total experience: 6-7 years (relevant: 4-5 years)
- Mandatory skills: Azure Databricks, Python/PySpark, SQL, GitHub, Azure DevOps, Azure Blob Storage
- Ability to use programming languages such as Java, Python, and Scala to build pipelines that extract and transform data from a repository to a data consumer
- Ability to use Extract, Transform, and Load (ETL) tools and/or data integration or federation tools to prepare and transform data as needed
- Ability to use leading-edge tools such as Linux, SQL, Python, Spark, Hadoop, and Java
Preferred technical and professional experience:
- You thrive on teamwork and have excellent verbal and written communication skills
- Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions
- Ability to communicate results to technical and non-technical audiences
Posted 1 month ago
5.0 - 10.0 years
14 - 18 Lacs
Bengaluru
Work from Office
As a Data Engineer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets. In this role, your responsibilities may include:
- Implementing and validating predictive models, and creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques
- Designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements
- Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours
- Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modeling results
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise:
- We are seeking a skilled Azure Data Engineer with 5+ years of experience, including 3+ years of hands-on experience with ADF/Databricks
- The ideal candidate has Databricks, Data Lake, and Python programming skills, along with experience deploying to Databricks
- Familiarity with Azure Data Factory
Preferred technical and professional experience:
- Good communication skills
- 3+ years of experience with ADF/Databricks/Data Lake
- Ability to communicate results to technical and non-technical audiences
Posted 1 month ago