8.0 - 13.0 years
20 - 35 Lacs
Hyderabad
Work from Office
We are looking for an experienced Azure Data Engineer with 2+ years of hands-on experience in Azure Data Lake and Azure Data Factory. The ideal candidate will have a strong background in connecting data sources to the Data Lake, writing PySpark SQL code, and building SSIS packages. Additionally, experience in data architecture, data modeling, and creating visualizations is essential.
Key Responsibilities: Work with Azure Data Lake and Azure Data Factory to design, implement, and manage data pipelines. Connect various data sources (applications, databases, etc.) to the Azure Data Lake for storage and processing. Write PySpark SQL code and SSIS packages for data retrieval and transformation from different data sources. Design and develop efficient data architecture and data modeling solutions to support business requirements. Create data visualizations to communicate insights to stakeholders and decision-makers. Optimize data workflows and pipelines for better performance and scalability. Collaborate with cross-functional teams to ensure seamless data integration and delivery. Ensure data integrity, security, and compliance with best practices.
Skills and Qualifications: 2+ years of experience working with Azure Data Lake, Azure Data Factory, and related Azure services. Proficiency in writing PySpark SQL code for data extraction and transformation. Experience in developing SSIS packages for data integration and automation. Strong understanding of data architecture and data modeling concepts. Experience in creating effective and insightful data visualizations using tools like Power BI or similar. Familiarity with cloud-based storage and computing concepts and best practices. Strong problem-solving skills with an ability to troubleshoot and optimize data workflows. Ability to collaborate effectively in a team environment and communicate with stakeholders.
Preferred Qualifications: Azure certifications (e.g., Azure Data Engineer or similar) would be a plus. Experience with other Azure tools like Azure Synapse, Databricks, etc.
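For context, a minimal PySpark sketch of the kind of lake-to-curated transformation described above; the storage account, container, paths, and column names are hypothetical placeholders, not part of the posting.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("adls-transform-example").getOrCreate()

# Read raw order data from an ADLS Gen2 path (the abfss:// URI is illustrative).
orders = spark.read.parquet(
    "abfss://raw@examplelake.dfs.core.windows.net/sales/orders/"
)

# Register a temp view so the transformation can be expressed in Spark SQL.
orders.createOrReplaceTempView("orders")

daily_revenue = spark.sql("""
    SELECT order_date,
           region,
           SUM(amount) AS total_revenue,
           COUNT(*)    AS order_count
    FROM orders
    GROUP BY order_date, region
""")

# Write the curated result back to the lake for downstream reporting.
daily_revenue.write.mode("overwrite").parquet(
    "abfss://curated@examplelake.dfs.core.windows.net/sales/daily_revenue/"
)
```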
Posted 1 month ago
4.0 - 8.0 years
10 - 20 Lacs
Gurugram
Remote
The Role: We are seeking a Cloud Infrastructure and DevOps Engineer with a strong background in designing, automating, and maintaining secure, scalable cloud environments on Microsoft Azure. The ideal candidate will have hands-on experience with Azure Data Lake, Azure Data Factory, and Databricks, and will play a key role in supporting data platform operations through infrastructure-as-code, CI/CD automation, and monitoring best practices.
Key Responsibilities: Design and implement cloud infrastructure to support data engineering workloads across Azure Data Lake, Azure Data Factory, and Databricks. Develop and maintain infrastructure-as-code using tools like Terraform. Automate build, release, and deployment pipelines using Azure DevOps or GitHub Actions. Set up and maintain monitoring, alerting, and logging for Azure data services to ensure performance and reliability. Manage role-based access control (RBAC), service principals, and security configurations for Azure resources. Ensure high availability, disaster recovery, and backup configurations are in place across critical workloads. Collaborate with data engineers and architects to optimise pipeline orchestration and resource provisioning. Implement governance, cost optimization, and compliance across cloud environments. Provide ongoing support and enhancements post-deployment.
Qualifications and Experience: Strong hands-on experience with Microsoft Azure, especially deploying and managing Azure Data Lake, Azure Data Factory (ADF), and Azure Databricks. Proficiency in infrastructure-as-code (IaC) using tools such as Terraform. Experience building CI/CD pipelines in Azure DevOps or equivalent tools (GitHub Actions, Jenkins). Knowledge of containerization (Docker) and orchestration (Kubernetes or Azure Kubernetes Service - AKS) is advantageous. Familiarity with Azure networking, identity and access management, and security best practices. Comfortable working with scripting languages like PowerShell, Bash, or Python. Proven ability to analyse complex problems and deliver practical solutions. Strong written and verbal communication skills to interact with both technical and non-technical stakeholders.
Certifications: Microsoft Certified: Azure Administrator Associate (AZ-104) - highly preferred. Microsoft Certified: DevOps Engineer Expert (AZ-400) - highly desirable.
Posted 1 month ago
0.0 - 5.0 years
0 Lacs
Pune
Remote
The candidate must be proficient in Python and its libraries and frameworks. Good with data modeling, PySpark, MySQL concepts, Power BI, AWS, and Azure concepts. Experience in optimizing large transactional DBs, data visualization tools, Databricks, and FastAPI.
Posted 1 month ago
7.0 - 12.0 years
16 - 31 Lacs
Pune, Delhi / NCR, Mumbai (All Areas)
Hybrid
Job Title: Lead Data Engineer
Job Summary: The Lead Data Engineer will provide technical expertise in the analysis, design, development, rollout and maintenance of data integration initiatives. This role will contribute to implementation methodologies and best practices, as well as work on project teams to analyse, design, develop and deploy business intelligence / data integration solutions to support a variety of customer needs. This position oversees a team of Data Integration Consultants at various levels, ensuring their success on projects, goals, trainings and initiatives through mentoring and coaching. Provides technical expertise in needs identification, data modelling, data movement and transformation mapping (source to target), automation and testing strategies, translating business needs into technical solutions with adherence to established data guidelines and approaches from a business unit or project perspective, whilst leveraging best-fit technologies (e.g., cloud, Hadoop, NoSQL, etc.) and approaches to address business and environmental challenges. Works with stakeholders to identify and define self-service analytic solutions, dashboards, actionable enterprise business intelligence reports and business intelligence best practices. Responsible for repeatable, lean and maintainable enterprise BI design across organizations. Effectively partners with the client team. Leadership not only in the conventional sense, but also within a team: we expect people to be leaders. The candidate should exhibit leadership qualities such as innovation, critical thinking, optimism/positivity, communication, time management, collaboration, problem-solving, acting independently, knowledge sharing and approachability.
Responsibilities: Design, develop, test, and deploy data integration processes (batch or real-time) using tools such as Microsoft SSIS, Azure Data Factory, Databricks, Matillion, Airflow, Sqoop, etc. Create functional & technical documentation, e.g. ETL architecture documentation, unit testing plans and results, data integration specifications, data testing plans, etc. Provide a consultative approach with business users, asking questions to understand the business need and deriving the data flow, conceptual, logical, and physical data models based on those needs. Perform data analysis to validate data models and to confirm ability to meet business needs. May serve as project or DI lead, overseeing multiple consultants from various competencies. Stays current with emerging and changing technologies to best recommend and implement beneficial technologies and approaches for Data Integration. Ensures proper execution/creation of methodology, training, templates, resource plans and engagement review processes. Coach team members to ensure understanding on projects and tasks, providing effective feedback (critical and positive) and promoting growth opportunities when appropriate. Coordinate and consult with the project manager, client business staff, client technical staff and project developers in data architecture best practices and anything else that is data related at the project or business unit levels. Architect, design, develop and set direction for enterprise self-service analytic solutions, business intelligence reports, visualisations and best practice standards. Toolsets include but are not limited to: SQL Server Analysis and Reporting Services, Microsoft Power BI, Tableau and Qlik.
Work with the report team to identify, design and implement a reporting user experience that is consistent and intuitive across environments and report methods, defines security, and meets usability and scalability best practices.
Required Qualifications: 10 years of industry implementation experience with data integration tools such as AWS services (Redshift, Athena, Lambda, Glue, S3), ETL, etc. 5-8 years of management experience required. 5-8 years of consulting experience preferred. Minimum of 5 years of data architecture, data modelling or similar experience. Bachelor's degree or equivalent experience; Master's degree preferred. Strong data warehousing, OLTP systems, data integration and SDLC experience. Strong experience in orchestration and working experience with cloud-native / 3rd-party ETL data load orchestration. Understanding and experience with major Data Architecture philosophies (Dimensional, ODS, Data Vault, etc.). Understanding of on-premises and cloud infrastructure architectures (e.g. Azure, AWS, GCP). Strong experience in Agile process (Scrum cadences, roles, deliverables) and working experience in Azure DevOps, JIRA or similar, with experience in CI/CD using one or more code management platforms. Strong Databricks experience required to create notebooks in PySpark. Experience using major data modelling tools (examples: ERwin, ER/Studio, PowerDesigner, etc.). Experience with major database platforms (e.g. SQL Server, Oracle, Azure Data Lake, Hadoop, Azure Synapse/SQL Data Warehouse, Snowflake, Redshift, etc.). Strong experience in orchestration and working experience in Data Factory, HDInsight, Data Pipeline, Cloud Composer or similar. Understanding of modern data warehouse capabilities and technologies such as real-time, cloud and Big Data. 3-5 years' development experience in decision support / business intelligence environments utilizing tools such as SQL Server Analysis and Reporting Services, Microsoft Power BI, Tableau, Looker, etc.
Preferred Skills & Experience: Knowledge and working experience with Data Integration processes, such as Data Warehousing, EAI, etc. Experience in providing estimates for Data Integration projects including testing, documentation, and implementation. Ability to analyse business requirements as they relate to the data movement and transformation processes, and to research, evaluate and recommend alternative solutions. Ability to provide technical direction to other team members, including contractors and employees. Ability to contribute to conceptual data modelling sessions to accurately define business processes, independently of data structures, and then combine the two together. Proven experience leading team members, directly or indirectly, in completing high-quality major deliverables with superior results. Demonstrated ability to serve as a trusted advisor that builds influence with client management beyond simply EDM. Can create documentation and presentations such that they "stand on their own". Can advise sales on evaluation of Data Integration efforts for new or existing client work. Can contribute to internal/external Data Integration proofs of concept.
Demonstrates ability to create new and innovative solutions to problems that have previously not been encountered. Ability to work independently on projects as well as collaborate effectively across teams. Must excel in a fast-paced, agile environment where critical thinking and strong problem-solving skills are required for success. Strong team building, interpersonal, analytical, problem identification and resolution skills. Experience working with multi-level business communities. Can effectively utilise SQL and/or available BI tools to validate/elaborate business rules. Demonstrates an understanding of EDM architectures and applies this knowledge in collaborating with the team to design effective solutions to business problems/issues. Effectively influences and, at times, oversees business and data analysis activities to ensure sufficient understanding and quality of data. Demonstrates a complete understanding of and utilises DSC methodology documents to efficiently complete assigned roles and associated tasks. Deals effectively with all team members and builds strong working relationships/rapport with them. Understands and leverages a multi-layer semantic model to ensure scalability, durability, and supportability of the analytic solution. Understands modern data warehouse concepts (real-time, cloud, Big Data) and how to enable such capabilities from a reporting and analytics standpoint.
Posted 1 month ago
4.0 - 9.0 years
10 - 20 Lacs
Kolkata, Pune, Bengaluru
Work from Office
About Client: Hiring for one of the most prestigious multinational corporations!
Job Title: Azure Data Engineer
Qualification: Any Graduate or above
Relevant Experience: 4 to 10 yrs
Must Have Skills: Azure, ADB, PySpark
Roles and Responsibilities: Strong experience in implementation and management of lakehouse using Databricks and the Azure tech stack (ADLS Gen2, ADF, Azure SQL). Strong hands-on expertise with SQL, Python, Apache Spark and Delta Lake. Proficiency in data integration techniques, ETL processes and data pipeline architectures. Demonstrable experience using Git and building CI/CD pipelines for code management. Develop and maintain technical documentation for the platform. Ensure the platform is developed with software engineering, data analytics and data security practices in mind. Develop and optimize data processing and data storage systems, ensuring high performance, reliability, and security. Experience working in Agile methodology and well-versed in using ADO Boards for sprint deliveries. Excellent communication skills, able to communicate technical and business concepts clearly, both verbally and in writing. Ability to work in a team environment and collaborate effectively with all levels by sharing ideas and knowledge.
Location: Kolkata, Pune, Mumbai, Bangalore, BBSR
Notice period: Immediate / 90 days
Shift Timing: General shift
Mode of Interview: Virtual
Mode of Work: WFO
Thanks & Regards, Bhavana B, Black and White Business Solutions Pvt. Ltd., Bangalore, Karnataka, India. Direct Number: 8067432454 | bhavana.b@blackwhite.in | www.blackwhite.in
Posted 1 month ago
4.0 - 9.0 years
20 - 30 Lacs
Pune, Bengaluru
Hybrid
Job role & responsibilities: Understanding operational needs by collaborating with specialized teams. Supporting key business operations: this involves supporting architecture design and improvements, understanding data integrity, building data models, and designing and implementing agile, scalable, and cost-efficient solutions. Lead a team of developers, implement sprint planning and execution to ensure timely deliveries.
Technical skills, qualification and experience required: Proficient in data modelling, with 5-10 years of experience in data modelling. Experience in data modelling tools (ERwin) and building ER diagrams. Hands-on experience with the ERwin / Visio tools. Hands-on expertise in entity-relationship, dimensional and NoSQL modelling. Familiarity with manipulating datasets using Python. Exposure to Azure cloud services (Azure Data Factory, Azure DevOps and Databricks). Exposure to UML tools like ERwin/Visio. Familiarity with tools such as Azure DevOps, Jira and GitHub. Analytical approaches using IE or other common notations. Strong hands-on experience in SQL scripting. Bachelor's/Master's degree in Computer Science or a related field. Experience leading agile scrum, sprint planning and review sessions. Good communication and interpersonal skills. Good communication skills to coordinate between business stakeholders and engineers. Strong results-orientation and time management. True team player who is comfortable working in a global team. Ability to establish relationships with stakeholders quickly in order to collaborate on use cases. Autonomy, curiosity and innovation capability. Comfortable working in a multidisciplinary team within a fast-paced environment.
Note: Immediate joiners will be preferred; outstation candidates will not be considered.
Posted 1 month ago
5.0 - 8.0 years
22 - 25 Lacs
Pune, Gurugram, Bengaluru
Hybrid
Key Responsibilities: Design and develop ETL/ELT pipelines using Azure Data Factory, Snowflake, and DBT. Build and maintain data integration workflows from various data sources to Snowflake. Write efficient and optimized SQL queries for data extraction and transformation. Work with stakeholders to understand business requirements and translate them into technical solutions. Monitor, troubleshoot, and optimize data pipelines for performance and reliability. Maintain and enforce data quality, governance, and documentation standards. Collaborate with data analysts, architects, and DevOps teams in a cloud-native environment.
Must-Have Skills: Strong experience with Azure cloud platform services. Proven expertise in Azure Data Factory (ADF) for orchestrating and automating data pipelines. Proficiency in SQL for data analysis and transformation. Hands-on experience with Snowflake and SnowSQL for data warehousing. Practical knowledge of DBT (Data Build Tool) for transforming data in the warehouse. Experience working in cloud-based data environments with large-scale datasets.
Good-to-Have Skills: Experience with Azure Data Lake, Azure Synapse, or Azure Functions. Familiarity with Python or PySpark for custom data transformations. Understanding of CI/CD pipelines and DevOps for data workflows. Exposure to data governance, metadata management, or data catalog tools. Knowledge of business intelligence tools (e.g., Power BI, Tableau) is a plus.
Qualifications: Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field. 5+ years of experience in data engineering roles using Azure and Snowflake. Strong problem-solving, communication, and collaboration skills.
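As a rough illustration of the ELT pattern above (ADF lands data, transformations run inside Snowflake), here is a hedged Python sketch using the Snowflake connector; the account, credentials, and table names are placeholders, and in practice this transformation would typically live in a DBT model rather than an ad hoc script.

```python
import snowflake.connector

# Connection parameters are illustrative; real credentials would come from a vault.
conn = snowflake.connector.connect(
    account="example_account",
    user="etl_user",
    password="***",
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="STAGING",
)

try:
    cur = conn.cursor()
    # ELT pattern: data already landed in STAGING is transformed in-warehouse.
    cur.execute("""
        CREATE OR REPLACE TABLE ANALYTICS.CORE.DAILY_SALES AS
        SELECT order_date, region, SUM(amount) AS total_amount
        FROM ANALYTICS.STAGING.ORDERS
        GROUP BY order_date, region
    """)
finally:
    conn.close()
```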
Posted 1 month ago
5.0 - 10.0 years
9 - 19 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Key Responsibilities: Work on client projects to deliver AWS, PySpark, and Databricks based data engineering & analytics solutions. Build and operate very large data warehouses or data lakes. ETL optimization, designing, coding, & tuning big data processes using Apache Spark. Build data pipelines & applications to stream and process datasets at low latencies. Show efficiency in handling data - tracking data lineage, ensuring data quality, and improving discoverability of data.
Technical Experience: Minimum of 5 years of experience in Databricks engineering solutions on AWS Cloud platforms using PySpark, Databricks SQL, and data pipelines using Delta Lake. Minimum of 5 years of experience in ETL, Big Data/Hadoop and data warehouse architecture & delivery. Email: maya@mounttalent.com
Posted 1 month ago
5.0 - 10.0 years
10 - 20 Lacs
Bengaluru
Remote
Role & responsibilities: Looking for a Data Engineer experienced with Databricks and with strong proficiency in PySpark. Must have hands-on experience with Oracle or other relational databases. Proficient in Python, with awareness of web frameworks like Flask or Streamlit. Ability to build scalable data pipelines and support data-driven applications.
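A minimal sketch of one way such a pipeline might pull an Oracle table into Databricks with PySpark over JDBC; the JDBC URL, credentials, and table names are hypothetical, and the Oracle JDBC driver is assumed to be available on the cluster.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("oracle-ingest-example").getOrCreate()

# Read a relational table over JDBC; all connection details are placeholders.
customers = (
    spark.read.format("jdbc")
    .option("url", "jdbc:oracle:thin:@//example-host:1521/ORCLPDB1")
    .option("dbtable", "SALES.CUSTOMERS")
    .option("user", "etl_user")
    .option("password", "***")
    .option("driver", "oracle.jdbc.OracleDriver")
    .load()
)

# Persist the ingested data as a Delta table for downstream pipelines.
customers.write.format("delta").mode("overwrite").saveAsTable("bronze.customers")
```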
Posted 1 month ago
6.0 - 10.0 years
20 - 25 Lacs
Pune, Mumbai (All Areas)
Work from Office
Position: Data Engineer
Experience: 6+ yrs
Job Location: Pune / Mumbai
Job Profile Summary:
Azure Databricks and hands-on PySpark, with tuning
Azure Data Factory pipelines for various data loads into ADB, with performance tuning
Azure Synapse
Azure Monitoring and Log Analytics (error handling in ADF pipelines and ADB)
Logic Apps and Functions
Performance tuning of Databricks, Data Factory and Synapse
Databricks data loading (layers) and export (choosing the right connection options and best approach for fast reporting and access)
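To make the "data loading (layers)" item concrete, here is a hedged PySpark/Delta sketch of a bronze-to-silver load on Azure Databricks; the paths, schemas, and table names are illustrative only.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("adb-layered-load").getOrCreate()

# Bronze: land raw files from ADLS as-is, adding ingestion metadata.
raw = (
    spark.read.json("abfss://landing@examplelake.dfs.core.windows.net/events/")
    .withColumn("ingested_at", F.current_timestamp())
)
raw.write.format("delta").mode("append").saveAsTable("bronze.events")

# Silver: cleanse and de-duplicate for reporting/export consumers.
silver = (
    spark.table("bronze.events")
    .filter(F.col("event_type").isNotNull())
    .dropDuplicates(["event_id"])
)
silver.write.format("delta").mode("overwrite").saveAsTable("silver.events")
```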
Posted 1 month ago
5.0 - 10.0 years
13 - 18 Lacs
Bengaluru
Work from Office
Date: 1 Jun 2025. Location: Bangalore, KA, IN. Company: Alstom. At Alstom, we understand transport networks and what moves people. From high-speed trains, metros, monorails, and trams, to turnkey systems, services, infrastructure, signalling and digital mobility, we offer our diverse customers the broadest portfolio in the industry. Every day, 80,000 colleagues lead the way to greener and smarter mobility worldwide, connecting cities as we reduce carbon and replace cars. Could you be the full-time JIVS Expert in our IS&T/Processes Solutions Architecture team we're looking for?
Your future role: Take on a new challenge and apply your extensive expertise in Azure Blob Storage and JIVS technology in a new cutting-edge field. You'll work alongside innovative, collaborative, and solution-focused teammates. You'll lead the optimization of data management and migration strategies, ensuring seamless transitions of data and maintaining database integrity. Day-to-day, you'll work closely with teams across the business (such as Business Stakeholders, IT Infrastructure, and Business Solutions), collaborate with partners relevant to Archiving Projects Delivery, and much more. You'll specifically take care of developing and implementing JIVS solutions, monitoring and maintaining the performance of JIVS applications, and utilizing Azure Blob Storage for efficient data management. We'll look to you for: Designing and managing JIVS solutions that align with organizational goals. Collaborating with cross-functional teams to analyze system requirements. Ensuring the reliability and scalability of JIVS applications. Administering and maintaining database systems for high availability and security. Executing data migration projects with precision. Managing the decommissioning of applications, including data extraction and transfer. Creating and maintaining comprehensive documentation.
All about you: We value passion and attitude over experience. That's why we don't expect you to have every single skill. Instead, we've listed some that we think will help you succeed and grow in this role: Bachelor's degree in Computer Science, Information Technology, or a related field. Experience or understanding of JIVS implementations and management. Knowledge of Azure Blob Storage services and best practices. Familiarity with scripting languages and tools in JIVS and Azure environments. A certification in database technologies or cloud database solutions is a plus. Excellent problem-solving skills and collaborative teamwork abilities. Strong communication skills, both verbal and written.
Things you'll enjoy: Join us on a life-long transformative journey - the rail industry is here to stay, so you can grow and develop new skills and experiences throughout your career. You'll also: Enjoy stability, challenges and a long-term career free from boring daily routines. Work with new security standards for rail signalling. Collaborate with transverse teams and helpful colleagues. Contribute to innovative projects that shape the future of mobility. Utilise our flexible and dynamic working environment. Steer your career in whatever direction you choose across functions and countries. Benefit from our investment in your development, through award-winning learning opportunities. Progress towards leadership and specialized technical roles. Benefit from a fair and dynamic reward package that recognises your performance and potential, plus comprehensive and competitive social coverage (life, medical, pension). You don't need to be a train enthusiast to thrive with us.
We guarantee that when you step onto one of our trains with your friends or family, you'll be proud. If you're up for the challenge, we'd love to hear from you! Important to note: As a global business, we're an equal-opportunity employer that celebrates diversity across the 63 countries we operate in. We're committed to creating an inclusive workplace for everyone.
Posted 1 month ago
5.0 - 7.0 years
11 - 14 Lacs
Bengaluru
Work from Office
3-5 years of analytics experience in the Retail/CPG industry covering at least 2 of the below-mentioned areas: Customer (insights/trends), Campaigns/CRM, Loyalty, Marketing. To define, develop & support analytical solutions, along with extracting insights from data for improving business decisions of various retail functions (e.g. Marketing, Merchandising, Retail, etc.). The recruit will be working directly with the onshore Analytics team and with key business stakeholders of retail functions in each business unit to understand their business problems around merchandising and help deliver data-driven decision making. Analyze data to evaluate existing decision frameworks and apply analytical techniques to discover meaningful patterns. Develop and support sophisticated & innovative analytical solutions that generate actionable insights by utilizing diverse information. Provide high-end consulting to functional teams to help them sharpen their business strategy. Keep abreast of industry trends and emerging methodologies to continuously improve skill set. Contribute to knowledge sharing and improve team productivity through training/documentation of best practices. Exposure to Azure Databricks (good to have). Experience in Power BI (good to have).
Desired Skills and Experience: Bachelor's or Master's degree in any quantitative discipline (B.Tech/M.Tech in Computer Science/IT preferred). Candidates from Economics, Statistics/Data Science or Operations Research backgrounds from reputable institutions would also be preferred. Ability to handle large datasets and expertise in analytical tools such as SAS, SQL, R, Python, Spark, etc. Expertise in analytical techniques such as Linear Regression, Logistic Regression, Cluster Analysis, Market Basket Analysis, Product Bundling, Cross/Upsell Analysis, etc. Strong MS Office skills and data visualization competence. Excellent verbal and written presentation skills. Should be able to articulate thoughts and ideas properly (a structured approach to problem solving). Attention to detail and ability to work in a high-pressure environment. Strong drive and passion to deliver business impact through retail analytics. Strong business acumen and ability to translate data insights into meaningful business recommendations. Open to travel (up to 20%) on a need basis.
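By way of illustration, a small scikit-learn sketch of one technique listed above (logistic regression on synthetic RFM-style features); the data and feature names are invented placeholders, not a client deliverable.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for recency, frequency, and monetary-value features.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = (X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)  # e.g. "will respond to campaign"

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a simple propensity model and check holdout accuracy.
model = LogisticRegression().fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))
```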
Posted 1 month ago
1.0 - 4.0 years
10 - 15 Lacs
Mumbai
Work from Office
Overview: We are building cutting-edge software and data workflows to identify and analyze the exposure and impact to climate change and financially relevant ESG risks. We leverage artificial intelligence (AI) and alternative data to deliver dynamic investment-relevant insights to power your investment decisions. Clients from across the capital ecosystem use our integrated data, analytical tools, indexes and insights for a clear view of the impact of ESG and Climate risks to their investment portfolios. We are seeking an outstanding Software Engineer to join our ESG & Climate Application Development team in the Pune/Mumbai or Budapest offices. As part of a global team you will collaborate in cross-functional teams to build and improve our industry-leading ESG and Climate solutions.
Responsibilities: Design, develop, test, and maintain software applications to meet project requirements. Collaborate with product managers and other stakeholders to gather and refine requirements. Participate in code reviews to maintain high coding standards and best practices. Troubleshoot and debug applications to resolve issues and improve performance. Document software designs, architectures, and processes for future reference. Support deployment and integration activities to ensure smooth implementation of software solutions.
Qualifications - Expected: Bachelor's degree in Computer Science, Mathematics, Engineering, a related field, or equivalent experience. Strong communication, interpersonal and problem-solving skills. Good hands-on working experience in Python or Java. Experience building RESTful web services using FastAPI, Django or Flask (a brief FastAPI sketch follows this posting). Good understanding and hands-on experience with SQL/NoSQL databases. Good understanding of the importance of testing in software development and the usage of unit testing frameworks like pytest/unittest. Hands-on cloud technologies, Google or Azure preferred, and experience in developing and managing microservices on cloud. Experience with source code control systems, especially Git.
Preferred: Hands-on experience with data engineering technologies like Azure Databricks, Spark, or similar frameworks. Some DevOps experience, knowledge of security best practices. Exposure to the use of AI/LLMs to solve business problems is an added advantage.
What we offer you: Transparent compensation schemes and comprehensive employee benefits, tailored to your location, ensuring your financial security, health, and overall wellbeing. Flexible working arrangements, advanced technology, and collaborative workspaces. A culture of high performance and innovation where we experiment with new ideas and take responsibility for achieving results. A global network of talented colleagues, who inspire, support, and share their expertise to innovate and deliver for our clients. Global Orientation program to kickstart your journey, followed by access to our Learning@MSCI platform, LinkedIn Learning Pro and tailored learning opportunities for ongoing skills development. Multi-directional career paths that offer professional growth and development through new challenges, internal mobility and expanded roles. We actively nurture an environment that builds a sense of inclusion, belonging and connection, including eight Employee Resource Groups: All Abilities, Asian Support Network, Black Leadership Network, Climate Action Network, Hola! MSCI, Pride & Allies, Women in Tech, and Women's Leadership Forum. At MSCI we are passionate about what we do, and we are inspired by our purpose – to power better investment decisions.
You’ll be part of an industry-leading network of creative, curious, and entrepreneurial pioneers. This is a space where you can challenge yourself, set new standards and perform beyond expectations for yourself, our clients, and our industry. MSCI is a leading provider of critical decision support tools and services for the global investment community. With over 50 years of expertise in research, data, and technology, we power better investment decisions by enabling clients to understand and analyze key drivers of risk and return and confidently build more effective portfolios. We create industry-leading research-enhanced solutions that clients use to gain insight into and improve transparency across the investment process. MSCI Inc. is an equal opportunity employer committed to diversifying its workforce. It is the policy of the firm to ensure equal employment opportunity without discrimination or harassment on the basis of race, color, religion, creed, age, sex, gender, gender identity, sexual orientation, national origin, citizenship, disability, marital and civil partnership/union status, pregnancy (including unlawful discrimination on the basis of a legally protected parental leave), veteran status, or any other characteristic protected by law. MSCI is also committed to working with and providing reasonable accommodations to individuals with disabilities. If you are an individual with a disability and would like to request a reasonable accommodation for any part of the application process, please email Disability.Assistance@msci.com and indicate the specifics of the assistance needed. Please note, this e-mail is intended only for individuals who are requesting a reasonable workplace accommodation; it is not intended for other inquiries. To all recruitment agencies MSCI does not accept unsolicited CVs/Resumes. Please do not forward CVs/Resumes to any MSCI employee, location, or website. MSCI is not responsible for any fees related to unsolicited CVs/Resumes. Note on recruitment scams We are aware of recruitment scams where fraudsters impersonating MSCI personnel may try and elicit personal information from job seekers. Read our full note on careers.msci.com
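A minimal FastAPI sketch of the kind of RESTful endpoint mentioned in the qualifications above; the route, parameters, and payload are hypothetical and the handler returns a static placeholder rather than real ESG data.

```python
from fastapi import FastAPI

app = FastAPI()

@app.get("/metrics/{region}")
def read_metrics(region: str, limit: int = 10):
    # In a real service this would query a database or analytics store;
    # a static payload keeps the sketch self-contained.
    return {"region": region, "limit": limit, "rows": []}
```

Run locally with, for example, `uvicorn main:app --reload` (assuming the file is named main.py).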
Posted 1 month ago
4.0 - 6.0 years
16 - 25 Lacs
Hyderabad
Remote
Experience Required: 4 to 6 years (mandatory)
Mode of work: Remote
Skills Required: Azure Data Factory, SQL, Databricks, Python/Scala
Notice Period: Immediate joiners / permanent (can join by July 4th, 2025)
4 to 6 years of experience with Big Data technologies. Experience with the Microsoft Azure cloud platform. Experience in SQL and experience with SQL-based database systems. Hands-on experience with Azure data services, such as Azure SQL Database, Azure Data Lake, and Azure Blob Storage. Experience with data integration and ETL (Extract, Transform, Load) processes. Experience with programming languages such as Python. Relevant certifications in Azure data services or data engineering are a plus. Interested candidates can share their resume, or refer a friend, to Pavithra.tr@enabledata.com for a quick response.
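As a rough sketch of a simple Python ETL step against Azure Blob Storage of the kind implied above; the storage account, container, and blob paths are hypothetical, and a production pipeline would typically use managed identity plus ADF or Databricks rather than a local script.

```python
import io

import pandas as pd
from azure.storage.blob import BlobServiceClient

# Connection details are placeholders; never hard-code keys in real code.
service = BlobServiceClient(
    account_url="https://examplestorage.blob.core.windows.net",
    credential="<account-key-or-sas>",
)
container = service.get_container_client("raw")

# Extract: download a CSV landed in Blob Storage.
blob_bytes = container.download_blob("sales/2024/orders.csv").readall()
orders = pd.read_csv(io.BytesIO(blob_bytes))

# Transform: basic cleansing and aggregation.
orders = orders.dropna(subset=["order_id"])
daily = orders.groupby("order_date", as_index=False)["amount"].sum()

# Load: write the curated result to another container.
out = service.get_container_client("curated")
out.upload_blob("sales/daily_amounts.csv", daily.to_csv(index=False), overwrite=True)
```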
Posted 1 month ago
3.0 - 5.0 years
8 - 15 Lacs
Hyderabad
Work from Office
Understanding the requirements and developing ADF pipelines. Good knowledge of Databricks. Strong understanding of the existing ADF pipelines and enhancements. Deployment and monitoring of ADF jobs. Good understanding of SQL concepts and strong SQL query writing. Understanding and writing stored procedures. Performance tuning.
Roles and Responsibilities: Understand business and data integration requirements. Design, develop, and implement scalable and reusable ADF pipelines for ETL/ELT processes. Leverage Databricks for advanced data transformations within ADF pipelines. Collaborate with data engineers to integrate ADF with Azure Databricks notebooks for big data processing. Analyze and understand existing ADF workflows. Implement improvements, optimize data flows, and incorporate new features based on evolving requirements. Manage deployment of ADF solutions across development, staging, and production environments. Set up monitoring, logging, and alerts to ensure smooth pipeline executions and troubleshoot failures. Write efficient and complex SQL queries to support data analysis and ETL tasks. Tune SQL queries for performance, especially in large-volume data scenarios. Design, develop, and maintain stored procedures for data transformation and business logic. Ensure procedures are optimized and modular for reusability and performance. Identify performance bottlenecks in queries and data processing routines. Apply indexing strategies, query refactoring, and execution plan analysis to enhance performance.
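For illustration, a hedged Python sketch of invoking a stored procedure of the kind an ADF pipeline or support engineer might trigger during an ETL run; the server, database, credentials, and procedure name are hypothetical.

```python
import pyodbc

# Connection string values are placeholders; a real job would pull secrets from Key Vault.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=example-sqlserver.database.windows.net;"
    "DATABASE=analytics;UID=etl_user;PWD=***"
)

try:
    cursor = conn.cursor()
    # Execute a parameterised stored procedure that loads a reporting table.
    cursor.execute("EXEC dbo.usp_load_daily_sales @load_date = ?", "2024-06-30")
    conn.commit()
finally:
    conn.close()
```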
Posted 1 month ago
5.0 - 10.0 years
16 - 27 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
6-7 years of Data and Analytics experience with a minimum of 3 years in Azure Cloud. Excellent communication and interpersonal skills. Extensive experience in the Azure stack: ADLS, Azure SQL DB, Azure Data Factory, Azure Databricks, Azure Synapse, Cosmos DB, Analysis Services, Event Hubs, etc. Experience in job scheduling using Oozie, Airflow or any other ETL scheduler. Design and build production data pipelines from ingestion to consumption within a big data architecture, using Java, Python or Scala. Good experience in designing and delivering data analytics solutions using Azure cloud-native services. Good experience in requirements analysis and solution architecture design, data modelling, ETL, data integration and data migration design. Documentation of solutions (e.g. data models, configurations, and setup). Well versed with Waterfall, Agile, Scrum and similar project delivery methodologies. Experienced in internal as well as external stakeholder management. Experience in MDM / DQM / Data Governance technologies like Collibra, Ataccama, Alation or Reltio will be an added advantage. Azure Data Engineer or Azure Solution Architect certification will be an added advantage.
Nice to have skills: Working experience with Snowflake, Databricks, and the open-source stack like Hadoop big data, PySpark, Scala, Python, Hive, etc.
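Since the role mentions job scheduling with Airflow, here is a minimal hedged DAG sketch; the DAG id, schedule, and task logic are placeholders rather than a real pipeline.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest_to_lake():
    # Placeholder for the actual ingestion logic (e.g., copy source data to ADLS).
    print("ingesting source data to the lake")


with DAG(
    dag_id="daily_ingestion_example",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",  # run once per day
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest_to_lake", python_callable=ingest_to_lake)
```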
Posted 1 month ago
5.0 - 10.0 years
7 - 12 Lacs
Mumbai
Work from Office
This role requires a deep understanding of data warehousing, business intelligence (BI), and data governance principles, with a strong focus on the Microsoft technology stack.
Data Architecture: Develop and maintain the overall data architecture, including data models, data flows, and data quality standards. Design and implement data warehouses, data marts, and data lakes on the Microsoft Azure platform.
Business Intelligence: Design and develop complex BI reports, dashboards, and scorecards using Microsoft Power BI.
Data Engineering: Work with data engineers to implement ETL/ELT pipelines using Azure Data Factory.
Data Governance: Establish and enforce data governance policies and standards.
Primary Skills and Experience: 15+ years of relevant experience in data warehousing, BI, and data governance. Proven track record of delivering successful data solutions on the Microsoft stack. Experience working with diverse teams and stakeholders.
Required Skills and Experience - Technical Skills: Strong proficiency in data warehousing concepts and methodologies. Expertise in Microsoft Power BI. Experience with Azure Data Factory, Azure Synapse Analytics, and Azure Databricks. Knowledge of SQL and scripting languages (Python, PowerShell). Strong understanding of data modeling and ETL/ELT processes.
Secondary Skills - Soft Skills: Excellent communication and interpersonal skills. Strong analytical and problem-solving abilities. Ability to work independently and as part of a team. Strong attention to detail and organizational skills.
Posted 1 month ago
2.0 - 5.0 years
7 - 11 Lacs
Mumbai, Nagpur, Thane
Work from Office
Description: IT Data Analyst. Syneos Health is a leading fully integrated biopharmaceutical solutions organization built to accelerate customer success. We translate unique clinical, medical affairs and commercial insights into outcomes to address modern market realities. Our Clinical Development model brings the customer and the patient to the center of everything that we do. We are continuously looking for ways to simplify and streamline our work to not only make Syneos Health easier to work with, but to make us easier to work for. Whether you join us in a Functional Service Provider partnership or a Full-Service environment, you'll collaborate with passionate problem solvers, innovating as a team to help our customers achieve their goals. We are agile and driven to accelerate the delivery of therapies, because we are passionate to change lives. Discover what our 29,000 employees, across 110 countries already know: WORK HERE MATTERS EVERYWHERE.
Why Syneos Health? We are passionate about developing our people, through career development and progression; supportive and engaged line management; technical and therapeutic area training; peer recognition and a total rewards program. We are committed to our Total Self culture, where you can authentically be yourself. Our Total Self culture is what unites us globally, and we are dedicated to taking care of our people. We are continuously building the company we all want to work for and our customers want to work with. Why? Because when we bring together diversity of thoughts, backgrounds, cultures, and perspectives we're able to create a place where everyone feels like they belong.
Job Responsibilities - Primary Roles and Responsibilities: Write and understand complex SQL queries, including joins and CTEs, to extract and manipulate data. Manage and monitor the data pipelines in Azure Data Factory and Databricks. Understand the integration with Veeva CRM and Salesforce. Ensure data quality checks and business-level checks in the data pipelines of ADF and Databricks (a short PySpark sketch of such checks follows this posting). Monitor and troubleshoot data and pipeline related issues, perform root cause analysis and implement corrective actions. Monitor system performance and reliability, troubleshooting issues and ensuring data delivery before the committed SLA. Strong understanding of Azure DevOps and the release process (Dev to Ops handover). Able to work in different EST shifts (11:30 am IST to 8:30 pm IST and 4:30 pm IST to 2:30 am IST). Good communication skills and the ability to work effectively in a team environment. Collaborate with business stakeholders and other technical members to provide operational-activity-related services. Strong documentation skills.
Preferred Qualifications: Full-time bachelor's degree in engineering/technology, computer science, IT or related fields. 2-3 years of experience in data engineering or ETL fields. Hands-on experience in root cause analysis and troubleshooting. Must be Azure DP-203 certified. Databricks-related certifications would be preferable. Strong collaboration and teamwork skills; excellent written and verbal communication skills.
Technologies - Primary: Azure Data Factory, Azure Databricks, Azure Data Lake Gen2, Azure Dedicated SQL Pools, SQL, Python/PySpark. Secondary: Unity Catalog, Lakehouse Architecture. Good to have: Salesforce, JIRA, Azure Logic Apps, Azure Key Vault, Azure Automation.
Get to know Syneos Health: Over the past 5 years, we have worked with 94% of all Novel FDA Approved Drugs, 95% of EMA Authorized Products and over 200 Studies across 73,000 sites and 675,000+ trial patients. No matter what your role is, you'll take the initiative and challenge the status quo with us in a highly competitive and ever-changing environment. Learn more about Syneos Health at http://syneoshealth.
Additional Information: Tasks, duties, and responsibilities as listed in this job description are not exhaustive. The Company, at its sole discretion and with no prior notice, may assign other tasks, duties, and job responsibilities. Equivalent experience, skills, and/or education will also be considered, so qualifications of incumbents may differ from those listed in the Job Description. The Company, at its sole discretion, will determine what constitutes as equivalent to the qualifications described above. Further, nothing contained herein should be construed to create an employment contract. Occasionally, required skills/experiences for jobs are expressed in brief terms. Any language contained herein is intended to fully comply with all obligations imposed by the legislation of each country in which it operates, including the implementation of the EU Equality Directive, in relation to the recruitment and employment of its employees. The Company is committed to compliance with the Americans with Disabilities Act, including the provision of reasonable accommodations, when appropriate, to assist employees or applicants to perform the essential functions of the job.
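A minimal PySpark sketch of the business-level data quality checks referenced in this posting; the table and column names are hypothetical, and failing the task is one simple way to surface problems to ADF monitoring and alerting.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks-example").getOrCreate()

df = spark.table("silver.accounts")

# Business-level checks: no null keys, no duplicate account numbers.
null_keys = df.filter(F.col("account_id").isNull()).count()
dupes = df.groupBy("account_number").count().filter("count > 1").count()

if null_keys > 0 or dupes > 0:
    # Raising here fails the Databricks activity, which ADF can alert on.
    raise ValueError(f"Data quality failed: {null_keys} null keys, {dupes} duplicates")
```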
Posted 1 month ago
4.0 - 8.0 years
30 - 37 Lacs
Bengaluru
Work from Office
ECMS ID / Title: 525632
Number of openings: 1
Duration of contract: 6
No. of years' experience: Relevant 4-8 years
Detailed job description / skill set: Attached
Mandatory skills: Azure Data Factory, PySpark notebooks, Spark SQL, and Python
Good to have skills: ETL processes, SQL, Azure Data Factory, Data Lake, Azure Synapse, Azure SQL, Databricks, etc.
Vendor billing range: 9000-10000/day
Remote option available (Yes/No): Hybrid mode
Work location: Preferably Pune or Hyderabad
Start date: Immediate
Client interview / F2F applicable: Yes
Background check process to be followed (before onboarding / after onboarding): BGV agency: Post
Posted 1 month ago
3.0 - 6.0 years
14 - 18 Lacs
Bengaluru
Work from Office
Establish and implement best practices for DBT workflows, ensuring efficiency, reliability, and maintainability. Collaborate with data analysts, engineers, and business teams to align data transformations with business needs. Monitor and troubleshoot data pipelines to ensure accuracy and performance. Work with Azure-based cloud technologies to support data storage, transformation, and processing.
Required education: Bachelor's Degree. Preferred education: Master's Degree.
Required technical and professional expertise: Strong MS SQL and Azure Databricks experience. Implement and manage data models in DBT, with data transformation and alignment to business requirements. Ingest raw, unstructured data into structured datasets in a cloud object store. Utilize DBT to convert raw, unstructured data into structured datasets, enabling efficient analysis and reporting. Write and optimize SQL queries within DBT to enhance data transformation processes and improve overall performance.
Preferred technical and professional experience: Establish best DBT processes to improve performance, scalability, and reliability. Design, develop, and maintain scalable data models and transformations using DBT in conjunction with Databricks. Proven interpersonal skills while contributing to team effort by accomplishing related results as required.
Posted 1 month ago
1.0 - 2.0 years
4 - 9 Lacs
Kolkata
Work from Office
Job Title: Data Engineer
Experience: 12 to 20 months
Work Mode: Work from Office
Location: Kolkata
Please note: This opportunity is open only for candidates currently residing in Kolkata.
About Tredence: Tredence focuses on last-mile delivery of powerful insights into profitable actions by uniting its strengths in business analytics, data science, and software engineering. The largest companies across industries are engaging with us and deploying their prediction and optimization solutions at scale. Headquartered in the San Francisco Bay Area, we serve clients in the US, Canada, Europe, and Southeast Asia. Tredence is an equal opportunity employer. We celebrate and support diversity and are committed to creating an inclusive environment for all employees. Visit our website for more details: https://www.tredence.com
Role Overview: We are seeking a driven and hands-on Data Engineer with 12 to 20 months of experience to support modern data pipeline development and transformation initiatives. The role requires solid technical skills in SQL, Python, and PySpark, with exposure to cloud platforms such as Azure or GCP. As a Data Engineer at Tredence, you will work on ingesting, processing, and modeling large-scale data, implementing scalable data pipelines, and applying foundational data warehousing principles. This role also includes direct collaboration with cross-functional teams and client stakeholders.
Key Responsibilities: Develop robust and scalable data pipelines using PySpark in cloud platforms like Azure Databricks or GCP Dataflow. Write optimized SQL queries for data transformation, analysis, and validation. Implement and support data warehouse models and principles, including fact and dimension modeling, star and snowflake schemas, Slowly Changing Dimensions (SCD), Change Data Capture (CDC), and Medallion Architecture. Monitor, troubleshoot, and improve pipeline performance and data quality. Work with teams across analytics, business, and IT functions to deliver data-driven solutions. Communicate technical updates and contribute to sprint-level delivery.
Mandatory Skills: Strong hands-on experience with SQL and Python. Working knowledge of PySpark for data transformation. Exposure to at least one cloud platform: Azure or GCP. Good understanding of data engineering and warehousing fundamentals. Excellent debugging and problem-solving skills. Strong written and verbal communication skills.
Preferred Skills: Experience working with Databricks Community Edition or the enterprise version. Familiarity with data orchestration tools like Airflow or Azure Data Factory. Exposure to CI/CD processes and version control (e.g., Git). Understanding of Agile/Scrum methodology and collaborative development. Basic knowledge of handling structured and semi-structured data (JSON, Parquet, etc.).
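To illustrate the CDC/SCD ideas listed above, a hedged PySpark/Delta Lake sketch of a MERGE-based upsert into a dimension table; the table and column names are placeholders, and a full SCD Type 2 implementation would additionally track effective-date and current-flag columns.

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("cdc-merge-example").getOrCreate()

# Incoming change records captured from the source system (hypothetical table).
updates = spark.table("staging.customer_changes")

# Existing dimension table maintained as a Delta table.
target = DeltaTable.forName(spark, "silver.dim_customer")

(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdate(set={"email": "s.email", "updated_at": "s.updated_at"})
    .whenNotMatchedInsertAll()
    .execute()
)
```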
Posted 1 month ago
2.0 - 3.0 years
12 - 16 Lacs
Pune
Work from Office
We are looking for a Power Platform Developer responsible for designing, developing, and deploying low-code/no-code solutions using Power Apps, Power Automate, and Power BI to enable data-driven decision-making across the organization You will collaborate with cross-functional teams to build intuitive dashboards, automate workflows, and manage data pipelines using tools like Alteryx and Azure Data Factory, Join us now! Your tasks Collaborating with cross-functional teams to define, design, build, and deploy new features Developing, deploying, and maintaining Power Platform solutions, including canvas apps, model-driven apps, and flows Designing, developing, and maintaining Power BI dashboards by applying data visualization and design best practices, analyzing and transforming data, and developing data pipelines Building and maintaining/supporting advanced ETL pipelines on Alteryx or any Azure Big Data cloud technologies to ingest and integrate data from multiple sources, making it ready for analytics and data science business needs, Troubleshooting and resolving issues related to Power Platform applications Ensuring the security, scalability, and reliability of Power Platform solutions Automating tasks via Power Automate and communicating with stakeholders to translate business needs into data-driven insights and product vision and strategy Requirements Experience in analyzing data and building interactive dashboards using Power BI and other BI tools Skilled in developing canvas and model-driven apps using Power Apps and automating workflows with Power Automate Hands-on experience with Alteryx, Azure Data Factory, and reshaping data for reporting and analysis Strong SQL skills with experience in Postgres, Oracle, MS SQL Server, and working across various database systems Familiar with Azure Databricks, Data Lake Storage, and other Azure-based data warehousing and processing tools Experience deploying solutions via pipelines and working in Agile/Scrum teams for iterative delivery Proven ability to work with structured/unstructured data, integrate across systems, and automate tasks using Azure tools Nice to have Microsoft Power Platform & Power BI certifications Job no 250602-NT4IQ Benefits For You Diverse portfolio of clients Wide portfolio of technologies Employment stability Remote work opportunities Contracts with the biggest brands Great Place to Work Europe Many experts you can learn from Open and accessible management team
Posted 1 month ago
7.0 - 12.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Conduct technical analyses of existing data pipelines, ETL processes, and on-premises/cloud systems; identify technical bottlenecks, evaluate migration complexities, and propose optimizations.
Desired Skills and Experience: Candidates should have a B.E./B.Tech/MCA/MBA in Finance, Information Systems, Computer Science or a related field. 7+ years of experience in data and cloud architecture with client stakeholders. Strong experience in Synapse Analytics, Databricks, ADF, Azure SQL (DW/DB), and SSIS. Strong experience in advanced PowerShell, batch scripting, and C# (.NET 3.0). Expertise in orchestration systems with ActiveBatch and Azure orchestration tools. Strong understanding of data warehousing, data lakes, and lakehouse concepts. Excellent communication skills, both written and verbal. Extremely strong organizational and analytical skills with strong attention to detail. Strong track record of excellent results delivered to internal and external clients. Able to work independently without the need for close supervision and collaboratively as part of cross-team efforts. Experience with delivering projects within an agile environment. Experience in project management and team management.
Key responsibilities include: Understand and review PowerShell (PS), SSIS, Batch Scripts, and C# (.NET 3.0) codebases for data processes. Assess the complexity of trigger migration across ActiveBatch (AB), Synapse, ADF, and Azure Databricks (ADB). Define usage of Azure SQL DW, SQL DB, and Data Lake (DL) for various workloads, proposing transitions where beneficial. Analyze data patterns for optimization, including direct raw-to-consumption loading and zone elimination (e.g., stg/app zones). Understand requirements for external tables (lakehouse). Lead project deliverables, ensuring actionable and strategic outputs. Evaluate and ensure quality of deliverables within project timelines. Develop a strong understanding of equity market domain knowledge. Collaborate with domain experts and business stakeholders to understand business rules/logic. Ensure effective, efficient, and continuous communication (written and verbal) with global stakeholders. Independently troubleshoot difficult and complex issues in dev, test, UAT and production environments. Responsible for end-to-end delivery of projects, coordination between the client and internal offshore teams, and managing client queries. Demonstrate high attention to detail, work in a dynamic environment whilst maintaining high quality standards, with a natural aptitude to develop good internal working relationships and a flexible work ethic. Responsible for quality checks and adhering to the agreed Service Level Agreement (SLA) / Turnaround Time (TAT).
Posted 1 month ago
8.0 - 10.0 years
11 - 15 Lacs
Gurugram
Work from Office
Design, construct, and maintain scalable data management systems using Azure Databricks, ensuring they meet end-user expectations. Supervise the upkeep of existing data infrastructure workflows to ensure continuous service delivery. Create data processing pipelines utilizing Databricks Notebooks, Spark SQL, Python and other Databricks tools. Oversee and lead the module through planning, estimation, implementation, monitoring and tracking.
Desired Skills and Experience: Over 8+ years of experience in data engineering, with expertise in Azure Databricks, MSSQL, LakeFlow, Python and supporting Azure technologies. Design, build, test, and maintain highly scalable data management systems using Azure Databricks. Create data processing pipelines utilizing Databricks Notebooks and Spark SQL. Integrate Azure Databricks with other Azure services like Azure Data Lake Storage and Azure SQL Data Warehouse. Design and implement robust ETL pipelines using ADF and Databricks, ensuring data quality and integrity. Collaborate with data architects to implement effective data models and schemas within the Databricks environment. Develop and optimize PySpark/Python code for data processing tasks. Assist stakeholders with data-related technical issues and support their data infrastructure needs. Develop and maintain documentation for data pipeline architecture, development processes, and data governance. Data warehousing: in-depth knowledge of data warehousing concepts, architecture, and implementation, including experience with various data warehouse platforms. Extremely strong organizational and analytical skills with strong attention to detail. Strong track record of excellent results delivered to internal and external clients. Excellent problem-solving skills, with the ability to work independently or as part of a team. Strong communication and interpersonal skills, with the ability to effectively engage with both technical and non-technical stakeholders. Able to work independently without the need for close supervision and collaboratively as part of cross-team efforts.
Key Responsibilities: Interpret business requirements, either gathered or acquired. Work with internal resources as well as application vendors. Design, develop, and maintain the Databricks solution and relevant data quality rules. Troubleshoot and resolve data-related issues. Configure and create data models and data quality rules to meet the needs of the customers. Hands-on in handling multiple database platforms, like Microsoft SQL Server, Oracle, etc. Review and analyze data from multiple internal and external sources. Analyse existing PySpark/Python code and identify areas for optimization. Write new optimized SQL queries or Python scripts to improve performance and reduce run time (a brief example follows this posting). Identify opportunities for efficiencies and innovative approaches to completing the scope of work. Write clean, efficient, and well-documented code that adheres to best practices and Council IT coding standards. Maintain and operate existing custom code processes. Participate in team problem-solving efforts and offer ideas to solve client issues. Query writing skills with the ability to understand and implement changes to SQL functions and stored procedures. Effectively communicate with business and technology partners, peers and stakeholders. Ability to deliver results under demanding timelines to real-world business problems. Ability to work independently and multi-task effectively. Configure system settings and options and execute unit/integration testing.
Develop end-user release notes and training materials and deliver training to a broad user base. Identify and communicate areas for improvement. Demonstrate high attention to detail and work in a dynamic environment whilst maintaining high quality standards, with a natural aptitude to develop good internal working relationships and a flexible work ethic. Responsible for quality checks and adhering to the agreed Service Level Agreement (SLA) / Turnaround Time (TAT).
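A short hedged sketch of one common PySpark optimisation of the kind described in this posting (broadcasting a small lookup table to avoid a shuffle-heavy join); the table names are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("join-optimisation-example").getOrCreate()

facts = spark.table("silver.sales_facts")    # large fact table
regions = spark.table("silver.dim_region")   # small lookup table

# Broadcasting the lookup table keeps the join map-side on each executor,
# avoiding a full shuffle of the large fact table.
enriched = facts.join(broadcast(regions), on="region_id", how="left")
enriched.write.format("delta").mode("overwrite").saveAsTable("gold.sales_by_region")
```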
Posted 1 month ago
9.0 - 14.0 years
35 - 55 Lacs
Noida
Hybrid
Looking for a better opportunity? Join us and make things happen with DMI, an Encora company, now!
Encora is seeking a full-time Lead Data Engineer with logistics domain expertise to support our large-scale manufacturing client in digital transformation. The Lead Data Engineer is responsible for the day-to-day leadership and guidance of the local, India-based data team. This role will be the primary interface with the management team of the client and will work cross-functionally with various IT functions to streamline project delivery.
Minimum Requirements: 8+ years of experience overall in IT. 5+ years of current experience on Azure Cloud as a Data Engineer. 3+ years of current hands-on experience on Databricks / Azure Databricks. Proficient in Python/PySpark. Proficient in SQL/T-SQL. Proficient in data warehousing concepts (ETL/ELT, Data Vault modelling, dimensional modelling, SCD, CDC).
Primary Skills: Azure Cloud, Databricks, Azure Data Factory, Azure Synapse Analytics, SQL/T-SQL, PySpark, Python + logistics domain expertise.
Work Location: Noida, India (candidates who are open to relocation on an immediate basis can also apply).
Interested candidates can apply at nidhi.dubey@encora.com along with their updated resume, mentioning: 1. Total experience 2. Relevant experience in Azure Cloud 3. Relevant experience in Azure Databricks 4. Relevant experience in Azure Synapse 5. Relevant experience in SQL/T-SQL 6. Relevant experience in PySpark 7. Relevant experience in Python 8. Relevant experience in the logistics domain 9. Relevant experience in data warehousing 10. Current CTC 11. Expected CTC 12. Official notice period (if serving, please specify LWD)
Posted 1 month ago