Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
16 - 26 years
40 - 60 Lacs
Ghaziabad
Work from Office
On a typical day, you might:
• Develop and maintain the data layer (DM or DWH) for the aggregation, validation, and presentation of data for the reporting layer.
• Develop, or coach team members to develop, reports, dashboards, and related database views/stored procedures.
• Evangelize self-service BI and visual discovery.
• Lead multiple projects and provide solutions.
• Design and promote best BI practices by championing data quality, integrity, and reliability.
• Coach BI team members as a technology SME, and provide on-the-job training for new or less experienced team members.
• Partner with the sales and presales teams to build BI-related solutions and proposals.
• Ideate an end-to-end architecture for a given problem statement.
• Interact and collaborate with multiple teams (Data Science, Consulting & Engineering) and various stakeholders to meet deadlines and bring analytical solutions to life.

What do we expect? Skills that we'd love:
• 15+ years of experience delivering comprehensive BI solutions (Power BI, Tableau & Qlik) and 5+ years of experience in Power BI.
• Real-time experience working with OLAP & OLTP database models (dimensional models).
• Extensive knowledge and experience in Power BI capacity management, license recommendations, performance optimization, and end-to-end BI (Enterprise BI & Self-Service BI).
• Real-time experience in migration projects.
• Comprehensive knowledge of SQL and modern DW environments (Synapse, Snowflake, Redshift, BigQuery, etc.).
• Comfortable with relational database concepts, flat-file processing concepts, and the software development lifecycle.
• A good understanding of any of the cloud services (Azure, AWS & GCP) is preferred.
• Enthusiasm to collaborate with various stakeholders across the organization and take complete ownership of deliverables.

You are important to us, let's stay connected!
Every individual comes with a different set of skills and qualities so even if you don't tick all the boxes for the role today, we urge you to apply as there might be a suitable/unique role for you tomorrow. We are an equal-opportunity employer. Our diverse and inclusive culture and values guide us to listen, trust, respect, and encourage people to grow the way they desire.
Posted 2 months ago
7 - 9 years
9 - 11 Lacs
Bengaluru
Work from Office
(US)-Data Engineer - AM - BLR - J48803

Responsibilities:
• Design and implement software systems with various Microsoft technologies and ensure compliance with all architecture requirements.
• Define, design, develop, and support the architecture of Tax products/apps used across KPMG member firms by collaborating efficiently with technical and non-technical business stakeholders.
• Create and improve software products using design patterns, principles, refactoring, and development best practices.
• Focus on building cloud-native applications by revisiting on-premises apps and coordinating on the cloud transformation journey.
• Provide excellent client-facing service, working with stakeholders across different geographical locations.
• Ability to work on multiple client activities in a very fast-paced environment.

Skills: Strong technical skills in the technologies below.
• Database: Azure SQL
• Cloud: Azure, Storage, IaaS, PaaS, Security, Networking, Azure Data Factory, ADLS, Synapse, Fabric
• Coding: Python
• Strong SQL programming & DevOps skills.
• Strong analytical skills and written and verbal communication skills.
• Working knowledge of software processes such as source control management, change management, defect tracking, and continuous integration.

Required candidate profile:
• Candidate experience should be: 7 to 9 years
• Candidate degree should be: BE-Comp/IT, BE-Other, BTech-Comp/IT, BTech-Other, MBA, MCA
Posted 2 months ago
16 - 26 years
40 - 60 Lacs
Agra
Work from Office
As a Synapse - Principal Architect, you will work to solve some of the most complex and captivating data management problems, enabling clients to become a data-driven organization, and seamlessly switch between the roles of Individual Contributor, team member, and Architect as demanded by each project to define, design, and deliver actionable insights.

On a typical day, you might:
• Collaborate with clients to understand the overall requirements and create a robust, extensible architecture to meet the client/business requirements.
• Identify the right technical stack and tools that best meet the client requirements, and work with the client to define a scalable architecture.
• Design the end-to-end solution along with the data strategy, including standards, principles, data sources, storage, pipelines, data flow, and data security policies.
• Collaborate with data engineers, data scientists, and other stakeholders to execute the data strategy.
• Implement Synapse best practices, data quality, and data governance.
• Define the right data distribution/consumption pattern for downstream systems and consumers.
• Own end-to-end delivery of the project, and design and develop reusable frameworks.
• Closely monitor project progress and provide regular updates to the leadership teams on milestones, impediments, etc.
• Support business proposals by providing solution approaches, detailed estimations, technical insights, and best practices.
• Guide and mentor team members, and create technical artifacts.
• Demonstrate thought leadership.

Job Requirement
A total of 16+ years of professional experience, including a minimum of 5 years of experience specifically in Architect roles focusing on analytics solutions. Additionally, a minimum of 3+ years of experience working with cloud platforms, demonstrating familiarity with public cloud architectures.
• Experience in implementing Modern Data Platform/Data Warehousing solutions covering all major data solutioning aspects: data integration, harmonization, standardization, modelling, governance, lineage, cataloguing, data sharing, and reporting.
• A solid understanding and working knowledge of the ADF, Logic Apps, Dedicated SQL pool, Serverless SQL pool, and Spark pool services of Azure Synapse Analytics, focused on optimization, workload management, availability, security, observability, and cost management strategies.
• Good hands-on experience writing procedures and scripts using T-SQL and Python.
• Good understanding of RDBMS systems and distributed computing on the cloud, with hands-on experience in data modelling.
• Experience in large-scale migration from on-premises to the Azure cloud.
• Good understanding of Microsoft Fabric.
• Excellent understanding of database and data warehouse concepts.
• Experience working with Azure DevOps.
• Excellent communication and interpersonal skills.
Posted 2 months ago
5 - 8 years
7 - 11 Lacs
Pune
Work from Office
• 5-8 years of experience in the IT industry.
• 3+ years of experience with the Azure data engineering stack (Event Hub, Data Factory, Cosmos DB, Synapse, SQL DB, Databricks, Data Explorer).
• 3+ years of experience with Python/PySpark, Spark, Scala, Hive, Impala.
• Excellent knowledge of SQL and coding skills.
• Good understanding of other Azure services such as Azure Data Lake Analytics, U-SQL, Azure SQL DW.
• Good understanding of Modern Data Warehouse/Lambda Architecture and data warehousing concepts.
• Experience with scripting languages such as shell.
• Excellent analytical and organizational skills.
• Effective working in a team as well as independently.
• Experience working in Agile delivery.
• Knowledge of software development best practices.
• Strong written and verbal communication skills.
• Azure Data Engineer certification is an added advantage.

Required Skills: Azure, Data Engineering, Event Hub, Data Factory, Cosmos DB, Synapse, SQL DB, Databricks, Data Explorer, Python, PySpark, Spark, Scala, Hive, Impala, SQL, Azure Data Lake Analytics, U-SQL, Azure SQL DW, Architecture, Software Development, Senior Data Engineer Azure
Posted 2 months ago
10 - 15 years
12 - 17 Lacs
Indore, Ahmedabad, Hyderabad
Work from Office
Job Title: Technical Architect / Solution Architect / Data Architect (Data Analytics)
Notice Period: Immediate to 15 days
Experience: 9+ years

Job Summary: We are looking for a highly technical and experienced Data Architect / Solution Architect / Technical Architect with expertise in data analytics. The candidate should have strong hands-on experience in solutioning, architecture, and cloud technologies to drive data-driven decisions.

Key Responsibilities:
• Design, develop, and implement end-to-end data architecture solutions.
• Provide technical leadership in Azure, Databricks, Snowflake, and Microsoft Fabric.
• Architect scalable, secure, and high-performing data solutions.
• Work on data strategy, governance, and optimization.
• Implement and optimize Power BI dashboards and SQL-based analytics.
• Collaborate with cross-functional teams to deliver robust data solutions.

Primary Skills Required:
• Data architecture & solutioning
• Azure cloud (data services, storage, Synapse, etc.)
• Databricks & Snowflake (data engineering & warehousing)
• Power BI (visualization & reporting)
• Microsoft Fabric (data & AI integration)
• SQL (advanced querying & optimization)

Looking for immediate to 15-day joiners!
Posted 2 months ago
4 - 9 years
20 - 25 Lacs
Hyderabad
Work from Office
We are seeking a highly skilled Senior Data Engineer with over 5 years of experience in designing, developing, and implementing advanced business intelligence solutions on Microsoft Azure. The engineer should have hands-on expertise in ADF, Synapse, Power BI, and the Azure DevOps platform.

Key Responsibilities: Collaborate with stakeholders to plan, design, develop, test, and maintain KPI data and dashboards on Azure and Power BI.

The candidate should have the following skills:
• Proficient in ETL processes, data modelling, and the DAX query language on Microsoft Azure.
• Proven track record of collaborating with stakeholders to gather requirements and deliver actionable insights.
• Able to independently handle DevOps in ADF, Synapse, and Power BI.
• Proficient in business requirements gathering and analysis.
• Strong data analysis skills, including data interpretation and visualization.
• Familiarity with process modelling and documentation.
• Adept at creating interactive and visually compelling reports and dashboards in Power BI to support data-driven decision-making.
• Excellent stakeholder management and communication skills.
• Knowledge of Agile methodologies and project management practices.
• Ability to develop and articulate clear and concise user stories and functional requirements.
• Proficiency in data visualization tools such as Power BI.
• Comfortable conducting user acceptance testing (UAT) and quality assurance.
Educational Qualification: Graduate/postgraduate degree in Computer Science or a Master's in Business Administration; certification in Power BI and Microsoft Azure services.
Experience: At least 6 years of experience delivering large enterprise analytics projects, with overall experience of 8-10 years in enterprise IT or business.
Industry to be hired from: Chemical/Pharma/FMCG manufacturing IT, Big 4, or IT/ITES analytics organizations.

Skills:
• Proven experience of 6+ years as a data engineer and data visualization developer.
• Expertise and experience in ADF, Synapse, and Power BI.
• An understanding of the IT environment, including enterprise applications such as HRMS, ERPs, CRM, manufacturing systems, API management, web scraping, etc.
• Industry experience in manufacturing.

Competencies (behavioural skills) required:
• Excellent communication skills.
• Analytical skills and strong organizational abilities.
• Attention to detail and problem-solving aptitude.
• Impact and influence across cross-functional teams.
• Leadership skills and experience; well organized, with a highly system- and process-oriented approach.
• Good business partner with customer orientation.
• Self-starter who can manage a range of competing priorities and projects.
• Ability to question the status quo and bring in best-in-class practices.
• Ability to inspire and rally a team around business objectives and excellence.
Posted 2 months ago
10 - 20 years
30 - 45 Lacs
Bengaluru
Work from Office
Design and optimize data pipelines, ETL processes, and big data solutions using Azure Data Factory, Synapse, and Databricks. Develop and deploy ML models with TensorFlow/PyTorch, automate workflows, manage CI/CD for data, and ensure security.

Required candidate profile: 5-15 years in data engineering or ML. Expertise in Azure Data Services, Big Data (Spark, Kafka), ML frameworks (TensorFlow, PyTorch), Python/SQL, CI/CD, and data security best practices.
Posted 2 months ago
5 - 10 years
15 - 30 Lacs
Hyderabad
Work from Office
Lead Data Engineer - Data Management

Job description

Company Overview
Accordion works at the intersection of sponsors and management teams throughout every stage of the investment lifecycle, providing hands-on, execution-focused support to elevate data and analytics capabilities. So, what does it mean to work at Accordion? It means joining 1,000+ analytics, data science, finance & technology experts in a high-growth, agile, and entrepreneurial environment while transforming how portfolio companies drive value. It also means making your mark on Accordion's future by embracing a culture rooted in collaboration and a firm-wide commitment to building something great, together. Headquartered in New York City with 10 offices worldwide, Accordion invites you to join our journey.

Data & Analytics (Accordion | Data & Analytics)
Accordion's Data & Analytics (D&A) team delivers cutting-edge, intelligent solutions to a global clientele, leveraging a blend of domain knowledge, sophisticated technology tools, and deep analytics capabilities to tackle complex business challenges. We partner with Private Equity clients and their portfolio companies across diverse sectors, including Retail, CPG, Healthcare, Media & Entertainment, Technology, and Logistics. The D&A team delivers data and analytical solutions designed to streamline reporting capabilities and enhance business insights across vast and complex data sets spanning Sales, Operations, Marketing, Pricing, Customer Strategies, and more.

Location: Hyderabad

Role Overview: Accordion is looking for a Lead Data Engineer, who will be responsible for the design, development, configuration/deployment, and maintenance of the above technology stack. He/she must have an in-depth understanding of the various tools & technologies in this domain to design and implement robust and scalable solutions that address clients' current and future requirements at optimal cost.
The Lead Data Engineer should be able to evaluate existing architectures and recommend ways to upgrade and improve their performance, for both on-premises and cloud-based solutions. A successful Lead Data Engineer should possess strong working business knowledge and familiarity with multiple tools and techniques, along with industry standards and best practices in the Business Intelligence and Data Warehousing environment, and should have strong organizational, critical thinking, and communication skills.

What you will do:
• Partner with clients to understand their business and create comprehensive business requirements.
• Develop an end-to-end Business Intelligence framework based on the requirements, including recommending the appropriate architecture (on-premises or cloud), analytics, and reporting.
• Work closely with the business and technology teams to guide solution development and implementation.
• Work closely with the business teams to arrive at methodologies to develop KPIs and metrics.
• Work with the Project Manager to develop and execute project plans within the assigned schedule and timeline.
• Develop standard reports and functional dashboards based on business requirements.
• Conduct training programs and knowledge transfer sessions for junior developers when needed.
• Recommend improvements to provide optimum reporting solutions.
• Bring curiosity to learn new tools and technologies to provide futuristic solutions for clients.

Ideally, you have:
• An undergraduate degree (B.E/B.Tech.); tier-1/tier-2 colleges are preferred.
• More than 5 years of experience in a related field.
• Proven expertise in SSIS, SSAS, and SSRS (MSBI suite).
• In-depth knowledge of databases (SQL Server, MySQL, Oracle, etc.) and of a data warehouse (any one of Azure Synapse, AWS Redshift, Google BigQuery, Snowflake, etc.).
• In-depth knowledge of business intelligence tools (any one of Power BI, Tableau, Qlik, DOMO, Looker, etc.).
• Good understanding of Azure or AWS: Azure (Data Factory & Pipelines, SQL Database & Managed Instances, DevOps, Logic Apps, Analysis Services) or AWS (Glue, Aurora Database, Dynamo Database, Redshift, QuickSight).
• Proven ability to take initiative and be innovative.
• An analytical mind with a problem-solving attitude.

Why explore a career at Accordion:
• High-growth environment: semi-annual performance management and promotion cycles, coupled with a strong meritocratic culture, enable a fast track to leadership responsibility.
• Cross-domain exposure: interesting and challenging work streams across industries and domains that always keep you excited, motivated, and on your toes.
• Entrepreneurial environment: intellectual freedom to make decisions and own them. We expect you to spread your wings and assume larger responsibilities.
• Fun culture and peer group: a non-bureaucratic and fun working environment, with a strong peer group that will challenge you and accelerate your learning curve.

Other benefits for full-time employees:
• Health and wellness programs that include employee health insurance covering immediate family members and parents, term life insurance for employees, free health camps for employees, discounted health services (including vision and dental) for employees and family members, free doctor consultations, counsellors, etc.
• Corporate meal card options for ease of use and tax benefits.
• Team lunches and company-sponsored team outings and celebrations.
• Cab reimbursement for women employees beyond a certain time of the day.
• Robust leave policy to support work-life balance, and a specially designed leave structure to support women employees for maternity and related requests.
• Reward and recognition platform to celebrate professional and personal milestones.
• A positive & transparent work environment, including various employee engagement and employee benefit initiatives to support personal and professional learning and development.
Posted 2 months ago
4 - 9 years
10 - 20 Lacs
Bengaluru
Hybrid
Required Skills & Qualifications:
• Good communication skills and learning aptitude.
• Good understanding of the Azure environment.
• Hands-on experience in Azure Data Lake, Azure Data Factory, Azure Synapse, and Azure Databricks.
• Must have hands-on Apache Spark and Scala/Python programming experience, working with Delta tables; experience in Databricks is an added advantage.
• Strong SQL skills: developing SQL stored procedures, functions, dynamic SQL queries, and joins.
• Hands-on experience ingesting data from various data sources, data types, and file types.
• Knowledge of Azure DevOps and an understanding of build and release pipelines.

Good to have:
• Snowflake is an added advantage.
Posted 2 months ago
5 - 10 years
22 - 37 Lacs
Pune, Hyderabad
Hybrid
Greetings from InfoVision!

We at InfoVision are looking to fill the position of Data Modeler; the main skills and details are below. Please apply if you feel you are a good fit.

Role: Data Modeler
Position: Full-time permanent role
Work mode: Hybrid (4 days WFO)
Job locations: Pune and Hyderabad

About InfoVision: InfoVision, founded in 1995, is a leading global IT services company offering enterprise digital transformation and modernization solutions across business verticals. We partner with our clients in driving innovation, rethinking workflows, and transforming experiences so businesses can stay ahead in a rapidly changing world. We help shape a bold new era of technology-led disruption, accelerating digital with quality, agility, and integrity. We have helped more than 75 global leaders across the Telecom, Retail, Banking, Healthcare, and Technology industries deliver excellence for their customers. InfoVision's global presence enables us to offer offshore, nearshore, and onshore solutions for our customers. With our world-class infrastructure for employees and people-centric policies, InfoVision is one of the highest-rated digital services companies in Glassdoor ratings. We encourage our employees to thrive, and we are committed to providing a work environment that fosters an entrepreneurial mindset, nurtures inclusivity, values integrity, and accelerates your career by creating opportunities for promising growth.

Job Summary: We are looking for a Data Modeler to design and optimize data models supporting automotive industry analytics and reporting. The ideal candidate will work with SAP ECC as a primary data source, leveraging Databricks and Azure Cloud to design scalable and efficient data architectures. This role involves developing logical and physical data models, ensuring data consistency, and collaborating with data engineers, business analysts, and domain experts to enable high-quality analytics solutions.
Key Responsibilities:
• Data Modeling & Architecture: Design and maintain conceptual, logical, and physical data models for structured and unstructured data.
• SAP ECC Data Integration: Define data structures for extracting, transforming, and integrating SAP ECC data into Azure Databricks.
• Automotive Domain Modeling: Develop and optimize industry-specific data models covering customer, vehicle, material, and location data.
• Databricks & Delta Lake Optimization: Design efficient data models for Delta Lake storage and Databricks processing.
• Performance Tuning: Optimize data structures, indexing, and partitioning strategies for performance and scalability.
• Metadata & Data Governance: Implement data standards, data lineage tracking, and governance frameworks to maintain data integrity and compliance.
• Collaboration: Work closely with business stakeholders, data engineers, and data analysts to align models with business needs.
• Documentation: Create and maintain data dictionaries, entity-relationship diagrams (ERDs), and transformation logic documentation.

Skills & Qualifications:
• Data Modeling Expertise: Strong experience in dimensional modeling, 3NF, and hybrid modeling approaches.
• Automotive Industry Knowledge: Understanding of customer, vehicle, material, and dealership data models.
• SAP ECC Data Structures: Hands-on experience with SAP ECC tables, business objects, and extraction processes.
• Azure & Databricks Proficiency: Experience working with Azure Data Lake, Databricks, and Delta Lake for large-scale data processing.
• SQL & Database Management: Strong skills in SQL, T-SQL, or PL/SQL, with a focus on query optimization and indexing.
• ETL & Data Integration: Experience collaborating with data engineering teams on data transformation and ingestion processes.
• Data Governance & Quality: Understanding of data governance principles, lineage tracking, and master data management (MDM).
• Strong Documentation Skills: Ability to create ER diagrams, data dictionaries, and transformation rules.

Preferred Qualifications:
• Experience with data modeling tools such as Erwin, Lucidchart, or DBT.
• Knowledge of Databricks Unity Catalog and Azure Synapse Analytics.
• Familiarity with Kafka/Event Hub for real-time data streaming.
• Exposure to Power BI/Tableau for data visualization and reporting.
Posted 2 months ago
6 - 9 years
8 - 11 Lacs
Hyderabad
Work from Office
Overview
As a member of the data engineering team, you will be the key technical expert developing and overseeing PepsiCo's data product build & operations, and will drive a strong vision for how data engineering can proactively create a positive impact on the business. You'll be an empowered member of a team of data engineers who build data pipelines into various source systems, rest data on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. You will help lead the development of very large and complex data applications in public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. You will work closely with process owners, product owners, and business users, in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems.

Responsibilities
• Be a founding member of the data engineering team. Help attract talent to the team by networking with your peers, representing PepsiCo HBS at conferences and other events, and discussing our values and best practices when interviewing candidates.
• Own data pipeline development end-to-end, spanning data modeling, testing, scalability, operability, and ongoing metrics.
• Ensure that we build high-quality software by reviewing peer code check-ins.
• Define best practices for product development, engineering, and coding as part of a world-class engineering team.
• Collaborate in architecture discussions and architectural decision-making as part of continually improving and expanding these platforms.
• Lead feature development in collaboration with other engineers; validate requirements/stories, assess current system capabilities, and decompose feature requirements into engineering tasks.
• Focus on delivering high-quality data pipelines and tools through careful analysis of system capabilities and feature requests, peer reviews, test automation, and collaboration with other engineers.
• Develop software in short iterations to quickly add business value.
• Introduce new tools/practices to improve data and code quality; this includes researching/sourcing third-party tools and libraries, as well as developing tools in-house to improve workflow and quality for all data engineers.
• Support data pipelines developed by your team through good exception handling, monitoring, and, when needed, by debugging production issues.

Qualifications
• 6-9 years of overall technology experience, including at least 5+ years of hands-on software development, data engineering, and systems architecture.
• 4+ years of experience in SQL optimization and performance tuning.
• Experience with data modeling, data warehousing, and building high-volume ETL/ELT pipelines.
• Experience building/operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets.
• Experience with data profiling and data quality tools like Apache Griffin, Deequ, or Great Expectations.
• Current skills in the following technologies:
  - Python
  - Orchestration platforms: Airflow, Luigi, Databricks, or similar
  - Relational databases: Postgres, MySQL, or equivalents
  - MPP data systems: Snowflake, Redshift, Synapse, or similar
  - Cloud platforms: AWS, Azure, or similar
• Version control (e.g., GitHub) and familiarity with deployment and CI/CD tools.
• Fluent with Agile processes and tools such as Jira or Pivotal Tracker.
• Experience running and scaling applications on cloud infrastructure and containerized services like Kubernetes is a plus.
• Understanding of metadata management, data lineage, and data glossaries is a plus.
Posted 2 months ago
16 - 26 years
40 - 60 Lacs
Chennai, Bengaluru, Hyderabad
Work from Office
As a Synapse - Principal Architect, you will work to solve some of the most complex and captivating data management problems, enabling clients to become a data-driven organization, and seamlessly switch between the roles of Individual Contributor, team member, and Architect as demanded by each project to define, design, and deliver actionable insights.

On a typical day, you might:
• Collaborate with clients to understand the overall requirements and create a robust, extensible architecture to meet the client/business requirements.
• Identify the right technical stack and tools that best meet the client requirements, and work with the client to define a scalable architecture.
• Design the end-to-end solution along with the data strategy, including standards, principles, data sources, storage, pipelines, data flow, and data security policies.
• Collaborate with data engineers, data scientists, and other stakeholders to execute the data strategy.
• Implement Synapse best practices, data quality, and data governance.
• Define the right data distribution/consumption pattern for downstream systems and consumers.
• Own end-to-end delivery of the project, and design and develop reusable frameworks.
• Closely monitor project progress and provide regular updates to the leadership teams on milestones, impediments, etc.
• Support business proposals by providing solution approaches, detailed estimations, technical insights, and best practices.
• Guide and mentor team members, and create technical artifacts.
• Demonstrate thought leadership.

Job Requirement
A total of 16+ years of professional experience, including a minimum of 5 years of experience specifically in Architect roles focusing on analytics solutions. Additionally, a minimum of 3+ years of experience working with cloud platforms, demonstrating familiarity with public cloud architectures.
• Experience in implementing Modern Data Platform/Data Warehousing solutions covering all major data solutioning aspects: data integration, harmonization, standardization, modelling, governance, lineage, cataloguing, data sharing, and reporting.
• A solid understanding and working knowledge of the ADF, Logic Apps, Dedicated SQL pool, Serverless SQL pool, and Spark pool services of Azure Synapse Analytics, focused on optimization, workload management, availability, security, observability, and cost management strategies.
• Good hands-on experience writing procedures and scripts using T-SQL and Python.
• Good understanding of RDBMS systems and distributed computing on the cloud, with hands-on experience in data modelling.
• Experience in large-scale migration from on-premises to the Azure cloud.
• Good understanding of Microsoft Fabric.
• Excellent understanding of database and data warehouse concepts.
• Experience working with Azure DevOps.
• Excellent communication and interpersonal skills.
Posted 2 months ago
5 - 10 years
10 - 15 Lacs
Hyderabad
Work from Office
Role: Microsoft Azure Data Lake ETL Developer
Experience: 5+ years
Location: Hyderabad

Role & Responsibilities
• Design and develop ETL processes to integrate multiple data sources into Azure Data Lake.
• Optimize ETL processes for performance and scalability.
• Build a data warehouse and data marts for reporting.
• Implement data quality checks and governance standards.
• Work with business users to translate business requirements into technical solutions.
• Collaborate with the infra team on automation and monitoring.

Preferred candidate profile
• Proficiency in Azure Data Lake, Synapse, Azure Data Fabric, SQL, and Python.
• Experience with data warehousing and cloud data solutions.
• Bachelor's degree in computer science or equivalent.
• 5+ years of experience.
Posted 2 months ago
3 - 8 years
11 - 12 Lacs
Gurgaon
Hybrid
Position: Azure Data Engineer
Company: US MNC
Location: Gurgaon
Experience: ~5 years
Shift: 12:00 PM to 9:00 PM
Cabs: Yes, 2-way

Skills Needed: Data Analysis, Azure, Data Lake, Data Factory, Databricks, Synapse, Azure SQL

Summary
The Azure Senior Data Engineer will be responsible for designing, building, and maintaining efficient ELT/ETL pipelines using Data Factory, along with data movement using the relevant Azure services. The person will work in close coordination with the Tech Lead and/or the Architect to understand the requirements, effectively implement the solutions, and ensure best practices are followed.

Key Tasks
• Integrating data from various sources into a unified Azure data warehouse and suitable data marts.
• Continuously monitoring and testing the availability, performance, and quality of data pipelines.
• Collaborating with peers by employing and following SDLC best practices while maintaining code repositories and activity using Agile, DevOps, and CI/CD methodologies through Dev, Test, and QA environments.
• Working closely with stakeholders to understand ongoing requirements, build effective products, and align data modelling principles.
• Adhering to agreed release and change management processes.
• Troubleshooting and investigating anomalies and bugs in code or data.
• Adhering to test and reconciliation standards to produce confidence in delivery.
• Producing appropriate and comprehensive documentation to support ease of access for technical and non-technical users.
• Engaging in a culture of continual process improvement and best practice.
• Efficiently responding to changing business priorities through effective time management.
• Conducting all activities and duties in line with company policy and compliantly.
• Carrying out any other ad-hoc duties as requested by management.

Required
• Close to 5 years of experience in data analytics.
• Working knowledge of the Azure data analytics ecosystem.
- Must have hands-on experience with Microsoft SQL Server, Azure Data Factory, Azure SQL, and Azure Synapse.
- Computer graduate.
- Currently in Delhi/NCR.
- Should be available to join in 0-30 days.
For more details or to apply, connect with Mariyam at 7302214372 / mariyam@manningconsulting.in
Posted 2 months ago
9 - 14 years
13 - 23 Lacs
Chennai, Pune, Bengaluru
Hybrid
Role & responsibilities
We are urgently hiring for an Azure Architect role.
Skills: Azure, Databricks, Synapse
Kindly share your resume at Ravina.m@vhrsol.com
Posted 3 months ago
5 - 10 years
16 - 30 Lacs
Bengaluru
Hybrid
CBS - National IT - Senior Associate - .NET Full Stack, Bangalore
Job Duties
- Be part of the technical team developing and maintaining web and desktop applications, support issues, and ensure an overlap of time zones for supporting Analytics and Web applications.
- Upgrade application development software frameworks, support business administration activities, and implement BDO USA security policy, processes, and technologies.
- Demonstrate proficiency in Agile software development and delivery with a focus on automation.
- Show expertise in web application development and service-oriented application design.
- Possess proven experience as a Full Stack Developer or similar role, with experience developing desktop, web, and mobile applications.
- Work on highly distributed and scalable system architecture.
- Design, code, debug, test, and develop features with quality, maintainability, performance, and security in mind.
- Work with a focus on customers' requirements, considering current and future needs when designing and implementing features.
- Manage the site design and development life cycle, including budgeting and milestone management.
- Carry out routine systems testing to detect and resolve bugs, coding errors, and technical issues.
- Have knowledge of multiple front-end languages and libraries (e.g., HTML/CSS, JavaScript, XML, jQuery), back-end languages (e.g., .NET Core, Entity Framework, ASP.NET C#, Python, R), and JavaScript frameworks (e.g., Angular, React, Node.js).
- Be familiar with databases (e.g., MSSQL, MySQL, MongoDB), Azure services, and UI/UX design.
- Maintain familiarity with Microsoft development best practices, Azure ML, Databricks, Synapse, and Fabric.
- Exhibit excellent communication and teamwork skills, great attention to detail, and proven organizational skills.
Qualifications, Knowledge, Skills and Abilities
Education: A bachelor's or master's degree in computer science, computer/electrical engineering, or equivalent.
Experience: Minimum 5-10 years of hands-on experience in software development.
Software:
- Microsoft .NET technology is primary. Experience with multiple front-end languages and libraries (e.g., HTML/CSS, JavaScript, XML, jQuery), back-end languages (e.g., .NET Core, Entity Framework, ASP.NET C#, Python, R), and JavaScript frameworks (e.g., Angular, React, Node.js).
- Azure/AWS; SaaS/PaaS/IaaS.
- SQL and NoSQL databases (MSSQL, MongoDB, PostgreSQL, etc.)
- Distributed caching: NCache, Redis, Memcached, etc.
- Distributed message queues: RabbitMQ/Kafka
- C# / Java / Ruby / Node.js / Python
Other Knowledge, Skills & Abilities: Familiarity with Microsoft development best practices, Azure ML, Databricks, Synapse, MS Blazor, and Fabric.
Posted 3 months ago
5 - 7 years
22 - 25 Lacs
Chennai, Bengaluru, Hyderabad
Hybrid
Greetings from InfoVision!
We at InfoVision are looking to fill the position of Data Engineer, with a main skill-set focus on data pipelines, Azure Databricks, PySpark/Python, Azure DevOps, DWH, and Azure Data Lake Storage Gen2.
Company profile: InfoVision, founded in 1995, is a leading global IT services company offering enterprise digital transformation and modernization solutions across business verticals. We partner with our clients in driving innovation, rethinking workflows, and transforming experiences so businesses can stay ahead in a rapidly changing world. We help shape a bold new era of technology-led disruption, accelerating digital with quality, agility, and integrity. We have helped more than 75 global leaders across the Telecom, Retail, Banking, Healthcare, and Technology industries deliver excellence for their customers. InfoVision's global presence enables us to offer offshore, nearshore, and onshore solutions for our customers. With our world-class infrastructure for employees and people-centric policies, InfoVision is one of the highest-rated digital services companies in Glassdoor ratings. We encourage our employees to thrive and are committed to providing a work environment that fosters an entrepreneurial mindset, nurtures inclusivity, values integrity, and accelerates your career by creating opportunities for promising growth.
Designation: Data Engineer
Experience Required: 5-7 years
Job Location: Hyderabad, Chennai, Coimbatore, Pune, Bangalore
The opportunity is full-time, with a hybrid work model.
As a Data Engineer on our team, you will be responsible for assessing complex new data sources and quickly turning these into business insights. You will also support the implementation and integration of these new data sources into our Azure data platform.
Responsibilities:
- Review and analyze structured, semi-structured, and unstructured data sources for quality, completeness, and business value.
- Design, architect, implement, and test rapid prototypes that demonstrate the value of the data, and present them to diverse audiences.
- Participate in early-stage design and feature definition activities.
- Implement robust data pipelines using the Microsoft and Databricks stack.
- Create reusable and scalable data pipelines.
- Be a team player, collaborating with team members across multiple engineering teams to support the integration of proven prototypes into core intelligence products.
- Communicate effectively to convey complex data insights to non-technical stakeholders.
- Work collaboratively in cross-functional teams and manage multiple projects simultaneously.
Skills:
- Advanced working knowledge of and experience with relational and non-relational databases.
- Experience building and optimizing Big Data pipelines, architectures, and datasets.
- Strong analytic skills related to working with structured and unstructured datasets.
- Hands-on experience in Azure Databricks utilizing Spark to develop ETL pipelines.
- Strong proficiency in data analysis, manipulation, and statistical modeling using tools like Spark, Python, Scala, SQL, or similar languages.
- Strong experience in Azure Data Lake Storage Gen2, Azure Data Factory, Databricks, Event Hub, and Azure Synapse.
- Familiarity with several of the following technologies: Event Hub, Docker, Azure Kubernetes Service, Azure DWH, Azure API, Azure Functions, Power BI, Azure Cognitive Services.
- Azure DevOps experience to deploy data pipelines through CI/CD.
Qualifications and Experience:
- Minimum 5-7 years of practical experience as a Data Engineer.
- Bachelor's degree in computer science, software engineering, information technology, or a related field.
- In-production experience with the Azure cloud stack.
You can share your updated resume to Bojja.Chandu@Infovision.com along with the details below.
Full Name:
Current Company:
Payroll Company:
Experience:
Rel. Exp.:
Current Location:
Preferred Location:
CTC:
ECTC:
Notice Period:
Holding offers?:
You can also connect with me on LinkedIn: https://www.linkedin.com/in/chandu-b-a48b2a142/
Regards,
Chandu.B
InfoVision, Senior Executive - Talent Acquisition
Bojja.Chandu@Infovision.com
Posted 3 months ago
6 - 9 years
15 - 25 Lacs
Noida
Hybrid
Immediate Joiners Preferred
Shift: 12:00 PM to 9:00 PM
Mode: Hybrid
Position Summary
Someone who is responsible for the Azure Big Data platform in collaboration with the vendor partner and the Big Data development team; someone with hands-on working experience in Azure data analytics using ADLS, Delta Lake, Cosmos DB, Python, and Spark/Scala programming to facilitate a strong, robust, and flexible platform.
Job Responsibilities
- Assist the Big Data development and Data Science teams by writing basic Python scripts, Spark and PySpark, etc.
- Assist the Big Data development and Data Science teams with data ingestion projects in the Azure environment.
- Configure and manage IaaS and PaaS.
- Experience with Azure cloud components: Azure Data Factory, Azure Data Lake, Azure Data Catalog, Azure Logic Apps & Function Apps, Azure Synapse Analytics, Azure Databricks, Azure Event Hub, Azure Functions, Azure SQL DB.
Knowledge, Skills and Abilities
Education: Bachelor's degree in Computer Science, Engineering, or a related discipline.
Experience:
- 5 to 8 years of solutions design and development experience.
- Experience building data ingestion/transformation pipelines on Azure Cloud.
- Experience with Big Data tools like Spark, Delta Lake, ADLS, Azure Synapse/Databricks.
- Proficient understanding of distributed computing principles.
Knowledge and skills (general and technical): Spark, Scala, Python; Azure Cosmos DB, MongoDB; Azure Data Factory; ADLS Gen2; Azure Data Lake; Azure Data Catalog; Azure Logic Apps & Function Apps; Azure Synapse Analytics; Azure Databricks; Azure Event Hub; Azure Functions; Azure SQL DB.
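The "basic Python scripts" this role contributes to an ingestion project can be as simple as batching source records before landing them in the lake. A minimal sketch in plain Python; the landing-zone file layout and batch size are hypothetical, not part of the posting:

```python
import json
import io

def batch_records(records, batch_size):
    """Yield successive fixed-size batches of records for ingestion."""
    batch = []
    for rec in records:
        batch.append(rec)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # emit the final, possibly short, batch
        yield batch

def write_batches(records, batch_size=2):
    """Serialize each batch as JSON lines, as a landing-zone file might hold."""
    files = []
    for i, batch in enumerate(batch_records(records, batch_size)):
        buf = io.StringIO()
        for rec in batch:
            buf.write(json.dumps(rec) + "\n")
        files.append((f"landing/part-{i:04d}.jsonl", buf.getvalue()))
    return files

files = write_batches([{"id": n} for n in range(5)])
print([name for name, _ in files])
# ['landing/part-0000.jsonl', 'landing/part-0001.jsonl', 'landing/part-0002.jsonl']
```

In a real Azure environment the same pattern would write to ADLS Gen2 paths via the storage SDK or an ADF copy activity rather than in-memory buffers.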
Posted 3 months ago
8 - 12 years
32 - 37 Lacs
Hyderabad
Work from Office
Job Overview: As Senior Analyst, Data Modeling, your focus will be to partner with D&A Data Foundation team members to create data models for global projects. This includes independently analyzing project data needs, identifying data storage and integration needs/issues, and driving opportunities for data model reuse while satisfying project requirements. The role will advocate Enterprise Architecture, Data Design, and D&A standards and best practices. You will perform all aspects of data modeling, working closely with the Data Governance, Data Engineering, and Data Architecture teams. As a member of the data modeling team, you will create data models for very large and complex data applications in public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. The primary responsibilities of this role are to work with data product owners, data management owners, and data engineering teams to create physical and logical data models with an extensible philosophy that supports future, unknown use cases with minimal rework. You'll be working in a hybrid environment with in-house, on-premise data sources as well as cloud and remote systems. You will establish data design patterns that drive flexible, scalable, and efficient data models to maximize value and reuse.
Responsibilities
- Complete conceptual, logical, and physical data models for any supported platform, including SQL Data Warehouse, EMR, Spark, Databricks, Snowflake, Azure Synapse, or other cloud data warehousing technologies.
- Govern data design/modeling documentation of metadata (business definitions of entities and attributes) and construction of database objects, for baseline and investment-funded projects, as assigned.
- Provide and/or support data analysis, requirements gathering, solution development, and design reviews for enhancements to, or new, applications/reporting.
- Support assigned project contractors (both on- and off-shore), orienting new contractors to standards, best practices, and tools.
- Contribute to project cost estimates, working with senior members of the team to evaluate the size and complexity of changes or new development.
- Ensure physical and logical data models are designed with an extensible philosophy to support future, unknown use cases with minimal rework.
- Develop a deep understanding of the business domain and enterprise technology inventory to craft a solution roadmap that achieves business objectives and maximizes reuse.
- Partner with IT, data engineering, and other teams to ensure the enterprise data model incorporates key dimensions needed for proper management: business and financial policies, security, local-market regulatory rules, and consumer privacy-by-design principles (PII management), all linked across fundamental identity foundations.
- Drive collaborative reviews of design, code, data, and security feature implementation performed by data engineers to drive data product development.
- Assist with data planning, sourcing, collection, profiling, and transformation.
- Create source-to-target mappings for ETL and BI developers.
- Show expertise for data at all levels: low-latency, relational, and unstructured data stores; analytical and data lakes; data streaming (consumption/production); data in transit.
- Develop reusable data models based on cloud-centric, code-first approaches to data management and cleansing.
- Partner with the Data Governance team to standardize the classification of unstructured data into standard structures for data discovery and action by business customers and stakeholders.
- Support data lineage and mapping of source system data to canonical data stores for research, analysis, and productization.
Qualifications:
- 8+ years of overall technology experience, including at least 4+ years of data modeling and systems architecture.
- 3+ years of experience with data lake infrastructure, data warehousing, and data analytics tools.
- 4+ years of experience developing enterprise data models.
- Experience building solutions in the retail or supply chain space.
- Expertise in data modeling tools (ER/Studio, Erwin, IDM/ARDM models).
- Experience with integration of multi-cloud services (Azure) with on-premises technologies.
- Experience with data profiling and data quality tools like Apache Griffin, Deequ, and Great Expectations.
- Experience building/operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets.
- Experience with at least one MPP database technology such as Redshift, Synapse, Teradata, or Snowflake.
- Experience with version control systems like GitHub and deployment & CI tools.
- Experience with Azure Data Factory, Databricks, and Azure Machine Learning is a plus.
- Experience with metadata management, data lineage, and data glossaries is a plus.
- Working knowledge of agile development, including DevOps and DataOps concepts.
- Familiarity with business intelligence tools (such as Power BI).
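One deliverable the posting names is a source-to-target mapping handed from the modeler to ETL and BI developers. A minimal sketch of that idea in plain Python; the column names and transforms are hypothetical illustrations, not PepsiCo's actual model:

```python
# Hypothetical source-to-target mapping: source column -> (target column, transform).
# A modeler typically documents this in a mapping sheet; here it runs directly.
MAPPING = {
    "cust_nm": ("customer_name", str.strip),
    "cust_id": ("customer_id", int),
    "ord_amt": ("order_amount", float),
}

def apply_mapping(source_row):
    """Rename and convert one source record into the target model's shape."""
    return {tgt: fn(source_row[src]) for src, (tgt, fn) in MAPPING.items()}

row = {"cust_nm": "  Acme Corp ", "cust_id": "42", "ord_amt": "19.99"}
print(apply_mapping(row))
# {'customer_name': 'Acme Corp', 'customer_id': 42, 'order_amount': 19.99}
```

Keeping the mapping as data rather than code is what lets the same document drive both the ETL implementation and its review.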
Posted 3 months ago
2 - 4 years
4 - 6 Lacs
Bengaluru
Work from Office
Job Description:
- Proven experience in assembling large, complex sets of data that meet non-functional and functional business requirements.
- Good exposure to working with Azure Databricks: PySpark, Spark SQL, Scala (lazy evaluation, delta tables, Parquet formats, working with large and complex datasets).
- Experience/knowledge of ETL, data pipelines, and data flow techniques using Azure Data Services.
- Leverage Databricks features such as Delta Lake, Unity Catalog, and advanced Spark configurations for efficient data management.
- Debug Spark jobs, analyze performance issues, and implement optimizations to ensure pipelines meet SLAs for latency and throughput.
- Implement data partitioning, caching, and clustering strategies for performance tuning.
- Good understanding of SQL, databases, NoSQL DBs, data warehouses, Hadoop, and various data storage options on the cloud.
- Develop and manage CI/CD pipelines for deploying Databricks notebooks and jobs using tools like Azure DevOps, Git, or Jenkins for version control and automation.
- Experience in development projects as a Data Architect.
- Must-have skills: Data Factory, Databricks, Databricks architecture, Synapse, PySpark, Python and Spark; Azure DBs: Azure SQL, Cosmos DB.
- Integrate data validation frameworks like Great Expectations and implement data quality checks to ensure reliability and consistency.
- Build and maintain a Lakehouse architecture in ADLS/Databricks.
- Manage access controls and ensure compliance with data governance policies using Unity Catalog and role-based access control (RBAC).
- Experience integrating different data sources.
- Good experience with Snowflake is an added advantage.
- Experience supporting BI and Data Science teams in consuming data in a secure and governed manner.
- Create and maintain comprehensive documentation for data processes, procedures, and architecture designs.
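The data quality checks this role implements (the posting names Great Expectations as one framework) boil down to declarative rules evaluated against each batch. A minimal stand-in sketch in plain Python, assuming hypothetical column names and thresholds rather than the framework's actual API:

```python
# Plain-Python stand-in for declarative data-quality checks.
# Great Expectations would express these as "expectations"; the rules
# and column names here (order_id, amount) are illustrative assumptions.

def check_not_null(rows, column):
    """Return rows where the given column is missing or None."""
    return [r for r in rows if r.get(column) is None]

def check_in_range(rows, column, lo, hi):
    """Return rows whose value falls outside [lo, hi]."""
    return [r for r in rows
            if r.get(column) is not None and not (lo <= r[column] <= hi)]

def run_checks(rows):
    """Evaluate every rule; report only the rules that found failures."""
    failures = {
        "order_id_not_null": check_not_null(rows, "order_id"),
        "amount_in_range": check_in_range(rows, "amount", 0, 1_000_000),
    }
    return {name: fails for name, fails in failures.items() if fails}

rows = [
    {"order_id": 1, "amount": 250.0},
    {"order_id": None, "amount": 90.0},   # fails the not-null rule
    {"order_id": 3, "amount": -5.0},      # fails the range rule
]
print(run_checks(rows))
```

In a pipeline, a non-empty result would typically fail the batch or route the offending rows to a quarantine table before the load proceeds.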
Posted 3 months ago
6 - 10 years
30 - 35 Lacs
Bengaluru
Work from Office
We are seeking an experienced PySpark Developer / Data Engineer to design, develop, and optimize big data processing pipelines using Apache Spark and Python (PySpark). The ideal candidate should have expertise in distributed computing, ETL workflows, data lake architectures, and cloud-based big data solutions.
Key Responsibilities:
- Develop and optimize ETL/ELT data pipelines using PySpark on distributed computing platforms (Hadoop, Databricks, EMR, HDInsight).
- Work with structured and unstructured data to perform data transformation, cleansing, and aggregation.
- Implement data lake and data warehouse solutions on AWS (S3, Glue, Redshift), Azure (ADLS, Synapse), or GCP (BigQuery, Dataflow).
- Optimize PySpark jobs for performance tuning, partitioning, and caching strategies.
- Design and implement real-time and batch data processing solutions.
- Integrate data pipelines with Kafka, Delta Lake, Iceberg, or Hudi for streaming and incremental updates.
- Ensure data security, governance, and compliance with industry best practices.
- Work with data scientists and analysts to prepare and process large-scale datasets for machine learning models.
- Collaborate with DevOps teams to deploy, monitor, and scale PySpark jobs using CI/CD pipelines, Kubernetes, and containerization.
- Perform unit testing and validation to ensure data integrity and reliability.
Required Skills & Qualifications:
- 6+ years of experience in big data processing, ETL, and data engineering.
- Strong hands-on experience with PySpark (Apache Spark with Python).
- Expertise in SQL, the DataFrame API, and RDD transformations.
- Experience with big data platforms (Hadoop, Hive, HDFS, Spark SQL).
- Knowledge of cloud data processing services (AWS Glue, EMR, Databricks, Azure Synapse, GCP Dataflow).
- Proficiency in writing optimized queries, partitioning, and indexing for performance tuning.
- Experience with workflow orchestration tools like Airflow, Oozie, or Prefect.
- Familiarity with containerization and deployment using Docker, Kubernetes, and CI/CD pipelines.
- Strong understanding of data governance, security, and compliance (GDPR, HIPAA, CCPA, etc.).
- Excellent problem-solving, debugging, and performance optimization skills.
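The transformation, cleansing, and aggregation work described in this posting follows a common shape: drop malformed rows, then aggregate by key. A minimal sketch in plain Python; in PySpark the same steps would be expressed with the DataFrame API (`filter`, `groupBy`, `agg`), and the record fields here are hypothetical:

```python
from collections import defaultdict

# Hypothetical raw records with a malformed value, as arrives in real feeds.
raw = [
    {"region": "south", "sales": "120.5"},
    {"region": "south", "sales": "bad"},   # malformed row, dropped in cleansing
    {"region": "north", "sales": "80.0"},
    {"region": "north", "sales": "20.0"},
]

def cleanse(records):
    """Keep only rows whose sales value parses as a float."""
    out = []
    for r in records:
        try:
            out.append({"region": r["region"], "sales": float(r["sales"])})
        except ValueError:
            continue
    return out

def aggregate(records):
    """Sum sales per region."""
    totals = defaultdict(float)
    for r in records:
        totals[r["region"]] += r["sales"]
    return dict(totals)

print(aggregate(cleanse(raw)))  # {'south': 120.5, 'north': 100.0}
```

At Spark scale, the cleansing step would be the place to apply the partitioning and caching strategies the posting asks for, since the cleaned dataset is reused by every downstream aggregation.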
Posted 3 months ago
7 - 12 years
15 - 17 Lacs
Chennai, Kochi, Mumbai (All Areas)
Hybrid
Skills: Azure Data Factory, Azure Synapse, Azure SQL, Data Warehouse and Web Services, PySpark
Overall 7-10 years of experience; minimum 5 years in Data Engineering
Location: Chennai, Mumbai, Kochi, Coimbatore
Mode: Hybrid
**Immediate joiners only**
Posted 3 months ago
10 - 15 years
27 - 32 Lacs
Pune
Work from Office
Position: Project Manager - ETL
Experience: 10-15 years
Budget: 28-32 LPA
Location: Pune
Notice Period: Immediate to 15 days
Contact: 8409250974
Required Candidate Profile
Mandatory skills: ETL (SSIS/Informatica/Talend/Ab Initio) AND Azure (ADF/Synapse/Databricks) AND migration project experience (1+ migrations completed)
Good to have: Fixed-price project experience
Posted 3 months ago
8 - 13 years
20 - 27 Lacs
Bengaluru
Work from Office
Role & responsibilities
Only immediate joiners should apply. Only candidates residing in Bangalore should apply.
American MNC, Product Company
Location: Bangalore
Work from office, 5 days a week
Essential Duties and Responsibilities:
1. Interpret Business Needs
a. Collaborate with team members, vendors, management, and various stakeholders to document and understand data requirements and objectives.
b. Identify cross-system requirement dependencies and mitigate potential gaps.
2. Data Transformation / Data Insights
a. Determine best practices to pull and house data for reporting needs. Extensive mapping may be required. Standardize data from various sources if required.
b. Develop and maintain data lake objects as required using Microsoft technologies, including but not limited to Azure Data Factory, Synapse Workspace, Power BI, Data Flows, etc.
   i. Create design documents, including the data model, source-to-target mapping, transformation rules, etc.
   ii. Create data models and data objects in the data lake.
   iii. Create data flows / pipelines to load data into the data lake from various sources.
   iv. Deploy across various environments using CI/CD.
c. Ensure proper documentation of test cases and test results. Perform functional and integration testing. Review and approve test cases.
d. Collaborate with the development team to ensure no gaps in product implementation.
3. Actively Support Data Analytics and D365 Environment
a. Create/maintain Azure data resources as needed.
b. Monitor and tune resource utilization and processing cost.
c. Be the first responder to urgent support and enhancement requirements in the data lake area.
d. Work with the team on any support tickets that require data engineering.
4. Continuous Learning and Growth
a. Continue to expand knowledge of the Microsoft technology stack.
b. Stay updated on emerging tools and best practices.
Required Knowledge, Skills, and Abilities:
- Experience with MS SQL Server
- Experience as a Data Engineer in the Microsoft stack
- Proficiency in ADF and Synapse development and administration
- Excellent skills in Excel
- Excellent knowledge of SQL
- Very strong data modeling skills
Education and/or Experience:
- Must have a bachelor's degree in Computer Science or equivalent
- 8 to 10 years of SQL experience
- 8 to 10 years of ADF / Synapse experience
- Microsoft certifications in development are a plus
- Familiarity with Power BI is a plus
- Experience with the Software Development Life Cycle
Perks and benefits: Best in the industry
Posted 3 months ago
8 - 13 years
7 - 17 Lacs
Bangalore Rural
Remote
Required experience:
- Overall Data and Analytics: 8 years
- Azure Data Factory: 5 years
- Microsoft Power BI: 6 years
- Databricks: 3 years
- Synapse: 4 years
- Azure DevOps: 2 years
- Financial Services Industry Data and Analytics: 2 years
- Experience architecting and designing self-service Business Intelligence solutions
- Experience using Tabular Data Model Analysis in Power BI
- Experience creating dataflows to ingest, mash up, model, and build Power BI reports
- Experience with Power BI Premium features for AI, Aggregations, Dataflows, DirectQuery, etc.
Job Description: Roles and Responsibilities
- Analyze current MS Excel-based reports and build comparable equivalents using Power BI
- Analyze current Power BI reports and identify improvement opportunities
- Coordinate technical design sessions
- Understand existing data structures in order to best determine how to consolidate and aggregate data in an efficient and scalable way
- Work closely with Integration Architects, Data Modelers, application teams, and vendors to develop optimal solutions
- Improve business process agility and outcomes, drive innovation, and reduce time to market for business-driven analytics
Posted 3 months ago