2.0 - 6.0 years
0 Lacs
Pune, Maharashtra
On-site
Adient is a leading global automotive seating supplier, supporting all major automakers in the differentiation of their vehicles through superior quality, technology, and performance. We are seeking a Sr. Data Analytics Lead to help build Adient's data and analytics foundation, directly benefiting our internal business units and our consumers. You are self-motivated and data-curious, especially about how data can be used to optimize business opportunities. In this role you will own projects end-to-end, from conception to operationalization, demonstrating your comprehensive understanding of the full data product development lifecycle. You will employ various analytical techniques to solve complex problems, drive scalable cloud data architectures, and deliver data products that enhance decision making across the organization. You will also own technical support for released applications used by internal Adient teams, including the daily triage of problem tickets and change requests, and will have 2-3 developer direct reports to accommodate this support as well as new development. The successful candidate can lead medium to large scale analytics projects with minimal direction, is highly proficient in SQL and cloud-based technologies, has good communication skills, takes the initiative to explore and tackle problems, and is an effective people leader.

The ideal candidate will work within Adient's Advanced Analytics team. You will be part of an empowered, highly capable team collaborating with Business Relationship Managers, Product Owners, Data Engineers, Production Support, and Visualization Developers across multiple business units to understand data analytics needs and translate those requirements into world-class solution architectures. You will lead and mentor a team of solution architects who research, analyze, implement, and support scalable data product solutions that power Adient's analytics across the enterprise and deliver on business priorities.

RESPONSIBILITIES
- Own technical support for released internal analytics applications, including the daily triage of problem tickets and change requests.
- Lead development and execution of reporting and analytics products to enable data-driven business decisions that drive performance and the accomplishment of annual goals.
- Lead, hire, develop, and evolve the Analytics team, providing technical direction with the support of other leads and architects; understand the road ahead and ensure the team has the skills and tools necessary to succeed.
- Drive the team to develop operationally efficient analytic solutions.
- Manage resources/budget and partner with functional and business teams.
- Advocate sound software development practices and help develop and evangelize great engineering and organizational practices.
- Lead the team that designs and builds highly scalable data pipelines, using new-generation tools and technologies like Azure, Snowflake, Spark, Databricks, SQL, and Python to ingest data from various systems.
- Work with product owners to ensure priorities are understood, and direct the team to support the vision of the larger Analytics organization.
- Translate complex business problem statements into analysis requirements and work with internal customers to define data product details based on expressed partner needs.
- Work closely with business and technical teams to deliver enterprise-grade datasets that are reliable, flexible, scalable, and provide low cost of ownership.
- Develop SQL queries and data visualizations to fulfill internal customer application reporting requirements, as well as ad-hoc analysis requests, using tools such as Power BI.
- Thoroughly document business requirements, data architecture solutions, and processes for business and technical audiences.
- Serve as a domain specialist on data and business processes within your area of focus, and find solutions to operational or data issues in the data pipelines.
- Grow the technical ability of the team.

QUALIFICATIONS
- Bachelor's degree or equivalent with 8+ years of experience in data engineering, computer science, or statistics, including at least 2+ years of experience in leadership/management.
- Experience developing Big Data cloud-based applications using SQL, Azure, Snowflake, and Power BI.
- Experience building complex ADF data pipelines and Data Flows to ingest data from on-prem sources, transform it, and sink it into Snowflake (see the sketch after this listing); a good understanding of ADF pipeline Activities.
- Familiarity with the various Azure connectors for establishing on-prem data-source connectivity, as well as Snowflake data-warehouse connectivity over a private network.
- Ability to lead and work with hybrid teams and to communicate effectively, both written and verbal, with technical and non-technical cross-functional teams.
- Ability to translate complex business requirements into scalable technical solutions that meet data warehousing design standards; a solid understanding of analytics needs and the proactiveness to build generic solutions that improve efficiency.
- Experience with data visualization and dashboarding techniques that make complex data more accessible, understandable, and usable to drive business decisions and outcomes; efficient in Power BI.
- Extensive experience in data architecture, defining and maintaining data assets, and developing data architecture strategies to support reporting and data visualization tools.
- Understanding of common analytical data models like Kimball; ensures physical data models align with best practice and requirements.
- Thrives in a dynamic environment, keeping composure and a positive attitude.
- A plus if your experience was in distribution or manufacturing organizations.

PREFERRED
- Experience with the Snowflake cloud data warehouse.
- Experience with Azure PaaS services.
- Experience with T-SQL, SQL Server, Azure SQL, Snowflake SQL, and Oracle SQL.
- Experience with Azure Storage account connectivity.
- Experience developing visualizations with Power BI and BusinessObjects.
- Experience with Databricks.
- Experience with ADLS Gen2.
- Experience with Azure VNet private endpoints on a private network.
- Proficiency with Spark and Python.
- Advanced proficiency in SQL: joining multiple data sets across different data grains, query optimization, and pivoting data.
- MS Azure certifications.
- Snowflake certifications.
- Experience with other leading commercial cloud platforms like AWS.
- Experience installing and configuring ODBC and JDBC drivers on Windows.
- Candidate resides in the Plymouth, MI area.

PRIMARY LOCATION
Pune Tech Center.
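To make the ingest-transform-sink pattern above concrete, here is a minimal, illustrative PySpark sketch of the kind of pipeline this role leads: reading an on-prem table over JDBC, applying a light transform, and writing to Snowflake via the Spark-Snowflake connector. All hosts, credentials, table names, and connector options are hypothetical placeholders, not Adient's actual setup.

```python
# Minimal sketch: ingest an on-prem SQL Server table and sink it into Snowflake.
# All connection details and names below are hypothetical.
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.getOrCreate()

# Extract: on-prem table over JDBC
orders = (
    spark.read.format("jdbc")
         .option("url", "jdbc:sqlserver://onprem-host:1433;databaseName=sales")
         .option("dbtable", "dbo.orders")
         .option("user", "svc_analytics")
         .option("password", "<secret>")  # in practice, pull from a key vault
         .load()
)

# Transform: basic cleansing and an audit column before landing in the warehouse
orders_clean = orders.dropDuplicates(["order_id"]).withColumn("load_ts", F.current_timestamp())

# Load: sink into Snowflake. On Databricks the short format name "snowflake" works;
# on open-source Spark the full name "net.snowflake.spark.snowflake" may be needed.
sf_options = {
    "sfURL": "<account>.snowflakecomputing.com",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "STAGING",
    "sfWarehouse": "LOAD_WH",
    "sfUser": "svc_analytics",
    "sfPassword": "<secret>",
}
(orders_clean.write.format("snowflake")
     .options(**sf_options)
     .option("dbtable", "ORDERS_STG")
     .mode("overwrite")
     .save())
```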
Posted 3 weeks ago
10.0 - 14.0 years
0 Lacs
Dehradun, Uttarakhand
On-site
You should be familiar with modern storage formats like Parquet and ORC. Your responsibilities will include:
- Designing and developing conceptual, logical, and physical data models to support enterprise data initiatives.
- Building, maintaining, and optimizing data models within Databricks Unity Catalog, and developing efficient data structures using Delta Lake to optimize performance, scalability, and reusability (see the sketch after this listing).
- Collaborating with data engineers, architects, analysts, and stakeholders to ensure data models align with ingestion pipelines and business goals.
- Translating business and reporting requirements into a robust data architecture using best practices in data warehousing and Lakehouse design.
- Maintaining comprehensive metadata artifacts such as data dictionaries, data lineage, and modeling documentation.
- Enforcing and supporting data governance, data quality, and security protocols across data ecosystems.
- Continuously evaluating and improving modeling processes.

The ideal candidate will have 10+ years of hands-on experience in data modeling in Big Data environments. Expertise in OLTP, OLAP, dimensional modeling, and enterprise data warehouse practices is required, along with proficiency in modeling methodologies including Kimball, Inmon, and Data Vault. Hands-on experience with modeling tools like ER/Studio, ERwin, PowerDesigner, SQLDBM, dbt, or Lucidchart is preferred. Proven experience in Databricks with Unity Catalog and Delta Lake is necessary, along with a strong command of SQL and Apache Spark for querying and transformation. Experience with the Azure Data Platform (Azure Data Factory, Azure Data Lake Storage, Azure Synapse Analytics, and Azure SQL Database) is beneficial, and exposure to Azure Purview or similar data cataloging tools is a plus. Strong communication and documentation skills are required, with the ability to work in cross-functional agile environments.

Qualifications for this role include a Bachelor's or Master's degree in Computer Science, Information Systems, Data Engineering, or a related field. Certifications such as Microsoft DP-203 (Data Engineering on Microsoft Azure) are desirable. Experience working in agile/scrum environments and exposure to enterprise data security and regulatory compliance frameworks (e.g., GDPR, HIPAA) are also advantageous.
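As a small illustration of the Delta Lake modeling work described above, here is a hedged sketch that registers a Kimball-style Type 2 dimension as a Delta table in Unity Catalog. The three-level catalog.schema.table name and all columns are hypothetical, and the identity-column syntax assumes a recent Databricks runtime.

```python
# Minimal sketch: a Kimball-style SCD Type 2 dimension as a Delta table
# in Unity Catalog. Catalog, schema, table, and column names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # on Databricks, a session already exists

spark.sql("""
    CREATE TABLE IF NOT EXISTS analytics.core.dim_customer (
        customer_sk   BIGINT GENERATED ALWAYS AS IDENTITY,  -- surrogate key
        customer_id   STRING NOT NULL,                      -- natural/business key
        customer_name STRING,
        region        STRING,
        effective_from DATE,                                -- SCD2 validity window
        effective_to   DATE,
        is_current     BOOLEAN
    )
    USING DELTA
    COMMENT 'Customer dimension (SCD Type 2)'
""")
```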
Posted 3 weeks ago
6.0 - 10.0 years
0 Lacs
Jaipur, Rajasthan
On-site
You are a Sr. Data Engineer with a strong background in building ELT pipelines and expertise in modern data engineering practices. You are experienced with Databricks and DBT, proficient in SQL and Python, and have a solid understanding of data warehousing methodologies such as Kimball or Data Vault. You are comfortable working with DevOps tools, particularly within AWS, Databricks, and GitLab. Your role involves collaborating with cross-functional teams to design, develop, and maintain scalable data infrastructure and pipelines using Databricks and DBT.

Your responsibilities include designing, building, and maintaining scalable ELT pipelines for processing and transforming large datasets efficiently in Databricks, and implementing Kimball data warehousing methodologies or other multi-dimensional modeling approaches using DBT (a sketch follows this listing). Leveraging AWS, Databricks, and GitLab, you will implement CI/CD practices for data engineering workflows. You will also optimize SQL queries and database performance, monitor and fine-tune data pipelines and queries, and ensure compliance with data security, privacy, and governance standards.

Key qualifications:
- 6+ years of data engineering experience
- Hands-on experience with Databricks and DBT
- Proficiency in SQL and Python
- Experience with Kimball data warehousing or Data Vault methodologies
- Familiarity with DevOps tools and practices
- Strong problem-solving skills and the ability to work in a fast-paced, agile environment

Preferred qualifications:
- Experience with Apache Spark for large-scale data processing
- Familiarity with CI/CD pipelines for data engineering workflows
- Understanding of orchestration tools like Apache Airflow
- Certifications in AWS, Databricks, or DBT

In return, you will receive benefits such as medical insurance for employees, spouse, and children; accidental life insurance; provident fund; paid vacation time; paid holidays; employee referral bonuses; reimbursement for high-speed internet at home; a one-month free stay for employees moving from other cities; tax-free benefits; and other bonuses as determined by management.
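Since the role centers on implementing dimensional models with DBT on Databricks, here is a minimal sketch of a dbt Python model that builds a fact table by resolving surrogate keys against a dimension (dbt also supports plain SQL models). The model names stg_orders and dim_customer and all columns are hypothetical.

```python
# models/marts/fct_orders.py — minimal dbt Python model sketch for Databricks.
# On Spark-based adapters, dbt.ref() returns a PySpark DataFrame.
# All model and column names below are hypothetical.

def model(dbt, session):
    dbt.config(materialized="table")

    orders = dbt.ref("stg_orders")        # staging model (fact grain)
    customers = dbt.ref("dim_customer")   # conformed dimension

    # Kimball pattern: swap the natural key for the dimension's surrogate key.
    fct = (
        orders.join(customers, orders.customer_id == customers.customer_id, "left")
              .select(
                  customers.customer_sk,
                  orders.order_id,
                  orders.order_date,
                  orders.amount,
              )
    )
    return fct
```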
Posted 3 weeks ago
10.0 - 17.0 years
12 - 22 Lacs
Gurugram
Work from Office
We know the importance that food plays in people's lives: the power it has to bring people, families, and communities together. Our purpose is to bring enjoyment to people's lives through great tasting food, in a way which reflects our values. McCain has recently committed to implementing regenerative agriculture practices across 100 percent of our potato acreage by 2030. Ask us more about our commitment to sustainability.

OVERVIEW
McCain is embarking on a digital transformation. As part of this transformation, we are making significant investments into our data platforms, common data models, data structures, and data policies to increase the quality of our data, the confidence of our business teams to use this data to make better decisions, and the value we drive through data. We have a new and ambitious global Digital & Data group, which serves as a resource to the business teams in our regions and global functions. We are currently recruiting an experienced Data Architect to build the enterprise data model for McCain.

JOB PURPOSE:
Reporting to the Data Architect Lead, the Global Data Architect will take a lead role in creating the enterprise data model for McCain Foods, bringing together data assets across agriculture, manufacturing, supply chain, and commercial. This data model will be the foundation for our analytics program, which seeks to bring together McCain's industry-leading operational data sets with 3rd-party data sets to drive world-class analytics. Working with a diverse team of data governance experts, data integration architects, data engineers, and our analytics team including data scientists, you will play a key role in creating the conceptual, logical, and physical data models that underpin the Global Digital & Data team's activities.

JOB RESPONSIBILITIES:
- Develop an understanding of McCain's key data assets and work with the data governance team to document key data sets in our enterprise data catalog.
- Work with business stakeholders to build a conceptual business model by understanding the business's end-to-end processes, challenges, and future business plans.
- Collaborate with application architects to bring the analytics point of view into the design of end-user applications.
- Develop the logical data model based on the business model and align it with business teams.
- Work with technical teams to build the physical data model and data lineage, and keep all relevant documentation current.
- Develop a process to manage all models and the appropriate controls.
- With a use-case driven approach, enhance and expand the enterprise data model based on legacy on-premises analytics products and new cloud data products, including advanced analytics models.
- Design key enterprise conformed dimensions and ensure understanding across data engineering teams, including third parties (see the sketch after this listing); keep data catalog and wiki tools current.
- Act as the primary point of contact for new Digital and IT programs, to ensure alignment to the enterprise data model.
- Be a clear player in shaping McCain's cloud migration strategy, enabling advanced analytics and world-leading Business Intelligence analytics.
- Work in close collaboration with data engineers, ensuring data modeling best practices are followed.

MEASURES OF SUCCESS:
- Demonstrated history of driving change in a large, global organization.
- A true passion for well-structured and well-governed data; you know and can explain to others the real business risk of too many mapping tables.
- You live for a well-designed and well-structured conformed dimension table.
- Focus on use-case driven prioritization; you are comfortable pushing business teams for requirements that connect to business value, and also able to challenge requirements that will not achieve the business's goals.
- Developing data models that are not just elegant, but truly optimized for analytics, for both advanced analytics use cases and dashboarding/BI tools.
- A coaching mindset wherever you go, including with the business, data engineers, and other architects.
- An infectious enthusiasm for learning: about our business, deepening your technical knowledge, and meeting our teams.
- A "get things done" attitude: roll up the sleeves when necessary; work with and through others as needed.

KEY QUALIFICATIONS & EXPERIENCES:
Data design and governance
- At least 5 years of experience with data modeling to support business processes.
- Ability to design complex data models that connect internal and external data.
- Nice to have: ability to profile data for data quality requirements.
- At least 8 years of experience with requirement analysis; experience working with business stakeholders on data design.
- Experience working with real-time data.
- Nice to have: experience with data catalog tools.
- Ability to draft accurate documentation that supports the project management effort and coding.

Technical skills
- At least 5 years of experience designing and working in Data Warehouse solutions building data models; preference for S/4HANA knowledge.
- At least 2 years of experience with visualization tools, preferably Power BI or similar.
- At least 2 years designing and working in Cloud Data Warehouse solutions; preference for Azure Databricks, Azure Synapse, or earlier Microsoft solutions.
- Experience with Visio, PowerDesigner, or similar data modeling tools.
- Nice to have: experience with data profiling tools such as Informatica, Collibra, or similar data quality tools.
- Nice to have: working experience with MDX.
- Experience working in an Azure cloud environment or a similar cloud environment.
- Must have: ability to develop SQL queries for assessing, manipulating, and accessing data stored in relational databases, with hands-on experience in PySpark and Python.
- Nice to have: ability to understand and work with unstructured data.
- Nice to have: at least one successful enterprise-wide cloud migration as the data architect or data modeler, mainly focused on building data models.
- Nice to have: experience with manufacturing/digital manufacturing.
- Nice to have: experience designing enterprise data models for analytics, specifically in a Power BI environment.
- Nice to have: experience with machine learning model design (Python preferred).

Behaviors and Attitudes
- Comfortable working with ambiguity and defining a way forward.
- Experience challenging current ways of working.
- A documented history of successfully driving projects to completion.
- Excellent interpersonal and communication skills.
- Attention to detail.
- Comfortable leading others through change.
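As a small, hedged illustration of the conformed-dimension work this role emphasizes, the PySpark sketch below generates a date dimension that multiple fact tables could share. The target table name, date range, and column set are hypothetical.

```python
# Minimal sketch: generating a conformed date dimension with PySpark.
# Table name, date range, and columns are hypothetical.
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.getOrCreate()

dim_date = (
    spark.sql(
        "SELECT explode(sequence(to_date('2015-01-01'), to_date('2030-12-31'), "
        "interval 1 day)) AS date_day"
    )
    .withColumn("date_sk", F.date_format("date_day", "yyyyMMdd").cast("int"))  # smart key
    .withColumn("year", F.year("date_day"))
    .withColumn("month", F.month("date_day"))
    .withColumn("day_of_week", F.dayofweek("date_day"))          # 1 = Sunday, 7 = Saturday
    .withColumn("is_weekend", F.dayofweek("date_day").isin(1, 7))
)

dim_date.write.mode("overwrite").saveAsTable("analytics.core.dim_date")  # hypothetical target
```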
Posted 1 month ago
6.0 - 9.0 years
12 - 15 Lacs
Vijayawada
Work from Office
Roles & Responsibilities
- Develop, maintain, and optimize ETL processes using Informatica PowerCenter and other ETL tools.
- Design and implement data warehousing solutions (Kimball, Inmon methodologies).
- Work with relational databases (MySQL, Oracle, SQL Server) and NoSQL databases (MongoDB, Cassandra).
- Use programming/scripting for data manipulation and transformation (see the sketch after this listing).
- Ensure data accuracy, consistency, and integrity across ETL workflows.
- Collaborate with cross-functional teams to gather and understand data requirements.
- Stay updated with the latest technologies and industry best practices.

Primary Skills
- ETL tools (Informatica PowerCenter)
- Strong knowledge of SQL and relational databases (MySQL, Oracle, SQL Server)
- Familiarity with data warehousing architectures (Kimball, Inmon)
- Proficiency in scripting/programming for data manipulation

Secondary Skills
- Knowledge of NoSQL databases (MongoDB, Cassandra)
- Understanding of big data technologies (Amazon Redshift, Google BigQuery, Snowflake)

Soft Skills
- Strong communication skills (proficient in English)
- Excellent problem-solving and critical thinking abilities
- Ability to work independently as well as in a team environment
- Attention to detail and focus on maintaining data integrity
- Willingness to learn and adapt to new technologies
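As an illustration of the scripted data-manipulation side of this role, here is a minimal Python ETL sketch: extract from a relational source, apply a transformation with basic integrity checks, and load into a staging table. Connection strings, credentials, and table names are hypothetical, and the exact drivers (e.g., pymysql, oracledb) would need to be installed.

```python
# Minimal sketch of a scripted ETL step: extract, transform, load.
# All connection strings and table names are hypothetical.
import pandas as pd
from sqlalchemy import create_engine

source = create_engine("mysql+pymysql://etl_user:<secret>@source-host/sales")
target = create_engine("oracle+oracledb://etl_user:<secret>@dw-host/?service_name=DW")

# Extract
orders = pd.read_sql("SELECT order_id, customer_id, amount, order_date FROM orders", source)

# Transform: enforce types and basic integrity before loading
orders["order_date"] = pd.to_datetime(orders["order_date"])
orders = orders.dropna(subset=["order_id", "customer_id"])
assert orders["order_id"].is_unique, "duplicate order_id in extract"

# Load into a staging table in the warehouse
orders.to_sql("stg_orders", target, if_exists="append", index=False)
```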
Posted Date not available
2.0 - 7.0 years
8 - 15 Lacs
Bengaluru
Hybrid
Hiring a Data Engineer for our leading Investment Banking client.

Location: Bangalore
Experience: 2-7 years
Notice Period: Immediate
Work Mode: Hybrid (10 days in a month)
Interview Mode: 2 levels (1st level: virtual discussion; 2nd level: F2F round, mandatory)

Mandatory skills:
- Mastery of data engineering fundamentals (Data Warehouse, Data Lake, Data Lakehouse concepts)
- Mastery of Golang, Bash, SQL, Python
- Mastery of HTTP and REST API best practices
- Mastery of batch and streaming data pipelines using Kafka (see the sketch after this listing)
- Mastery of code versioning with Git and best practices for continuous integration and delivery (CI/CD)
- Mastery of writing clean and tested code following software engineering best practices (readable, modular, reusable, extensible)
- Mastery of data modeling (3NF, Kimball, Vault)
- Knowledge of data orchestration using Airflow or Dagster
- Knowledge of self-hosting and managing tools like Metabase and DBT
- Knowledge of cloud principles and infrastructure management (IAM, logging, Terraform, Ansible)
- Knowledge of data abstraction layers (object storage, relational, NoSQL, document, Trino, and graph databases)
- Knowledge of containerization and workload orchestration (Docker, Kubernetes, Artifactory)
- Background working in an agile environment (knowledge of the methods and their limits)

Interested candidates, share your resume to suvetha.b@twsol.com
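As a brief illustration of the streaming-pipeline skill listed above, here is a hedged Python sketch of a Kafka consumer using the confluent-kafka client (the posting leads with Golang; Python is used here for consistency with the other sketches). Broker address, topic, consumer group, and message fields are hypothetical.

```python
# Minimal sketch: a streaming Kafka consumer with the confluent-kafka client.
# Broker, topic, group id, and message schema are hypothetical.
import json
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "trade-enrichment",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["trades.raw"])

try:
    while True:
        msg = consumer.poll(timeout=1.0)   # block up to 1s for the next record
        if msg is None:
            continue
        if msg.error():
            print(f"consumer error: {msg.error()}")
            continue
        trade = json.loads(msg.value())
        # ... transform/enrich here, then write to a sink (warehouse table or topic)
        print(trade["trade_id"], trade.get("notional"))
finally:
    consumer.close()
```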
Posted Date not available