
40 ETL/ELT Processes Jobs


15.0 - 22.0 years

0 Lacs

Karnataka

On-site

We are looking for an experienced and skilled Technical Leader to join our AWS Data Engineering practice as an AWS Data Architect. In this role, you will be responsible for driving the Data Engineering strategy, architecting scalable data solutions, and leading the implementation of data projects on AWS. Your deep understanding of AWS services and data engineering best practices will be crucial for success.

Your main responsibilities will include establishing and enhancing the company's Data Engineering Services Practice, working closely with senior stakeholders to understand business needs, and delivering technical solutions. This role is ideal for a technically proficient individual who thrives in a dynamic and fast-paced environment. As a Technical Leader, you will act as a visionary leader, conduct proof-of-concept projects, make informed architectural decisions, guide and mentor engineers, collaborate with sales and marketing teams, and design and implement various data solutions. You will also oversee project implementation, engage in strategic discussions with customers, ensure data quality and security, and drive technical innovation by staying updated with the latest trends and advancements in data engineering and AWS technologies.

To be successful in this role, you should have a Bachelor's or Master's degree in Engineering or Technology, 15+ years of technical hands-on experience in the Data space, and at least 4 end-to-end implementations of large-scale data projects. Proficiency in AWS data services, AWS architecture, SQL, Python, data warehousing concepts, ETL/ELT processes, and data modeling is essential. Experience with data integration, data serialization formats, data pipeline optimization, and data governance frameworks is also required.

Joining our team will provide you with the opportunity to work in a high-growth startup in the AI, Decision Science, and Big Data domain, contribute to the digital transformation of our customers, and collaborate with a diverse group of techies. Flexible working options are available to foster productivity and work-life balance.

Posted 15 hours ago

Apply

6.0 - 10.0 years

0 Lacs

Bhubaneswar

On-site

As a Data Engineer working on a contractual basis in Bhubaneswar, you should possess a solid background with 6 to 8 years of experience in the field. Your qualifications should include a degree in BE / B.Tech / MCA.

Your primary responsibilities will include demonstrating strong proficiency in SQL and Snowflake, writing complex joins and CTEs, and optimizing queries for performance. You will be expected to have expertise in ETL/ELT processes and data pipeline development, and a good understanding of data modeling concepts such as Star Schema and Snowflake Schema. In addition, you should have working knowledge of Azure services like ADF, Synapse, and Blob Storage, and exposure to Big Data tools and technologies. Your role will require a sound understanding of data structures, data analysis techniques, and the ability to implement end-to-end data solutions covering ingestion, storage, and processing.

Your skill set should also include strong SQL capabilities for data validation, profiling, and analysis, as well as experience in end-to-end application testing, validation, and verification. You must be capable of understanding business requirements and delivering data solutions accordingly. As a candidate, you should be able to translate business needs into data models that support long-term solutions. Reverse engineering of physical data models from databases and SQL scripts should be part of your expertise. You should be able to analyze data-related system integration challenges and propose suitable solutions. Furthermore, you will play a key role in setting the data architecture direction, including the data movement approach, architecture/technology strategy, and other data-related considerations to ensure business value.

This contractual/temporary position will have a duration of 6 months and will require in-person work at the specified location in Bhubaneswar. In addition to your compensation, you will also receive benefits such as cell phone reimbursement, paid sick time, and paid time off.
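As a rough illustration of the SQL skills this posting emphasizes (complex joins, CTEs, and query tuning on Snowflake), here is a minimal, hypothetical sketch using the snowflake-connector-python package. The table names, columns, and connection details are invented for the example and are not from the posting.

```python
# Hypothetical example: a CTE that deduplicates raw orders before joining to a
# customer dimension, run against Snowflake via snowflake-connector-python.
import snowflake.connector

QUERY = """
WITH latest_orders AS (   -- keep only the newest row per order_id
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY order_id ORDER BY updated_at DESC) AS rn
    FROM raw.orders
)
SELECT c.customer_id,
       c.region,
       COUNT(*)      AS order_count,
       SUM(o.amount) AS total_amount
FROM latest_orders o
JOIN dim.customer c ON c.customer_id = o.customer_id   -- join on the business key
WHERE o.rn = 1
GROUP BY c.customer_id, c.region
ORDER BY total_amount DESC;
"""

def main() -> None:
    # Placeholder connection parameters; real values would come from config/secrets.
    conn = snowflake.connector.connect(
        account="my_account", user="my_user", password="***",
        warehouse="ANALYTICS_WH", database="ANALYTICS", schema="PUBLIC",
    )
    try:
        cur = conn.cursor()
        cur.execute(QUERY)
        for row in cur.fetchmany(10):   # preview a few rows
            print(row)
    finally:
        conn.close()

if __name__ == "__main__":
    main()
```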

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Hyderabad, Telangana

On-site

About the Company

At Tide, we are dedicated to creating a business management platform that streamlines operations for small businesses, enabling them to save valuable time and resources. Our services include business accounts, banking solutions, and a range of integrated administrative tools spanning from invoicing to accounting. Established in 2017, Tide has garnered a user base of over 1 million small businesses globally, catering to SMEs in the UK, India, and Germany. Headquartered in central London, we also have offices in Sofia, Hyderabad, Delhi, Berlin, and Belgrade, with a team of more than 2,000 employees. Tide is on a trajectory of rapid growth, continuously venturing into new markets and products, and seeking individuals who are enthusiastic and motivated to join us in our mission to empower small businesses by helping them save time and resources.

About the Role

We are in search of an experienced Senior Data Engineer with exceptional skills in PySpark to join our ML/Data engineering team. This team's responsibilities encompass feature development, data quality assessments, deployment and integration of ML models with backend services, and enhancing the overall Tide platform. As a Senior Data Engineer, you will play a crucial role in designing, developing, and optimizing our upcoming data pipelines and platforms. Your tasks will involve working with extensive datasets, addressing intricate data challenges, and contributing to the creation of robust, scalable, and efficient data solutions that drive business value. This position presents an exciting opportunity for individuals who are passionate about big data technologies, performance optimization, and constructing resilient data infrastructure.

As a Data Engineer, You Will:
- Focus on Performance Optimization: Identify and resolve complex performance bottlenecks in PySpark jobs and Spark clusters, utilizing Spark UI, query plans, and advanced optimization techniques.
- Lead Design & Development: Spearhead the design and implementation of scalable, fault-tolerant ETL/ELT pipelines using PySpark for batch and real-time data processing.
- Collaborate on Data Modeling: Work alongside data scientists, analysts, and product teams to design efficient data models for analytical and operational use cases.
- Ensure Data Quality & Governance: Implement strong data quality checks, monitoring, and alerting mechanisms to maintain data accuracy, consistency, and reliability.
- Contribute to Architectural Decisions: Aid in shaping the data architecture strategy, assess new technologies, and implement best practices to enhance the data platform's capabilities.
- Uphold Best Practices: Promote engineering best practices, participate in code reviews, and mentor junior data engineers.
- Foster Collaboration: Work closely with cross-functional teams to deliver impactful data solutions.

Qualifications:
- Possess 8+ years of professional experience in data engineering, with a minimum of 4+ years focusing on PySpark development in a production environment.
- Demonstrate expert-level proficiency in PySpark, including Spark SQL, DataFrames, RDDs, and understanding of Spark's architecture.
- Showcase hands-on experience in optimizing PySpark performance, debugging slow jobs, and handling common issues in large datasets.
- Exhibit strong programming skills in Python, proficiency in SQL, and familiarity with data warehousing concepts.
- Prior experience with distributed data storage solutions and version control systems.
- Strong problem-solving abilities, attention to detail, and excellent communication skills.
- Hold a Bachelor's or Master's degree in Computer Science, Engineering, or a related field.

What We Offer:
- Competitive salary
- Health and life insurance for self and family
- OPD benefits
- Mental well-being support
- Learning and development budget
- WFH setup allowance
- Generous leave policy
- Stock options

Tide Ways of Working: At Tide, we embrace a flexible workplace model that accommodates both in-person and remote work to cater to the diverse needs of our teams. While we support remote work, we believe in the importance of face-to-face interactions to foster collaboration and team spirit, making our offices hubs for innovation and community building.

Tide is a Place for Everyone: We promote a transparent and inclusive environment where every voice is valued and heard. Your personal data will be handled by Tide for recruitment purposes in accordance with our Recruitment Privacy Notice.
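To make the performance-optimization angle of this role concrete, below is a minimal, hypothetical PySpark sketch of two common tuning techniques of the kind such postings describe: broadcasting a small dimension table to avoid a shuffle join, and right-sizing shuffle partitions. Table paths and column names are illustrative assumptions, not Tide's actual pipeline.

```python
# Hypothetical PySpark tuning sketch: broadcast join plus shuffle partition tuning.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("tuning-sketch")
    # Right-size shuffle partitions for the data volume instead of the default 200.
    .config("spark.sql.shuffle.partitions", "64")
    .getOrCreate()
)

# Illustrative inputs: a large fact table and a small dimension table.
events = spark.read.parquet("s3://example-bucket/events/")          # large
countries = spark.read.parquet("s3://example-bucket/dim_country/")  # small

# Broadcasting the small side avoids shuffling the large fact table.
enriched = events.join(F.broadcast(countries), on="country_code", how="left")

daily = (
    enriched
    .groupBy("event_date", "country_name")
    .agg(F.count("*").alias("events"), F.countDistinct("user_id").alias("users"))
)

# Partitioned output keeps downstream reads selective.
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/marts/daily_events/"
)
```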

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

As an MDM (Master Data Management) SaaS and On-Prem Engineer, you will be responsible for designing, implementing, and supporting robust MDM systems that ensure seamless integration between on-premises and cloud platforms. Your expertise in MDM tools, data modeling, integration, and governance will be crucial in managing complex data ecosystems effectively.

Your key responsibilities will include:
- Designing, implementing, and supporting MDM solutions across SaaS and on-premise platforms to maintain a unified view of master data.
- Configuring and customizing MDM tools to meet business requirements and ensure compatibility across environments.
- Developing and maintaining master data models for various domains such as customers, suppliers, products, and financial data.

You will also be tasked with implementing and managing integrations between SaaS MDM platforms and on-premise systems, developing data pipelines, APIs, and ETL workflows, and collaborating with IT and business teams to ensure seamless data flow and system interoperability. Additionally, you will be responsible for establishing and enforcing data governance policies, implementing data quality rules and tools, monitoring the performance of MDM systems, and optimizing workflows and processes for data ingestion, cleansing, and integration. Your role will also involve collaborating with stakeholders, providing training and support to end-users, ensuring compliance with data privacy regulations, maintaining comprehensive documentation of MDM processes, and identifying opportunities for process improvements and automation.

To qualify for this role, you should have at least 5 years of hands-on experience in MDM implementation and support, proficiency in MDM tools and technologies for both SaaS and on-premise environments, strong analytical and problem-solving skills, and excellent communication and collaboration abilities. Certifications in MDM tools, cloud platforms, and data governance or data quality would be advantageous.

Overall, this position requires a proactive individual with a deep understanding of MDM solutions and technologies, a commitment to data integrity and system performance, and the ability to effectively communicate technical concepts to non-technical stakeholders while collaborating with cross-functional teams to deliver successful MDM solutions.

Posted 1 week ago

Apply

7.0 - 11.0 years

0 Lacs

Indore, Madhya Pradesh

On-site

We are the leading global provider of managed services, cybersecurity, and business transformation for mid-market financial services organizations across the globe. Through our unmatched range of services, we provide stability, security, and improved business performance, freeing our clients from technology concerns and enabling them to focus on running their businesses. More than 1,000 customers worldwide, with over $3 trillion of assets under management, put their trust in us. We believe that success is driven by passion and purpose. Our passion for technology is only surpassed by our commitment to empowering our employees around the world.

We have an exciting opportunity for a Cloud Data Engineer. This full-time position is open for an experienced Senior Data Engineer who will support several of our clients' systems. Client satisfaction is our primary objective; all available positions are customer-facing, requiring excellent communication and people skills. A positive attitude, rigorous work habits, and professionalism in the workplace are a must. Fluency in English, both written and verbal, is required. This is an onsite role.

As a senior cloud data engineer with 7+ years of experience, you will have strong knowledge of and hands-on experience with Azure data services such as Azure Data Factory, Azure Synapse Analytics, Azure SQL Database, Azure Data Lake, Logic Apps, Apache Spark, Snowflake Data Warehouse, and Azure Fabric. Experience with Azure Databricks, Azure Cosmos DB, Azure AI, and developing cloud-based applications is good to have. You should be able to analyze problems and provide solutions; design, implement, and manage data warehouse solutions using Azure Synapse Analytics or similar technologies; migrate data from on-premises to the cloud; and demonstrate proficiency in data modeling techniques.

Your responsibilities include designing and developing ETL/ELT processes to move data between systems and transform data for analytics; applying strong programming skills in languages such as SQL, Python, or Scala; developing and maintaining data pipelines; working with at least one reporting tool such as Power BI or Tableau; working effectively in a team environment; communicating complex technical concepts to non-technical stakeholders; managing and optimizing databases; understanding business requirements and converting them into technical designs for implementation; performing analysis, developing, and testing code; designing and developing cloud-based applications using Python on a serverless framework; troubleshooting; creating, maintaining, and enhancing applications; working independently as an individual contributor; and following Agile methodology (SCRUM).

You should have experience in developing cloud-based data applications; hands-on experience in Azure data services, data warehousing, and ETL; an understanding of cloud architecture principles and best practices; experience developing pipelines using ADF and Synapse and migrating data from on-premises to the cloud; the ability to write complex SQL scripts and transformations; strong problem analysis skills; and knowledge of CI/CD pipelines, Python, and API Gateway. Product Management/BA experience is a nice-to-have.

Our culture is all about connection - connection with our clients, our technology, and most importantly with each other. In addition to working with an amazing team around the world, we also offer a competitive compensation package. If you believe you would be a great fit and are ready for your best job ever, we would like to hear from you. Love Your Job, Share Your Technology Passion, Create Your Future Here!

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Nagpur, Maharashtra

On-site

As a Data Architect at our company, you will be responsible for designing scalable data architectures for web-based platforms or cloud-native systems. Your role will involve hands-on experience with relational and NoSQL databases such as PostgreSQL, MongoDB, and Cassandra. Additionally, you will work with cloud-based data services, data pipelines, and orchestration tools like Azure Data Services, AWS, GCP, Apache Airflow, and Azure Data Factory.

In this role, you will have the opportunity to utilize your expertise in Big Data technologies including Spark, Kafka, and Delta Lake. A deep understanding of data modeling, ETL/ELT processes, and data lifecycle management will be crucial to your success in this position. Familiarity with cybersecurity, log/event data formats (e.g., syslog, JSON, STIX), and security telemetry is considered a strong advantage.

Your responsibilities will include defining the data architecture and strategy for the CMP, ensuring alignment with product requirements and security standards. You will design and implement data models, data flows, and integration patterns for structured, semi-structured, and unstructured data. Collaboration with DevOps, engineering, and security teams will be essential to build scalable data pipelines and ensure real-time and batch processing capabilities. Moreover, you will be expected to select and integrate appropriate data storage and analytics technologies such as relational databases, data lakes, NoSQL, and time-series databases. Ensuring compliance with data governance, privacy, and security best practices will be a key aspect of your role. You will also establish data quality frameworks, metadata management, and lineage tracking to support analytics and reporting use cases with robust data architecture foundations.

At our company, we offer a culture of caring where people come first. You will experience an inclusive culture of acceptance and belonging, building meaningful connections with collaborative teammates, supportive managers, and compassionate leaders. We are committed to your continuous learning and development, providing numerous opportunities to grow personally and professionally. You will have the chance to work on projects that matter, collaborating with clients globally to engineer impactful solutions. We believe in the importance of balance and flexibility, offering various career areas, roles, and work arrangements to help you achieve a work-life balance. As a high-trust organization, integrity is key, and you can trust GlobalLogic as a safe, reliable, and ethical global company. Join us in shaping the digital revolution, transforming businesses, and redefining industries through intelligent products, platforms, and services.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Delhi

On-site

You will be joining Brainwork Techno solutions Pvt. Ltd. as a GCP Data Engineer and leveraging your expertise in Data Engineering. Your responsibilities will include designing, developing, and implementing robust data pipelines using Python, SQL, BigQuery, and orchestration tools like Airflow. Your focus will be on building and optimizing data pipelines for efficient data ingestion, transformation, and loading while also automating data workflows to ensure data quality and reliability.

In addition, you will be designing and building data marts to support business intelligence and reporting needs, implementing data warehousing best practices, and optimizing data models and schemas for performance and scalability. You will play a crucial role in building business-critical reports, developing data visualizations and dashboards, and collaborating with stakeholders to deliver actionable insights.

Your role will also involve implementing data governance policies, ensuring data security and compliance, and managing data quality and metadata. You will participate in data migration projects, optimize GCP resources for cost efficiency and performance, and collaborate closely with business stakeholders to understand data requirements and provide effective solutions.

To excel in this role, you should have strong proficiency in BigQuery, experience with Cloud Storage, knowledge of orchestration tools like Cloud Composer (Airflow), proficiency in Python and SQL, an understanding of data warehousing concepts, experience with ETL/ELT processes, and knowledge of data modeling and data quality management. Excellent problem-solving and analytical skills, strong communication and collaboration abilities, and the capacity to work independently in a remote environment are essential for success in this position.
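As a hedged illustration of the pipeline style this posting describes (Python, SQL, BigQuery, Airflow/Cloud Composer), here is a minimal DAG sketch using the Google provider for Apache Airflow. The project, dataset, table, and SQL are placeholders invented for the example; older Airflow versions would use schedule_interval instead of schedule.

```python
# Hypothetical Airflow DAG: rebuild a small daily mart table in BigQuery.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

MART_SQL = """
CREATE OR REPLACE TABLE `example-project.marts.daily_orders` AS
SELECT order_date,
       country,
       COUNT(*)    AS orders,
       SUM(amount) AS revenue
FROM `example-project.staging.orders`
GROUP BY order_date, country
"""

with DAG(
    dag_id="daily_orders_mart",
    start_date=datetime(2024, 1, 1),
    schedule="0 3 * * *",    # run daily at 03:00
    catchup=False,
    tags=["example"],
) as dag:
    # Full rebuild of a small mart; larger tables would use an incremental MERGE instead.
    build_mart = BigQueryInsertJobOperator(
        task_id="build_daily_orders_mart",
        configuration={"query": {"query": MART_SQL, "useLegacySql": False}},
    )
```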

Posted 2 weeks ago

Apply

10.0 - 14.0 years

0 Lacs

Hyderabad, Telangana

On-site

The Data Product Analyst, as part of the global Digital Data & Analytics team at Opella, is entrusted with facilitating data-driven decision-making within the Manufacturing, Maintenance, External Manufacturing, and Product Portfolio domains. This role primarily involves designing semantic layers in Snowflake, generating insights through Power BI dashboards (KPIs) and ad-hoc reporting, and ensuring the accuracy and quality of data integration across various systems. Acting as a crucial intermediary between business functions, data engineers, and architects, the analyst plays a vital role in connecting business insights with enterprise data solutions.

Responsibilities include collaborating seamlessly with the Manufacturing, Maintenance, and Portfolio Business & Digital functional teams to comprehend business processes, data sources, and reporting requirements. The role also involves managing multiple scope items concurrently, accurately estimating efforts, and setting priorities effectively. Moreover, the analyst is responsible for designing and implementing semantic layers in Snowflake, developing ad-hoc reports, building domain-specific dashboards using Power BI, and ensuring data integrity and compliance within the M&S domain.

Additionally, the Data Product Analyst should possess a deep understanding of Manufacturing, Maintenance, External Manufacturing, and Product Portfolio business processes, including key KPIs in these domains. Proficiency in data architecture, data modeling, SAP modules (such as MM, PP, PM, PLM), MES systems, Power BI, Snowflake, and data integration collaboration is essential. The ideal candidate should have experience in system integration testing and UAT, and a bachelor's or master's degree in Business, Engineering, Computer Science, Data Science, or a related field. With at least 10 years of experience in data and analytics for manufacturing and supply chain roles, the candidate should also exhibit excellent soft skills like team collaboration, cultural sensitivity, service orientation, communication, initiative, and problem-solving abilities.

Join Opella to embark on a journey of challenging and purposeful work, empowered to develop consumer brands with passion and creativity. Be a part of a bold, collaborative, and inclusive culture where individuals can thrive and excel every day. Embrace the challenger spirit at Opella and contribute to simplifying self-care for a healthier society and planet. For more information and to explore career opportunities, visit www.opella.com/en/careers.

Posted 2 weeks ago

Apply

12.0 - 17.0 years

0 Lacs

Pune, Maharashtra

On-site

We are seeking a skilled Azure Databricks Architect with 12 to 17 years of experience in Python and SQL. As an Azure Databricks Architect at our company, you will be responsible for data architecture, data engineering, and analytics. You should have at least 5 years of hands-on experience with Azure Databricks, Apache Spark, and Delta Lake. Your proficiency in Azure Data Lake, Azure Synapse, Azure Data Factory, and Azure SQL is essential. Expertise in Python, Scala, and SQL for data processing, along with a deep understanding of data modeling, ETL/ELT processes, and distributed computing, are key requirements. Experience with CI/CD pipelines and DevOps practices in data engineering is also expected. Excellent communication and stakeholder management skills are crucial for this role. Azure certifications such as Azure Solutions Architect or Azure Data Engineer would be a plus.

Your responsibilities will include implementing ML/AI models in Databricks, utilizing data governance tools like Purview, and working with real-time data processing using Kafka, Event Hubs, or Stream Analytics.

You will enjoy a competitive salary and benefits, a culture focused on talent development, and opportunities to work with cutting-edge technologies. Employee engagement initiatives, annual health check-ups, and insurance coverage are also part of the benefits package.

Persistent Ltd. is committed to fostering diversity and inclusion in the workplace. We welcome applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. Hybrid work options, flexible working hours, and accessible facilities are available to support employees with diverse needs and preferences. Our inclusive environment aims to enable all employees to thrive while accelerating growth both professionally and personally, impacting the world in positive ways, and enjoying collaborative innovation with diversity and work-life wellbeing at the core.

If you are ready to unleash your full potential at Persistent, please contact pratyaksha_pandit@persistent.com. Persistent is an Equal Opportunity Employer that prohibits discrimination and harassment of any kind.
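Since the role calls for hands-on Delta Lake work on Azure Databricks, the sketch below shows one common pattern: an idempotent upsert (MERGE) of change records into a Delta table using PySpark and the delta-spark API. The table, path, and column names are assumptions made purely for illustration.

```python
# Hypothetical Delta Lake upsert on Databricks: merge incoming changes into a target table.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # on Databricks, a session already exists

# Incoming batch of changed customer rows (illustrative source path).
updates = spark.read.parquet(
    "abfss://raw@examplelake.dfs.core.windows.net/customers_delta/"
)

target = DeltaTable.forName(spark, "silver.customers")

(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()       # overwrite changed attributes
    .whenNotMatchedInsertAll()    # insert brand-new customers
    .execute()
)
```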

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Haryana

On-site

We are looking for an experienced Data Modeller with a specialization in designing and implementing data models for modern data platforms. Your role will entail a deep understanding of data modeling techniques, particularly in healthcare data structures, and expertise in the Databricks Lakehouse architecture. The ideal candidate will have a track record of successfully translating complex business requirements into efficient and scalable data models to support analytics and reporting needs.

As a Data Modeller, your main responsibility will be to design and implement logical and physical data models for our Databricks-based Modern Data Platform. Working closely with business stakeholders, data architects, and data engineers, you will create models that facilitate the migration from legacy systems to the Databricks Lakehouse architecture. Your focus will be on ensuring data integrity, performance, and compliance with healthcare industry standards.

Key Responsibilities
- Design and implement logical and physical data models for Databricks Lakehouse implementations
- Translate business requirements into efficient, scalable data models
- Create and maintain data dictionaries, entity relationship diagrams, and model documentation
- Develop dimensional models, data vault models, and other modeling approaches as necessary
- Support the migration of data models to the Databricks platform
- Collaborate with data architects to ensure alignment with overall data architecture
- Work with data engineers to implement and optimize data models
- Ensure compliance of data models with healthcare industry regulations and standards
- Implement best practices and standards for data modeling
- Provide guidance on data modeling techniques and approaches
- Participate in data governance initiatives and data quality assessments
- Stay updated with evolving data modeling techniques and industry trends

Qualifications
- Extensive experience in data modeling for analytics and reporting systems
- Strong knowledge of dimensional modeling, data vault, and other methodologies
- Experience with the Databricks platform and Delta Lake architecture
- Expertise in healthcare data modeling and industry standards
- Experience in migrating data models from legacy systems to modern platforms
- Strong SQL skills and familiarity with data definition languages
- Understanding of data governance principles and practices
- Experience with data modeling tools and technologies
- Knowledge of performance optimization techniques for data models
- Bachelor's degree in Computer Science, Information Systems, or related field; advanced degree preferred
- Professional certifications in data modeling or related areas

Technical Skills
- Data modeling methodologies (dimensional, data vault, etc.)
- Databricks platform and Delta Lake
- SQL and data definition languages
- Data modeling tools (erwin, ER/Studio, etc.)
- Data warehousing concepts and principles
- ETL/ELT processes and data integration
- Performance tuning for data models
- Metadata management and data cataloging
- Cloud platforms (AWS, Azure, GCP)
- Big data technologies and distributed computing

Healthcare Industry Knowledge
- Healthcare data structures and relationships
- Healthcare terminology and coding systems (ICD, CPT, SNOMED, etc.)
- Healthcare data standards (HL7, FHIR, etc.)
- Healthcare analytics use cases and requirements
- Optionally, healthcare regulatory requirements (HIPAA, HITECH, etc.)
- Clinical and operational data modeling challenges
- Population health and value-based care data needs

Personal Attributes
- Strong analytical and problem-solving skills
- Excellent attention to detail and focus on data quality
- Ability to translate complex business requirements into technical solutions
- Effective communication with both technical and non-technical stakeholders
- Collaborative approach to working with cross-functional teams
- Self-motivated and able to work independently
- Continuous learner staying current with industry trends

What We Offer
- Opportunity to design data models for cutting-edge healthcare analytics
- Collaborative and innovative work environment
- Competitive compensation package
- Professional development opportunities
- Work with leading technologies in the data space

This position requires a unique combination of data modeling expertise, technical knowledge, and understanding of the healthcare industry. The ideal candidate will have a proven track record of designing efficient data models and a passion for creating data structures that drive powerful analytics and insights.
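To ground the dimensional-modeling emphasis of this role, here is a minimal, hypothetical sketch of a star-schema pair of Delta tables defined through PySpark on a Databricks-style platform. The schema (an encounters fact with a patient dimension) and all names are illustrative assumptions, not a prescribed healthcare model.

```python
# Hypothetical star-schema DDL for a lakehouse: one dimension and one fact table, both Delta.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

spark.sql("""
CREATE TABLE IF NOT EXISTS gold.dim_patient (
    patient_sk      BIGINT,          -- surrogate key referenced by facts
    patient_id      STRING,          -- source/business identifier
    birth_date      DATE,
    gender          STRING,
    effective_from  DATE,            -- type-2 slowly changing dimension validity window
    effective_to    DATE
) USING DELTA
""")

spark.sql("""
CREATE TABLE IF NOT EXISTS gold.fact_encounter (
    encounter_sk    BIGINT,
    patient_sk      BIGINT,          -- foreign key to gold.dim_patient
    encounter_date  DATE,
    facility_code   STRING,
    diagnosis_code  STRING,          -- e.g. an ICD code
    total_charge    DECIMAL(12, 2)
) USING DELTA
PARTITIONED BY (encounter_date)
""")
```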

Posted 2 weeks ago

Apply

5.0 - 15.0 years

0 Lacs

Karnataka

On-site

As a Senior eCommerce Data Analyst Consultant with over 15 years of experience, you will play a crucial role in our data analytics organization. Your responsibilities will include conducting advanced data analysis, providing actionable insights, designing scalable data solutions, and guiding the strategic direction of our analytics capabilities. Leveraging your deep understanding of B2B eCommerce, you will drive significant business impact and empower data-driven decision-making across the company. In addition, you will mentor junior analysts and collaborate closely with engineering teams to build a robust and efficient data infrastructure.

Your key responsibilities will involve leading data analysis initiatives, spearheading complex projects such as ecommerce clickstream and user behavior analysis, and identifying strategic opportunities for influencing business decisions and product strategy. You will be tasked with designing and implementing scalable data models and data warehousing solutions within GCP BigQuery to support our growing analytics needs. Providing technical guidance and mentorship to junior data analysts will be essential in fostering their growth and development in data analysis techniques, tools, and best practices.

Collaborating with product management and engineering teams, you will define and prioritize data-related requirements for the product roadmap. You will work directly with the customer ecommerce product team to articulate and present analytical findings, as well as brainstorm ideas for the product roadmap. Advanced clickstream and user behavior analysis using Adobe Analytics will be a key part of your role, helping optimize conversion funnels and personalize user experiences. You will define frameworks for monitoring the performance of our ecommerce platform and key business metrics, proactively identifying areas for optimization and improvement.

Collaborating with data engineering teams, you will design, build, and maintain robust and reliable data pipelines that feed our analytics platforms. Advocating for data quality and governance, you will establish and enforce data quality standards and governance policies to ensure the accuracy, consistency, and integrity of our data assets. Additionally, you will research and evaluate emerging data analytics technologies and tools, recommending and implementing solutions that enhance our analytical capabilities and efficiency. Effectively communicating complex data insights and technical solutions to both technical and non-technical audiences, including senior leadership, through compelling visualizations and presentations will be a key aspect of your role.

To qualify for this position, you should hold a Bachelor's or Master's degree in Computer Science, Data Science, Statistics, Mathematics, Engineering, or a related quantitative field. You should have extensive experience as a Data Analyst, with a significant focus on ecommerce analytics and architectural responsibilities. Expert-level proficiency in Adobe Analytics, mastery of SQL, advanced programming skills in Python, and experience in designing and implementing data models and data warehousing solutions in a cloud environment are required. Strong analytical, communication, and interpersonal skills, along with the ability to mentor and guide junior team members, will be essential for success in this role.

If you are passionate about leveraging data analytics to drive business impact and are looking to join a dynamic team at LTIMindtree, apply now to be a part of our global technology consulting and digital solutions company.
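As a hedged sketch of the kind of clickstream and funnel analysis this role describes, here is a small Python example that runs a conversion-funnel query against BigQuery with the google-cloud-bigquery client. The project, dataset, table, and event names are invented placeholders; the posting itself references Adobe Analytics data, which would typically be exported into the warehouse first.

```python
# Hypothetical funnel query over a clickstream table in BigQuery.
from google.cloud import bigquery

FUNNEL_SQL = """
SELECT
  COUNTIF(event_name = 'product_view')  AS product_views,
  COUNTIF(event_name = 'add_to_cart')   AS add_to_carts,
  COUNTIF(event_name = 'purchase')      AS purchases,
  SAFE_DIVIDE(COUNTIF(event_name = 'purchase'),
              COUNTIF(event_name = 'product_view')) AS view_to_purchase_rate
FROM `example-project.analytics.clickstream_events`
WHERE event_date BETWEEN '2024-01-01' AND '2024-01-31'
"""

def main() -> None:
    client = bigquery.Client(project="example-project")  # uses application default credentials
    for row in client.query(FUNNEL_SQL).result():
        print(row["product_views"], row["purchases"], row["view_to_purchase_rate"])

if __name__ == "__main__":
    main()
```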

Posted 3 weeks ago

Apply

7.0 - 11.0 years

0 Lacs

Karnataka

On-site

As a Data Warehouse Developer, you will play a crucial role in designing, building, and enhancing our client's online platform. Your responsibilities will include researching, suggesting, and implementing new technology solutions in line with best practices and standards. You will also be accountable for ensuring the resiliency and availability of various products while actively contributing to the team's productivity.

Your expertise should encompass over 7 years of practical experience in designing and constructing functions and stored procedures using Oracle Data Integrator (ODI). You will be tasked with creating data warehouse schemas, including fact and dimension tables, and documenting them comprehensively. Collaborating with DBAs, you will develop and execute table creation scripts, analyze diverse data sources to establish data relationships as per Business Requirements Documents (BRDs), and possess a deep understanding of data quality, ETL/ELT processes, and common transformation patterns.

Furthermore, your role will involve designing and implementing ELT workflows to load data from source systems into staging environments, and subsequently into target models leveraging ODI. Conducting data validation using advanced SQL queries and data profiling techniques will be a key aspect of your responsibilities. Demonstrating a solid grasp of data governance concepts, tools, and best practices, you will be adept at data quality analysis, including assessing accuracy, completeness, and consistency through query composition.

Your skill set should encompass strong analytical capabilities, effective research skills, and adept problem-solving abilities. Excellent written and verbal communication skills are essential, along with the flexibility to work in rotational shifts. In return, you can look forward to working in a challenging and innovative environment that offers ample opportunities for learning and growth.

Posted 3 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

You will be responsible for designing, developing, and maintaining scalable data pipelines using Azure Databricks. Your role will involve building and optimizing ETL/ELT processes for structured and unstructured data, collaborating with data scientists, analysts, and business stakeholders, integrating Databricks with Azure Data Lake, Synapse, Data Factory, and Blob Storage, developing real-time data streaming pipelines, and managing data models and data warehouses. Additionally, you will optimize performance, manage resources, ensure cost efficiency, implement best practices for data governance, security, and quality, troubleshoot and improve existing data workflows, contribute to architecture and technology strategy, mentor junior team members, and maintain documentation.

To excel in this role, you should have a Bachelor's/Master's degree in Computer Science, IT, or a related field, along with 5+ years of Data Engineering experience (minimum 2+ years with Databricks). Strong expertise in Azure cloud services (Data Lake, Synapse, Data Factory), proficiency in Spark (PySpark/Scala) and big data processing, experience with Delta Lake, Structured Streaming, and real-time pipelines, strong SQL skills, an understanding of data modeling and warehousing, familiarity with DevOps tools like CI/CD, Git, Terraform, and Azure DevOps, and excellent problem-solving and communication skills are essential.

Preferred qualifications include Databricks certification (Associate/Professional), experience with machine learning workflows on Databricks, knowledge of data governance tools like Purview, experience with REST APIs, Kafka, and Event Hubs, and cloud performance tuning and cost optimization experience.

Join us to be a part of a supportive and collaborative team, work with a growing company in the exciting BI and Data industry, enjoy a competitive salary and performance-based bonuses, and have opportunities for professional growth and development. If you are interested in this opportunity, please send your resume to hr@exillar.com and fill out the form at https://forms.office.com/r/HdzMNTaagw.
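Because the posting highlights Structured Streaming and real-time pipelines on Databricks, the following is a minimal, hypothetical PySpark sketch that streams JSON files from a landing zone into a Delta table with checkpointing. The paths and schema are placeholder assumptions.

```python
# Hypothetical Structured Streaming job: land JSON events into a Delta table.
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, TimestampType, DoubleType

spark = SparkSession.builder.getOrCreate()

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_time", TimestampType()),
    StructField("user_id", StringType()),
    StructField("amount", DoubleType()),
])

events = (
    spark.readStream
    .schema(event_schema)   # streaming file sources need an explicit schema
    .json("abfss://landing@examplelake.dfs.core.windows.net/events/")
)

query = (
    events.writeStream
    .format("delta")
    .outputMode("append")
    .option("checkpointLocation",
            "abfss://bronze@examplelake.dfs.core.windows.net/_checkpoints/events/")
    .trigger(processingTime="1 minute")   # micro-batch every minute
    .toTable("bronze.events")             # assumes a metastore-backed target table
)

query.awaitTermination()
```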

Posted 3 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

Hyderabad, Telangana

On-site

You should possess 8-10 years of Data and Analytics experience, including at least 5 years in Azure Data Factory and a minimum of 2 years in Databricks. Your communication and presentation skills should be excellent. Mastery in Data Engineering and Data Analysis is required, along with a Databricks certification. You should have expertise in ETL/ELT processes, specifically in Azure Data Factory, Synapse Pipelines, and data transformation tasks like deduplication, filtering, and sorting. Proficiency in SQL and programming languages like Python is expected.

As part of your skill set, you should be proficient in Azure data engineering tools such as ADLS, SQL DB, Data Factory, Databricks, and Key Vault. Experience in job scheduling, automation, and orchestration using Azure Logic Apps, Functions, or any other ETL scheduler is essential. Designing and developing production data pipelines within a big data architecture using Python and Scala, along with a good understanding of Delta Lake, is crucial. Experience in building and delivering data analytics solutions using Azure is required.

You should be familiar with project delivery methodologies like Waterfall, Agile, and Scrum, and have experience in managing both internal and external stakeholders. Additionally, working experience with reporting tools like Power BI or Tableau will be an advantage.
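As a small, hedged illustration of the transformation tasks the posting lists (deduplication, filtering, sorting), here is a PySpark sketch that uses a window function to keep the latest record per key. The input path and column names are illustrative assumptions.

```python
# Hypothetical PySpark transformations: filter, deduplicate (latest record per key), sort.
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

raw = spark.read.parquet("abfss://raw@examplelake.dfs.core.windows.net/transactions/")

latest_per_txn = Window.partitionBy("transaction_id").orderBy(F.col("updated_at").desc())

clean = (
    raw
    .filter(F.col("amount") > 0)                              # drop invalid rows
    .withColumn("rn", F.row_number().over(latest_per_txn))    # rank duplicates by recency
    .filter(F.col("rn") == 1)                                 # keep only the newest version
    .drop("rn")
    .orderBy("transaction_date")                              # sort for downstream consumers
)

clean.write.mode("overwrite").parquet(
    "abfss://curated@examplelake.dfs.core.windows.net/transactions_clean/"
)
```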

Posted 3 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

Pune, Maharashtra

On-site

Aera Technology is revolutionizing enterprise decision-making. Our AI-driven platform, Aera Decision Cloud, integrates seamlessly with existing systems to digitize, augment, and automate critical business decisions in real time. Aera helps global enterprises transform decision-making, delivering millions of recommendations that have resulted in significant revenue gains and cost savings for some of the world's best-known brands.

We are looking for a Product Manager - Data to lead the evolution of our core Decision Intelligence capabilities. You will redefine how organizations harness data and AI to drive smarter, faster, and more sustainable decision-making. This is an exciting opportunity to be at the forefront of enterprise AI innovation, collaborating with a dynamic team in a fast-paced, startup-like environment. This role will be based in our Pune office.

Responsibilities

As a Product Manager, you will own the strategy, development, and execution of key platform components required for building a Decision Data Model which enables enterprises to build powerful AI-driven workflows.
- Lead product strategy & execution: Define and drive priorities, roadmap, and development efforts to maximize business value.
- Understand market needs: Research target users, use cases, and feedback to refine features and address customer pain points.
- Analyze the competitive landscape: Stay ahead of industry trends and competitors to inform product differentiation.
- Define product requirements: Work closely with designers and engineers to develop user-centric, scalable solutions.
- Collaborate cross-functionally: Partner with Customer Success, Engineering, and Executive teams to align on vision and priorities.
- Drive user adoption: Act as the go-to expert, ensuring internal teams are equipped with the knowledge and resources to enable customers.

About You

You are passionate - you are your product's biggest advocate, and its biggest critic. You will ceaselessly pursue excellence and do whatever it takes to deliver a product that users love and that delivers value. You are pragmatic - you know when to focus on nuanced details, and when to bring a more strategic perspective to the table. You love to learn - you continually gather new information, ideas, and feedback, and you seek to understand the root of an issue in order to identify an optimal solution. You are a master at communication and collaboration - not only can you communicate a compelling vision or a complex concept, but you also know how to motivate a team to collaborate around a problem and work toward a common goal.

Experience
- At least 2 years of B2B SaaS PM experience (mandatory).
- Experience in data infrastructure, AI/ML platforms, or enterprise data products.
- Knowledge of data modeling, SQL, and ETL/ELT processes.
- Knowledge of data quality, metadata management, data lineage, and observability is a plus.
- Bachelor's degree in Engineering/Computer Science or a related technical discipline.

If you share our passion for building a sustainable, intelligent, and efficient world, you're in the right place. Established in 2017 and headquartered in Mountain View, California, we're a Series D start-up, with teams in Mountain View, San Francisco (California), Bucharest and Cluj-Napoca (Romania), Paris (France), Munich (Germany), London (UK), Pune (India), and Sydney (Australia). So join us, and let's build this!

Benefits Summary

At Aera Technology, we strive to support our Aeranauts and their loved ones through different stages of life with a variety of attractive benefits and great perks. In addition to offering a competitive salary and company stock options, we have other great benefits available. You'll find comprehensive medical coverage, Group Medical Insurance, Term Insurance, Accidental Insurance, paid time off, maternity leave, and much more. We offer unlimited access to online professional courses for both professional and personal development, coupled with people manager development programs. We believe in a flexible working environment to allow our Aeranauts to perform at their best, ensuring a healthy work-life balance. When you're working from the office, you'll also have access to a fully-stocked kitchen with a selection of snacks and beverages.

Posted 3 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

Pune, Maharashtra

On-site

The ideal candidate for this position should have 8-12 years of experience and a strong understanding of, and hands-on experience with, Microsoft Fabric. You will be responsible for designing and implementing end-to-end data solutions on Microsoft Azure, including data lakes, data warehouses, and ETL/ELT processes. Your role will involve developing scalable and efficient data architectures to support large-scale data processing and analytics workloads. Ensuring high performance, security, and compliance within Azure data solutions will be a key aspect of this role.

You should have knowledge of techniques such as lakehouse and warehouse architectures, along with experience implementing them. Additionally, you will be required to evaluate and select appropriate Azure services like Azure SQL Database, Azure Synapse Analytics, Azure Data Lake Storage, Azure Databricks, Unity Catalog, and Azure Data Factory. Deep knowledge and hands-on experience with these Azure data services are essential.

Collaborating closely with business and technical teams to understand and translate data needs into robust and scalable data architecture solutions will be part of your responsibilities. You should also have experience with data governance, data privacy, and compliance requirements. Excellent communication and interpersonal skills are necessary for effective collaboration with cross-functional teams.

In this role, you will provide expertise and leadership to the development team implementing data engineering solutions. Working with Data Scientists, Analysts, and other stakeholders to ensure data architectures align with business goals and data analysis requirements is crucial. Optimizing cloud-based data infrastructure for performance, cost-effectiveness, and scalability will be another key responsibility.

Experience in programming languages like SQL, Python, and Scala is required. Hands-on experience with MS SQL Server, Oracle, or similar RDBMS platforms is preferred. Familiarity with Azure DevOps and CI/CD pipeline development is beneficial. An in-depth understanding of database structure principles and distributed data processing of big data batch or streaming pipelines is essential. Knowledge of data visualization tools such as Power BI and Tableau, along with data modeling and strong analytics skills, is expected. You should be able to convert OLTP data structures into a Star Schema, and ideally have DBT experience along with data modeling experience. A problem-solving attitude, self-motivation, attention to detail, and effective task prioritization are essential qualities for this role.

At Hitachi, attitude and aptitude are highly valued, as collaboration is key. While not all skills are required, experience with Azure SQL Data Warehouse, Azure Data Factory, Azure Data Lake, Azure Analysis Services, Databricks/Spark, Python or Scala, data modeling, Power BI, and database migration is desirable. Designing conceptual, logical, and physical data models using tools like ER Studio and Erwin is a plus.

Posted 3 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Jaipur, Rajasthan

On-site

Apply Digital is a global digital transformation partner for change agents. Leveraging expertise that spans Business Transformation Strategy, Product Design & Development, Commerce, Platform Engineering, Data Intelligence, Marketing Services, Change Management, and beyond, we enable our clients to modernize their organizations and deliver meaningful impact to their business and customers. Our 750+ team members have helped transform global companies like Kraft Heinz, NFL, Moderna, Lululemon, Dropbox, Atlassian, A+E Networks, and The Very Group. Apply Digital was founded in 2016 in Vancouver, Canada, and has grown to nine cities across North America, South America, the UK, and Europe.

At Apply Digital, we believe in a One Team approach, operating within a pod structure. Each pod combines senior leadership, subject matter experts, and cross-functional skill sets, all working within a common tech and delivery framework. The structure is supported by well-organized scrum and sprint cadences, ensuring teams release often and hold retrospectives to progress towards desired outcomes. Wherever Apply Digital operates globally, we envision a safe, empowered, respectful, and fun community for our people every single day. We work to embody our SHAPE values (smart, humble, active, positive, and excellent) to create a space for the team to connect, grow, and support each other to make a difference.

Apply Digital is a hybrid-friendly organization with remote options available if needed. The preferred candidate for the Analytics Implementation Consultant role should be based in, or within a location commutable to, the Delhi/NCR region of India, working in hours that overlap with the Eastern Standard Time zone (EST).

In the initial role, the Analytics Implementation Consultant will support Kraft Heinz, a global leader in consumer packaged foods. Apply Digital aims to drive Kraft Heinz's digital transformation through implementable strategies, cutting-edge technology, and data-driven innovation to enhance consumer engagement and maximize business value. The role involves designing, implementing, and maintaining digital analytics solutions in collaboration with developers, data engineers, and product teams to ensure scalable and reliable data collection. Expertise in digital analytics platforms, tag management systems, JavaScript, SQL, and data layers is required, along with strong English language proficiency and experience working with remote teams.

Responsibilities of the Analytics Implementation Consultant include supporting the development and implementation of robust analytics systems, collaborating with analysts and stakeholders to translate business problems into analytics solutions, QA testing data capture and reports, supporting the creation of presentations and recommendations, staying updated on technical trends and best practices, and contributing to the direction of the Data & Analytics Discipline.

The ideal candidate for this role should have strong proficiency in English, experience working with remote teams, 5+ years of analytics implementation experience, expertise with analytics platforms and tag management systems, front-end web development skills, experience with customer data platforms and mobile app analytics, an understanding of statistical analysis and machine learning concepts, familiarity with data modeling and architecture principles, and the ability to manage multiple projects concurrently. A bachelor's degree in Computer Science, Data Science, Analytics, or Engineering is required. Experience with optimization tools is a plus.

Apply Digital offers a hybrid-friendly work environment, comprehensive benefits including healthcare coverage and contributions to a Provident Fund, a gratuity bonus after five years of service, flexible PTO, engaging projects with international brands, an inclusive and safe workplace, generous training budgets, and a commitment to diversity, equity, and inclusion. Apply Digital values equal opportunity and nurtures an inclusive workplace where individual differences are recognized and celebrated. For more information, visit Apply Digital's Diversity, Equity, and Inclusion (DEI) page. Special needs or accommodations during the recruitment process can be requested by emailing india.careers@applydigital.com.

Posted 3 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

At PwC, our people in managed services focus on a variety of outsourced solutions and support clients across numerous functions. These individuals help organisations streamline their operations, reduce costs, and improve efficiency by managing key processes and functions on their behalf. They are skilled in project management, technology, and process optimization to deliver high-quality services to clients. Those in managed service management and strategy at PwC focus on transitioning and running services, along with managing delivery teams, programmes, commercials, performance, and delivery risk. Your work will involve continuous improvement and optimization of the managed services process, tools, and services.

We are seeking a highly skilled Technical Lead with expertise in TIBCO BusinessWorks, TIBCO BusinessEvents, TIBCO Data Virtualization, Enterprise Service Bus (ESB), and TIBCO Hawk technologies. The lead will manage the offshore team, coordinate with onsite tech leads, produce design documents and Excel trackers, and design and develop CEP and integration applications. Knowledge of or experience with TIBCO Data Virtualization is expected.

Responsibilities include:
- Design and development of complex event processing solutions using TIBCO BusinessEvents 5.x and above, including rules, rule functions, decision tables, and event streaming.
- Design and development of integration applications using TIBCO BusinessWorks 5.x and above, TIBCO EMS, Kafka, and Java programming technologies.
- Handling the complete SDLC life cycle: development, bug fixes, enhancements, and deployments.
- Providing technical leadership and guidance to a team of software engineers, ensuring the use of best practices and adherence to quality standards.
- Overseeing the end-to-end integration process, from initial design through to implementation and maintenance.
- Working closely with cross-functional teams to ensure seamless integration and deployment of solutions.
- Troubleshooting and resolving complex technical issues, providing expert-level support for TIBCO-based systems.
- Maintaining and updating technical documentation, including architecture diagrams, design specifications, and operational procedures.
- Monitoring system performance and recommending improvements to enhance efficiency and reliability.
- Staying informed about industry trends and emerging technologies in integration, recommending new tools and practices as appropriate.

Qualifications
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- 5+ years of experience in software engineering, with a focus on integration solutions using TIBCO BusinessWorks 5.x and above, TIBCO BusinessEvents 5.x and above, TIBCO EMS, and TIBCO Hawk.
- Experience in messaging solutions using TIBCO EMS, MQ, and other JMS providers.
- Experience in TIBCO Data Virtualization.
- Experience with or strong knowledge of Java and Kafka streaming is preferable.
- Proven experience in developing and implementing RESTful/SOAP services and other integration protocols.
- Demonstrated experience leading technical teams and managing complex integration projects.
- Excellent problem-solving skills with the ability to handle multiple tasks and projects simultaneously.
- Strong verbal and written communication skills, with the ability to effectively interact with technical and non-technical stakeholders.
- Familiarity with Agile/Scrum methodologies is a plus.

Preferred Qualifications
- Professional TIBCO certification(s); TIBCO BusinessWorks 5.x and above and TIBCO BusinessEvents 5.x and above are mandatory.
- Experience/knowledge with cloud platforms and services.
- Previous experience working in a globally distributed team environment.

Benefits
- Competitive salary and performance-based bonuses.
- Comprehensive healthcare benefits, including medical, dental, and vision coverage.
- Opportunities for professional development and career progression.
- Flexible working arrangements, including potential for remote work.

A career in our Managed Services team will provide you with an opportunity to collaborate with a wide array of teams to help our clients implement and operate new capabilities, achieve operational efficiencies, and harness the power of technology. Our Data, Testing & Analytics as a Service team brings a unique combination of industry expertise, technology, data management, and managed services experience to create sustained outcomes for our clients and improve business performance. We empower companies to transform their approach to analytics and insights while building your skills in exciting new directions. Have a voice at our table to help design, build, and operate the next generation of software and services that manage interactions across all aspects of the value chain.

The candidate is expected to demonstrate extensive knowledge and/or a proven record of success in the following areas:
- A minimum of 6 years of hands-on experience building advanced data warehousing solutions on leading relational or cloud platforms.
- A minimum of 3-5 years of Operate/Managed Services/Production Support experience.
- Extensive experience in developing scalable, repeatable, and secure data structures and pipelines to ingest, store, collect, standardize, and integrate data for downstream consumption by Business Intelligence systems, analytics modeling, data scientists, etc.
- Designing and implementing data pipelines to extract, transform, and load (ETL/ELT) data from various sources into data storage systems, such as data warehouses or data lakes.
- Experience in building efficient ETL/ELT processes using Informatica 10.x or above; Informatica BDM is a plus.
- Troubleshooting and resolving issues related to data quality, data load failures, and performance bottlenecks using Informatica and SQL/PL-SQL.
- Implementing data validation and cleansing procedures to ensure the quality, integrity, and dependability of the data.
- Improving the scalability, efficiency, and cost-effectiveness of data pipelines.
- Writing complex SQL queries for large-scale data extraction, transformation, and reporting.
- Implementing and maintaining data security and privacy measures, including access controls and encryption, to protect sensitive data.
- Advanced working SQL knowledge and experience with relational databases, query authoring (SQL), and working familiarity with a variety of databases.
- Experience in building and maintaining data governance solutions (data quality, metadata management, lineage, master data management, and data security) using industry-leading tools.
- Scaling and optimizing schemas and performance tuning of SQL and ETL/ELT pipelines in data lake and data warehouse environments.
- Hands-on experience with data analytics tools like Informatica, SSIS, Alteryx, etc.
- Experience with ITIL processes such as incident management, problem management, knowledge management, release management, Data DevOps, etc.
- Strong communication, problem-solving, quantitative, and analytical abilities.

Nice to have: Informatica PowerCenter certification.

Posted 3 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

karnataka

On-site

The Vontier Data & Analytics Hub is in search of an experienced Snowflake Data Engineer to become a valuable member of our team. As a Data Engineer, you will play a crucial role in the creation, enhancement, and upkeep of data pipelines and data models on the Snowflake cloud data platform. You will also be responsible for implementing best practices related to data quality, security, and performance. Collaboration with data analysts, data scientists, and business stakeholders to grasp data requirements and provide solutions is a key aspect of this role. Additionally, offering technical guidance and mentorship to junior data engineers and staying abreast of the latest trends and technologies in the data engineering field are part of your duties. To qualify for this position, you should hold a Bachelor's degree in computer science, engineering, or a related field along with a minimum of 5 years of experience in data engineering, preferably within a cloud environment. Proficiency in SQL and Python, familiarity with cloud platforms like AWS or Azure, and hands-on experience with the Snowflake data warehouse are essential requirements. You should also have expertise in ETL/ELT processes, data modeling, data warehousing concepts, and performance tuning in Snowflake. It would be advantageous if you possess certifications in Snowflake, experience with data visualization tools such as Power BI, and familiarity with finance, procurement, and manufacturing processes. Additionally, knowledge of MLOps and decision science applications utilizing Snowpark compute, Snowflake model registries, and Snowflake feature registries would be beneficial. Vontier (NYSE: VNT) is a global industrial technology company that integrates productivity, automation, and multi-energy technologies to cater to the needs of an evolving mobility ecosystem. With a culture focused on continuous improvement and innovation, Vontier provides a dynamic, innovative, and inclusive environment where personal growth, work-life balance, and collaboration are valued. Join us in our commitment to enabling the way the world moves by contributing to meaningful change and driving innovation personally and professionally. Let's navigate challenges and seize opportunities together at Vontier!
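As a purely illustrative sketch of the ELT work described, the snippet below shows one way an incremental MERGE into Snowflake could be driven from Python with the official connector. The account, credentials, schema, and table names are placeholders introduced for the example, not details from this posting.

# Sketch only: an incremental upsert step executed against Snowflake.
import snowflake.connector

MERGE_SQL = """
MERGE INTO analytics.dim_customer AS tgt
USING staging.customer_updates AS src
  ON tgt.customer_id = src.customer_id
WHEN MATCHED THEN UPDATE SET tgt.email = src.email, tgt.updated_at = src.updated_at
WHEN NOT MATCHED THEN INSERT (customer_id, email, updated_at)
  VALUES (src.customer_id, src.email, src.updated_at)
"""

def run_incremental_merge():
    # Connection details would normally come from a secrets manager, not literals.
    conn = snowflake.connector.connect(
        account="my_account",       # placeholder
        user="etl_service_user",    # placeholder
        password="***",             # placeholder
        warehouse="TRANSFORM_WH",
        database="ANALYTICS_DB",
        schema="ANALYTICS",
    )
    try:
        cur = conn.cursor()
        cur.execute(MERGE_SQL)
        print(f"Rows affected: {cur.rowcount}")
        cur.close()
    finally:
        conn.close()

Keeping the transformation in a single MERGE statement pushes the heavy lifting into the warehouse, which is the usual reason ELT is preferred over ETL on Snowflake.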

Posted 4 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

pune, maharashtra

On-site

We are looking for an experienced and skilled Azure Data Engineer to join our team at Creant for a contract-based position in Pune. As an Azure Data Engineer, you will be responsible for designing, developing, and implementing data analytics and data warehouse solutions using the Azure Data Platform. You will collaborate closely with business stakeholders, data architects, and technical teams to ensure efficient data integration, transformation, and availability. Your key responsibilities will include designing, developing, and implementing data warehouse and data analytics solutions leveraging the Azure Data Platform. You will create and manage data pipelines using Azure Data Factory (ADF) and Azure Databricks, and work extensively with Azure AppInsights, Dataverse, and PowerCAT Tools to ensure efficient data processing and integration. Additionally, you will implement and manage data storage solutions using Azure SQL Database and other Azure data services. Designing and developing Logic Apps and Azure Function Apps for data processing, orchestration, and automation will also be part of your role. You will collaborate with cross-functional teams to understand business requirements and translate them into scalable data solutions. Performing data validation, quality checks, and ensuring data consistency across systems will be essential. You will also be responsible for monitoring, troubleshooting, and optimizing data solutions for performance, scalability, and security, as well as preparing technical documentation and supporting project handover to operations teams. The primary skills required for this role include: - Strong experience as a Data Engineer with 6 to 10 years of relevant experience. - Expertise in Azure Data Engineering services such as Azure AppInsights, Dataverse, PowerCAT Tools, Azure Data Factory (ADF), Azure Databricks, Azure SQL Database, Azure Function Apps, and Azure Logic Apps. - Proficiency in ETL/ELT processes, data integration, and data migration. - Solid understanding of data warehouse architecture and data modeling principles. - Experience working on large-scale data platforms and handling complex data workflows. - Familiarity with Azure Analytics Services and related data tools. - Strong knowledge of SQL, Python, or Scala for data manipulation and processing. Preferred skills for this role include knowledge of Azure Synapse Analytics, Cosmos DB, and Azure Monitor, a good understanding of data governance, security, and compliance aspects, as well as strong problem-solving, troubleshooting, communication, and stakeholder management skills.
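For illustration only, the following PySpark fragment shows the kind of cleansing step an Azure Databricks notebook might perform after Azure Data Factory lands raw files in a data lake. The storage account, container paths, and column names are hypothetical, not anything specified by this role.

# Illustrative bronze-to-silver cleansing step on Azure Databricks.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_silver").getOrCreate()

raw = (spark.read
       .option("header", True)
       .csv("abfss://raw@examplelake.dfs.core.windows.net/orders/"))  # placeholder path

silver = (raw
          .withColumn("order_ts", F.to_timestamp("order_ts"))
          .withColumn("amount", F.col("amount").cast("double"))
          .dropDuplicates(["order_id"])
          .filter(F.col("order_id").isNotNull()))

(silver.write
 .format("delta")
 .mode("overwrite")
 .save("abfss://silver@examplelake.dfs.core.windows.net/orders/"))  # placeholder path

In a setup like this, ADF typically handles orchestration and movement while Databricks handles the transformation logic shown above.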

Posted 4 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

karnataka

On-site

As a Senior Python Engineer at our company, you will leverage your deep expertise in data engineering and API development to drive technical excellence and autonomy. Your primary responsibility will be leading the development of scalable backend systems and data infrastructure that power AI-driven applications across our platform. You will design, develop, and maintain high-performance APIs and microservices using Python frameworks such as FastAPI and Flask. Additionally, you will build and optimize scalable data pipelines, ETL/ELT processes, and orchestration frameworks, ensuring the utilization of AI development tools like GitHub Copilot, Cursor, or CodeWhisperer to enhance engineering velocity and code quality. In this role, you will architect resilient and modular backend systems integrated with databases like PostgreSQL, MongoDB, and Elasticsearch. Managing workflows and event-driven architectures using tools such as Airflow, Dagster, or Temporal.io will be essential, as you collaborate with cross-functional teams to deliver production-grade systems in cloud environments (AWS/GCP/Azure) with high test coverage, observability, and reliability. To be successful in this position, you must have at least 5 years of hands-on experience in Python backend/API development, a strong background in data engineering, and proficiency in AI-enhanced development environments like Copilot, Cursor, or equivalent tools. Solid experience with Elasticsearch, PostgreSQL, and scalable data solutions, along with familiarity with Docker, CI/CD, and cloud-native deployment practices is crucial. You should also demonstrate the ability to take ownership of features from idea to production. Nice-to-have qualifications include experience with distributed workflow engines like Temporal.io, background in AI/ML systems (PyTorch or TensorFlow), familiarity with LangChain, LLMs, and vector search tools (e.g., FAISS, Pinecone), and exposure to weak supervision, semantic search, or agentic AI workflows. Join us to build infrastructure for cutting-edge AI products and work in a collaborative, high-caliber engineering environment.,
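By way of example, a minimal FastAPI service of the sort described could look like the sketch below. It uses an in-memory store instead of PostgreSQL or Elasticsearch so it stays self-contained, and the resource name and fields are illustrative assumptions.

# Minimal FastAPI sketch of a backend CRUD endpoint.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="documents-api")

class Document(BaseModel):
    id: int
    title: str
    body: str

_STORE: dict[int, Document] = {}  # stand-in for a real database

@app.post("/documents", response_model=Document)
def create_document(doc: Document) -> Document:
    _STORE[doc.id] = doc
    return doc

@app.get("/documents/{doc_id}", response_model=Document)
def read_document(doc_id: int) -> Document:
    if doc_id not in _STORE:
        raise HTTPException(status_code=404, detail="document not found")
    return _STORE[doc_id]

# Run locally with: uvicorn main:app --reload  (assuming this file is main.py)

Swapping the in-memory dictionary for a repository backed by PostgreSQL or Elasticsearch is the usual next step in a production system of this shape.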

Posted 4 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

haryana

On-site

You are an experienced Data Modeller with specialized knowledge in designing and implementing data models for modern data platforms, specifically within the healthcare domain. Your expertise includes a deep understanding of data modeling techniques, healthcare data structures, and experience with Databricks Lakehouse architecture. Your role involves translating complex business requirements into efficient and scalable data models that support analytics and reporting needs. You will be responsible for designing and implementing logical and physical data models for Databricks Lakehouse implementations. Collaboration with business stakeholders, data architects, and data engineers is crucial to create data models that facilitate the migration from legacy systems to the Databricks platform while ensuring data integrity, performance, and compliance with healthcare industry standards. Key responsibilities include creating and maintaining data dictionaries, entity relationship diagrams, and model documentation. You will develop dimensional models, data vault models, and other relevant modeling approaches. Additionally, supporting the migration of data models, ensuring alignment with overall data architecture, and implementing data modeling best practices are essential aspects of your role. Your qualifications include extensive experience in data modeling for analytics and reporting systems, strong knowledge of dimensional modeling, data vault, and other methodologies. Proficiency in Databricks platform, Delta Lake architecture, healthcare data modeling, and industry standards is required. You should have experience in migrating data models from legacy systems, strong SQL skills, and understanding of data governance principles. Technical skills that you must possess include expertise in data modeling methodologies, Databricks platform, SQL, data definition languages, data warehousing concepts, ETL/ELT processes, performance tuning, metadata management, data cataloging, cloud platforms, big data technologies, and healthcare industry knowledge. Your knowledge should encompass healthcare data structures, terminology, coding systems, data standards, analytics use cases, regulatory requirements, clinical and operational data modeling challenges, and population health and value-based care data needs. Your educational background should include a Bachelor's degree in Computer Science, Information Systems, or a related field, with an advanced degree being preferred. Professional certifications in data modeling or related areas would be advantageous for this role.,
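To illustrate the dimensional modelling involved, the sketch below defines a generic patient dimension and encounter fact as Delta tables via PySpark SQL. The schema, entities, and columns are hypothetical healthcare examples, not any client's actual model.

# Hypothetical star-schema DDL for a Databricks Lakehouse, expressed through PySpark.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("dim_model_ddl").getOrCreate()

spark.sql("CREATE SCHEMA IF NOT EXISTS gold")

spark.sql("""
CREATE TABLE IF NOT EXISTS gold.dim_patient (
    patient_sk      BIGINT    COMMENT 'Surrogate key',
    patient_id      STRING    COMMENT 'Source system identifier',
    birth_date      DATE,
    gender_code     STRING,
    effective_from  TIMESTAMP,
    effective_to    TIMESTAMP,
    is_current      BOOLEAN   COMMENT 'Type-2 SCD current-row flag'
) USING DELTA
""")

spark.sql("""
CREATE TABLE IF NOT EXISTS gold.fact_encounter (
    encounter_sk    BIGINT,
    patient_sk      BIGINT    COMMENT 'FK to gold.dim_patient',
    encounter_date  DATE,
    encounter_type  STRING,
    total_charge    DECIMAL(12,2)
) USING DELTA
PARTITIONED BY (encounter_date)
""")

The Type-2 effective-dating columns on the dimension are one common way to preserve history during a legacy migration; a data vault approach would structure the same content differently.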

Posted 4 weeks ago

Apply

3.0 - 8.0 years

0 Lacs

noida, uttar pradesh

On-site

You should have a total of 8+ years of development/design experience, with a minimum of 3 years of experience in Big Data technologies on-prem and in the cloud. Proficiency in Snowflake and strong SQL programming skills are required. In addition, you should have strong experience with data modeling and schema design, as well as extensive experience using data warehousing tools like Snowflake, BigQuery, or Redshift. Experience with at least one BI tool such as Tableau, QuickSight, or Power BI is a must. You should also have strong experience implementing ETL/ELT processes and building data pipelines, including workflow management, job scheduling, and monitoring. A good understanding of data governance, security and compliance, data quality, metadata management, master data management, and data cataloging is essential. Moreover, you should have a strong understanding of cloud services (AWS or Azure), including IAM and log analytics. Excellent interpersonal and teamwork skills are necessary for this role, as is experience leading and mentoring other team members. Good knowledge of Agile/Scrum and strong communication skills are also required. Your day-to-day responsibilities will draw directly on the skills listed above. At GlobalLogic, we prioritize a culture of caring. From day one, you will experience an inclusive culture of acceptance and belonging, where you will have the chance to build meaningful connections with collaborative teammates, supportive managers, and compassionate leaders. We are committed to your continuous learning and development. You will learn and grow daily in an environment with many opportunities to try new things, sharpen your skills, and advance your career at GlobalLogic. GlobalLogic is known for engineering impact for and with clients around the world. As part of our team, you will have the chance to work on projects that matter and engage your curiosity and creative problem-solving skills. We believe in the importance of balance and flexibility. With many functional career areas, roles, and work arrangements, you can explore ways of achieving the perfect balance between your work and life. GlobalLogic is a high-trust organization where integrity is key. By joining us, you are placing your trust in a safe, reliable, and ethical global company. GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner to the world's largest and most forward-thinking companies. Since 2000, we have been at the forefront of the digital revolution, helping create innovative and widely used digital products and experiences. Today we continue to collaborate with clients in transforming businesses and redefining industries through intelligent products, platforms, and services.
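As a hedged illustration of the workflow management, scheduling, and monitoring this role mentions, a skeletal Airflow DAG might look like the following. The task logic, DAG name, and target warehouse are placeholders, and the schedule argument assumes Airflow 2.4 or later.

# Skeletal Airflow DAG: extract, load, and publish metrics on a daily schedule.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull incremental records from source systems")

def load_to_warehouse():
    print("COPY INTO the staging table, then MERGE into the warehouse model")

def publish_metrics():
    print("emit row counts and durations for monitoring and alerting")

with DAG(
    dag_id="daily_warehouse_load",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+ keyword
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_load = PythonOperator(task_id="load_to_warehouse", python_callable=load_to_warehouse)
    t_publish = PythonOperator(task_id="publish_metrics", python_callable=publish_metrics)

    t_extract >> t_load >> t_publish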

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

pune, maharashtra

On-site

You are a Sr. Data Engineer with over 7 years of experience, specializing in Data Engineering, Python, and SQL. You will be a part of the Data Engineering team in the Enterprise Data Insights organization, responsible for building data solutions, designing ETL/ELT processes, and managing the data platform to support various stakeholders across the organization. Your role is crucial in driving technology and data-led solutions to foster growth and innovation at scale. Your responsibilities as a Senior Data Engineer include collaborating with cross-functional stakeholders to prioritize requests, identify areas for improvement, and provide recommendations. You will lead the analysis, design, and implementation of data solutions, including constructing data models and ETL processes. Furthermore, you will foster collaboration with corporate engineering, product teams, and other engineering groups, while also leading and mentoring engineering discussions and advocating for best practices. To excel in this role, you should possess a degree in Computer Science or a related technical field and have a proven track record of over 5 years in Data Engineering. Your expertise should include designing and constructing ETL/ELT processes, managing data solutions within an SLA-driven environment, and developing data products and APIs. Proficiency in SQL/NoSQL databases, particularly Snowflake, Redshift, or MongoDB, along with strong programming skills in Python, is essential. Additionally, experience with columnar OLAP databases, data modeling, and tools like dbt, Airflow, Fivetran, GitHub, and Tableau reporting will be beneficial. Good communication and interpersonal skills are crucial for effectively collaborating with business stakeholders and translating requirements into actionable insights. An added advantage would be a good understanding of Salesforce and NetSuite systems, experience in SaaS environments, designing and deploying ML models, and familiarity with events and streaming data. Join us in driving data-driven solutions and experiences to shape the future of technology and innovation.
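For a flavour of the SLA-driven operations this role describes, the sketch below shows a simple data-freshness check. It runs against SQLite purely so it is self-contained, and the table name, timestamp column, and SLA window are assumptions; in practice the same query would target Snowflake, Redshift, or MongoDB.

# Illustrative SLA freshness check for a warehouse table.
import sqlite3
from datetime import datetime, timedelta, timezone

FRESHNESS_SLA = timedelta(hours=6)  # assumed SLA window

def check_freshness(conn) -> bool:
    """Fail the check if the newest loaded row is older than the SLA window."""
    row = conn.execute("SELECT MAX(loaded_at) FROM fct_orders").fetchone()
    latest = datetime.fromisoformat(row[0])
    age = datetime.now(timezone.utc) - latest
    print(f"latest load: {latest.isoformat()} (age {age})")
    return age <= FRESHNESS_SLA

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE fct_orders (order_id INTEGER, loaded_at TEXT)")
    conn.execute("INSERT INTO fct_orders VALUES (1, ?)",
                 (datetime.now(timezone.utc).isoformat(),))
    print("within SLA" if check_freshness(conn) else "SLA breached: alert the on-call engineer")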

Posted 1 month ago

Apply

7.0 - 11.0 years

0 Lacs

karnataka

On-site

As a Data Engineering Lead specializing in AI for Data Quality & Analytics with 7 to 10 years of experience, you will play a crucial role in developing and maintaining high-quality data ingestion and validation processes from various upstream systems. Your responsibilities will encompass designing and implementing scalable data quality validation systems, developing AI-driven tools for anomaly detection, and leading the development of data pipelines and validation scripts. Additionally, you will collaborate with stakeholders to proactively address reporting gaps and ensure auditability of decisions derived from enriched data. Your expertise in Python, Alteryx, SQL, and cloud data platforms like Snowflake will be essential for this role, along with a deep understanding of data pipelines, ETL/ELT processes, and data validation best practices. Experience with AI/ML in data quality and familiarity with enterprise systems like Workday, Beeline, and Excel-based reporting are also required. Strong interpersonal and communication skills are necessary to collaborate effectively with executive stakeholders and distributed teams. In this position, you will have the opportunity to lead a small, distributed team, mentoring junior engineers and analysts while optimizing headcount through AI augmentation. Your leadership will enable team members to focus on higher-value initiatives and align system architecture with business needs. Preferred attributes include experience in leading data modernization or AI transformation projects and exposure to dashboard adoption challenges and enterprise change management. If you are a data engineering professional with a passion for data quality and analytics, and possess the requisite skills and experience, we invite you to send your updated resume to swetha.p@zettamine.com. Join our team in Bangalore and contribute to building AI-enhanced quality frameworks, scalable reporting solutions, and automated anomaly detection systems that drive business insights and decision-making processes.,
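As one hypothetical shape for the AI-driven anomaly detection this role mentions, the snippet below applies scikit-learn's IsolationForest to synthetic pipeline metrics. The features, values, and contamination setting are illustrative assumptions, not a description of the team's actual framework.

# Hypothetical anomaly detection over daily data-quality metrics.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

# Daily pipeline metrics: row counts and null rates per load (synthetic data).
rng = np.random.default_rng(42)
metrics = pd.DataFrame({
    "row_count": rng.normal(100_000, 2_000, 60).round(),
    "null_rate": rng.normal(0.01, 0.002, 60).clip(0),
})
# Inject one bad load: a half-empty file with a spike in null values.
metrics.loc[59] = [48_000, 0.12]

model = IsolationForest(contamination=0.05, random_state=0)
metrics["anomaly"] = model.fit_predict(metrics[["row_count", "null_rate"]])

flagged = metrics[metrics["anomaly"] == -1]
print(f"{len(flagged)} suspicious load(s) flagged for review:")
print(flagged)

A check like this would typically run after each ingestion and route flagged loads to an analyst rather than blocking them automatically, which keeps the enriched data auditable.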

Posted 1 month ago

Apply