
259 Data Pipelines Jobs - Page 5

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

2.0 - 6.0 years

2 - 6 Lacs

Hyderabad, Telangana, India

On-site

- Ensure data is cleansed, mapped, transformed, and optimized for storage and use based on business and technical requirements
- Design solutions using Microsoft Azure services and related tools
- Automate tasks and deploy production-standard code with unit testing, continuous integration, and version control
- Load transformed data into storage and reporting structures such as data warehouses, high-speed indexes, real-time reporting systems, and analytics platforms
- Build and manage data pipelines to consolidate and unify data
- Extract data, troubleshoot issues, and maintain the data warehouse
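For illustration, here is a minimal PySpark sketch of the cleanse-transform-load pattern this listing describes. The storage account, container paths, and column names are hypothetical; on Azure this would typically run in Databricks or Synapse with the appropriate ADLS credentials configured.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("cleanse_and_load").getOrCreate()

# Hypothetical source: raw CSV files landed in an ADLS container
raw = spark.read.option("header", True).csv(
    "abfss://raw@myaccount.dfs.core.windows.net/sales/"
)

# Cleanse and map per business rules: deduplicate, cast types, standardize codes
clean = (
    raw.dropDuplicates(["transaction_id"])
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .withColumn("region", F.upper(F.trim(F.col("region"))))
       .filter(F.col("amount").isNotNull())
)

# Load into a warehouse-friendly structure (Parquet here; a Synapse SQL pool in practice)
clean.write.mode("append").partitionBy("region").parquet(
    "abfss://curated@myaccount.dfs.core.windows.net/sales/"
)
```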

Posted 2 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

Thiruvananthapuram, Kerala

On-site

You should have a Bachelor's degree in computer science or data analytics along with at least 2 years of professional software development experience. You should be comfortable working in a collaborative, agile development environment and have proven experience in using data to drive insights and influence business decisions. Strong expertise in Python for solving data analytics-related challenges is essential. Additionally, hands-on experience with data visualization tools such as Matplotlib, Tableau, Power BI, or similar is required. A solid understanding of data pipelines, analysis workflows, and process automation is also necessary. Strong problem-solving skills and the ability to work in ambiguous, fast-paced environments are key qualities for this role.

Your responsibilities will include designing, developing, and maintaining data analytics tooling to monitor, analyze, and improve system performance and stability. You will use data to extract meaningful insights and translate them into actionable business decisions. Automating processes and workflows to enhance performance and customer experience will be part of your daily tasks. Collaboration with cross-functional teams such as engineering, product, and operations to identify and address critical issues using data is crucial. Creating intuitive and impactful data visualizations that simplify complex technical problems is also a key responsibility. Continuous evolution of analytics frameworks to support real-time monitoring and predictive capabilities is expected of you.

As an IC3-level professional, you will be part of Oracle, a world leader in cloud solutions. Oracle is committed to using tomorrow's technology to tackle today's challenges and has thrived for over 40 years by operating with integrity. The company values inclusivity and empowers all employees to contribute to innovation. Oracle offers global opportunities with a focus on work-life balance and provides competitive benefits, flexible medical, life insurance, and retirement options. Employees are encouraged to give back to their communities through volunteer programs. Oracle is dedicated to including people with disabilities at all stages of the employment process and provides accessibility assistance or accommodations upon request.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana

On-site

You have a total of 4-6 years of development/design experience, with a minimum of 3 years' experience in Big Data technologies, both on-prem and in the cloud. You should be proficient in Snowflake and possess strong SQL programming skills. The role requires strong experience with data modeling and schema design, as well as extensive experience using data warehousing tools such as Snowflake, BigQuery, or Redshift and BI tools such as Tableau, QuickSight, or Power BI (at least one from each category is a must-have). You must also have experience with orchestration tools like Airflow and the transformation tool dbt.

Your responsibilities will include implementing ETL/ELT processes and building data pipelines, along with workflow management, job scheduling, and monitoring. You should have a good understanding of data governance, security and compliance, data quality, metadata management, master data management, and data catalogs, as well as cloud services (AWS), including IAM and log analytics. Excellent interpersonal and teamwork skills are essential, along with experience leading and mentoring other team members. Good knowledge of Agile Scrum and strong communication skills are also required.

At GlobalLogic, the culture prioritizes caring and inclusivity. You'll join an environment where people come first, fostering meaningful connections with collaborative teammates, supportive managers, and compassionate leaders. Continuous learning and development opportunities are provided to help you grow personally and professionally. Meaningful work awaits you at GlobalLogic, where you'll have the chance to work on impactful projects and engage your curiosity and problem-solving skills. The organization values balance and flexibility, offering various career areas, roles, and work arrangements to help you achieve a healthy balance between work and life. GlobalLogic is a high-trust organization where integrity is key, ensuring a safe, reliable, and ethical global environment for all employees. Truthfulness, candor, and integrity are fundamental values upheld in everything GlobalLogic does.

GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner that collaborates with the world's largest and most forward-thinking companies. Leading the digital revolution since 2000, GlobalLogic helps create innovative digital products and experiences, transforming businesses and redefining industries through intelligent products, platforms, and services.
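As a sketch of the orchestration pattern this listing names (Airflow scheduling a dbt transformation), assuming a dbt project is already configured; the DAG id, script, and project path are hypothetical:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Hypothetical daily pipeline: extract to a warehouse stage, then run dbt models
with DAG(
    dag_id="daily_warehouse_refresh",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = BashOperator(
        task_id="extract_to_stage",
        bash_command="python extract_to_stage.py",  # placeholder extract script
    )
    transform = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/warehouse",  # placeholder path
    )
    extract >> transform  # run dbt models only after extraction succeeds
```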

Posted 2 weeks ago

Apply

7.0 - 11.0 years

0 Lacs

Haryana

On-site

As a Data Engineering Manager at GoKwik, you will have the exciting opportunity to lead a team of data engineers and collaborate closely with product managers, data scientists, business intelligence teams, and SDEs to design and implement data-driven strategies across the organization. You will be responsible for designing the overall data architecture that drives valuable insights for the company.

Your key responsibilities will include leading and guiding the data engineering team in developing optimal data strategies according to business needs, identifying and implementing process improvements to enhance data models, architectures, pipelines, and applications, ensuring data optimization processes, managing data governance, security, and analysis, and hiring and mentoring top talent within the team. Additionally, you will play a crucial role in managing data delivery through high-performing dashboards, visualizations, and reports, ensuring data quality and security across various product verticals, designing and launching new data models and pipelines, acting as a project manager for data projects, and fostering a data-driven culture within the team.

To excel in this role, you should possess a Bachelor's/Master's degree in Computer Science, Mathematics, or a relevant field, along with at least 7 years of experience in Data Engineering. Strong project management skills, proficiency in SQL and relational databases, experience in building data pipelines and architectures, familiarity with data transformation processes, and working knowledge of AWS cloud services are essential requirements.

We are seeking individuals who are independent, resourceful, analytical, and adept at problem-solving, with the ability to thrive in a fast-paced and dynamic environment. Excellent communication skills, both verbal and written, are crucial for effective collaboration within cross-functional teams. If you are looking to be part of a high-growth startup that values innovation, talent, and customer-centricity, and if you are passionate about tackling challenging problems and making a significant impact in an entrepreneurial setting, we invite you to join our team at GoKwik!

Posted 2 weeks ago

Apply

14.0 - 18.0 years

0 Lacs

Karnataka

On-site

We are hiring for the role of AVP - Databricks, requiring a minimum of 14 years of experience. The job location can be Bangalore, Hyderabad, NCR, Kolkata, Mumbai, or Pune.

As an AVP - Databricks, your responsibilities will include leading and managing Databricks-based project delivery to ensure that all solutions meet client requirements, best practices, and industry standards. You will serve as a subject matter expert (SME) on Databricks, providing guidance to teams on architecture, implementation, and optimization. Collaboration with architects and engineers to design optimal solutions for data processing, analytics, and machine learning workloads will also be part of your role. Additionally, you will act as the primary point of contact for clients, ensuring alignment between business requirements and technical delivery.

We are looking for a candidate with a Bachelor's degree in Computer Science, Engineering, or a related field (Master's or MBA preferred) and relevant years of experience in IT services, specifically in Databricks and cloud-based data engineering. Proven experience in leading end-to-end delivery and solution architecting of data engineering or analytics solutions on Databricks is a plus. Strong expertise in cloud technologies such as AWS, Azure, and GCP, data pipelines, and big data tools is desired. Hands-on experience with Databricks, Spark, Delta Lake, MLflow, and related technologies is required. An in-depth understanding of data engineering concepts, including ETL, data lakes, data warehousing, and distributed computing, will be beneficial in this role.

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

Join us as a Data Engineer, VP with a leading MNC in the banking domain. You will be the voice of our customers, using data to tell their stories and put them at the heart of all decision-making. We will look to you to drive the build of effortless, digital-first customer experiences. If you are ready for a new challenge and want to make a far-reaching impact through your work, this could be the opportunity you are looking for.

As a Data Engineer, you will simplify our organization by developing innovative data-driven solutions through data pipelines, modeling, and ETL design. You will strive for commercial success while keeping our customers and the bank's data safe and secure. Your role will involve driving customer value by understanding complex business problems and requirements in order to apply the most appropriate and reusable tools to gather and build data solutions. You will support our strategic direction by engaging with the data engineering community to deliver opportunities and by carrying out complex data engineering tasks to build a scalable data architecture.

Your responsibilities will include building advanced automation into data engineering pipelines through the removal of manual stages, embedding new data techniques into our business through role modeling, training, and experiment design oversight, delivering a clear understanding of data platform costs to meet your department's cost-saving and income targets, sourcing new data using the most appropriate tooling for the situation, and developing solutions for streaming data ingestion and transformation in line with our streaming strategy.

To thrive in this role, you will need a strong understanding of data usage and dependencies and experience in extracting value and features from large-scale data. You will also bring practical experience with programming languages alongside knowledge of data and software engineering fundamentals. Additionally, you will need experience of ETL technical design, data quality testing, cleansing, and monitoring; data sourcing, exploration, and analysis; and data warehousing and data modeling. A good understanding of modern code development practices, experience working in a governed and regulatory environment, and strong communication skills with the ability to proactively engage and manage a wide range of stakeholders are also required.
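A minimal Spark Structured Streaming sketch of the kind of streaming ingestion-and-transformation work described above, assuming the spark-sql-kafka package is available; the broker address, topic, and paths are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("stream_ingest").getOrCreate()

# Hypothetical Kafka source; broker, topic, and paths are placeholders
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "customer-events")
    .load()
)

# Kafka delivers bytes; cast the value to string before downstream parsing
parsed = events.select(F.col("value").cast("string").alias("payload"))

query = (
    parsed.writeStream.format("parquet")
    .option("path", "/data/events")
    .option("checkpointLocation", "/checkpoints/events")  # enables recovery on restart
    .start()
)
query.awaitTermination()
```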

Posted 2 weeks ago

Apply

3.0 - 8.0 years

0 Lacs

Hyderabad, Telangana

On-site

As an Artificial Intelligence Specialist at Gen AI, you will be responsible for driving customer conversations, understanding customer requirements, creating Gen AI solution architectures, and developing customer proposals and RFP responses. You will guide solution engineers in creating Gen AI POCs and solutions for various industry verticals. Your role will involve staying updated on the latest technology developments and industry best practices and incorporating them into Gen AI applications. Additionally, you will design and deploy Proofs of Concept (POCs) and Points of View (POVs) across different industry verticals to showcase the potential of Generative AI applications.

To qualify for this role, you should have at least 8 years of experience in software development, with a minimum of 3 years of experience in Generative AI solution development. A bachelor's degree or higher in Computer Science, Software Engineering, or a related field is required. You should be adept at critical thinking and logical reasoning, have a strong ability to learn new industry domains quickly, and be a team player who can deliver under pressure. Furthermore, you should have experience with cloud technologies such as Azure, AWS, or GCP, as well as a good understanding of NVIDIA or similar technologies. A solid appreciation of AI/ML concepts and sound design principles is necessary.

In terms of required skills, you should be extremely dynamic and enthusiastic about technology. Development experience with languages like C++, Java, JavaScript, HTML, C#, Python, or Node.js is preferred. You should be able to adapt quickly to new challenges and evolving technology stacks. Excellent written and verbal communication skills in English are essential, along with strong analytical and critical-thinking abilities. A customer-focused attitude, initiative, a self-driven nature, and the ability to learn quickly are also important qualities. Knowledge of Python, ML algorithms, statistics, source code maintenance and versioning tools, object-oriented programming concepts, and debugging is required.

Preferred skills for this position include at least 5 years of experience in ML development and MLOps. Strong programming skills in Python, knowledge of ML, data, and API libraries, and expertise in creating end-to-end data pipelines are advantageous. Experience with ML models, ModelOps/MLOps, AutoML, AI ethics, trust, and explainable AI, and popular ML frameworks like SparkML, TensorFlow, scikit-learn, XGBoost, H2O, etc., is beneficial. Familiarity with working in cloud environments (AWS, Azure, GCP) or containerized environments (Mesos, Kubernetes), interest in understanding functional and industry business challenges, and knowledge of the IT industry and GenAI use cases in insurance processes are preferred. Expertise in Big Data and data modeling is also desirable for this role.

Posted 2 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

You are a Senior Data Engineer with expertise in building scalable data pipelines using Microsoft Fabric. Your primary responsibilities will involve developing and managing data pipelines through Microsoft Fabric Data Factory and OneLake. You will design and create ingestion and transformation pipelines for both structured and unstructured data, and establish frameworks for metadata tagging, version control, and batch tracking to ensure the security, quality, and compliance of data pipelines. Additionally, you will contribute to CI/CD integration, observability, and documentation. Effective collaboration with data architects and analysts will be essential to align with business requirements.

To qualify for this role, you should possess at least 6 years of experience in data engineering, with a minimum of 2 years of hands-on experience with Microsoft Fabric or Azure Data services. Proficiency in tools like Azure Data Factory, Fabric, Databricks, or Synapse is required, as are strong SQL and data processing skills (such as PySpark and Python). Previous experience with data cataloging, lineage, and governance frameworks will be beneficial for this position.

Posted 2 weeks ago

Apply

7.0 - 11.0 years

0 Lacs

Haryana

On-site

As a Data Engineer at Srijan, a Material company, you will play a crucial role in designing and developing scalable data pipelines within Microsoft Fabric. Your primary responsibilities will include optimizing data pipelines, collaborating with cross-functional teams, and ensuring documentation and knowledge sharing. You will work closely with the Data Architecture team to implement scalable and governed data architectures within OneLake and Microsoft Fabric's unified compute and storage platform.

Your expertise in Microsoft Fabric will be used to build robust pipelines with both batch and real-time processing techniques, integrating with Azure Data Factory for seamless data movement. Continuous monitoring, enhancement, and optimization of Fabric pipelines, notebooks, and lakehouse artifacts will be essential to ensure performance, reliability, and cost-efficiency. You will collaborate with analysts, BI developers, and data scientists to deliver high-quality datasets and enable self-service analytics via Power BI datasets connected to Fabric Lakehouses. Maintaining up-to-date documentation for all data pipelines, semantic models, and data products, and sharing Fabric best practices with junior team members, will be an integral part of your role. Your expertise in SQL, data modeling, and cloud architecture design will be crucial in designing modern data platforms using Microsoft Fabric, OneLake, and Synapse.

To excel in this role, you should have at least 7 years of experience in the Azure ecosystem, with relevant experience in Microsoft Fabric, Data Engineering, and Data Pipelines components. Proficiency in Azure Data Factory, advanced data engineering skills, and strong collaboration and communication abilities are also required. Additionally, knowledge of Azure Databricks, Power BI integration, and DevOps practices, and familiarity with OneLake, Delta Lake, and Lakehouse architecture, will be advantageous. Join our awesome tribe at Srijan and leverage your expertise in Microsoft Fabric to build scalable solutions integrated with business intelligence layers, Azure Synapse, and other Microsoft data services.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

As an Analyst/Consultant in the Strategy & Consulting Global Network at Accenture's Marketing Analytics practice in Gurgaon, you will play a crucial role in helping clients grow their business through data-driven insights and analytics. You will be part of a global team of over 20,000 skilled professionals who excel in statistical tools, methods, and applications, working to provide analytically informed insights at scale.

Your responsibilities will include defining data requirements for the Data Driven Merchandizing capability; cleaning, aggregating, analyzing, and interpreting data; and conducting data quality analysis. With 3+ years of experience in Data Driven Merchandizing, specifically in pricing, promotions, and assortment optimization across retail clients, you will use your knowledge of price/discount elasticity estimation, non-linear optimization techniques, statistical time-series models, store clustering algorithms, and descriptive analytics to support the merch AI capability. You are also expected to have hands-on experience in state-space modeling, mixed-effect regression, and developing AI/ML models in the Azure ML tech stack. Your role will further involve managing data pipelines and data within the different layers of a Snowflake environment, and implementing scalable machine learning architectures. Proficiency in cloud platforms for deploying and maintaining machine learning models in production is essential.

Collaboration with the team and with consultants and managers is a key part of your role, along with creating insights presentations and client-ready decks. You should possess strong communication skills and the ability to mentor and guide junior resources. Logical thinking, analytical skills, and task management knowledge will be necessary for planning tasks, setting priorities, tracking progress, and reporting effectively.

At Accenture, you can expect continuous investment in your learning and growth, with opportunities to work with Data Driven Merchandizing experts and build your tech stack and certifications. You will gain a deep understanding of sound analytical decision-making and execute projects in the context of business performance improvement initiatives. If you are looking to leverage your expertise in analytics and make a significant impact on client outcomes, this role offers a dynamic and rewarding opportunity.
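As an illustration of the price-elasticity estimation mentioned above, one common approach is a log-log regression, where the coefficient on log price is the elasticity. This is a minimal sketch with synthetic data; the true elasticity of -1.4 is baked into the toy generator so the estimate can be checked:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical weekly sales: higher price -> fewer units, plus noise
rng = np.random.default_rng(42)
price = rng.uniform(8, 15, 104)
units = np.exp(6 - 1.4 * np.log(price) + rng.normal(0, 0.1, 104))
df = pd.DataFrame({"price": price, "units": units})

# Log-log OLS: the slope on log(price) is the price elasticity of demand
model = smf.ols("np.log(units) ~ np.log(price)", data=df).fit()
print("estimated elasticity:", model.params["np.log(price)"])  # approx -1.4
```

In practice a mixed-effect specification (also named in the listing) would add store- or product-level random effects on top of this baseline.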

Posted 2 weeks ago

Apply

7.0 - 11.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Senior Backend Engineer with 7 to 10 years of experience, you will be responsible for developing backend systems and APIs using Python. You should have a strong understanding of cloud platforms such as AWS or GCP and be proficient in CI/CD, Docker, and Linux. Your expertise in microservices architecture will be crucial for designing scalable and efficient systems. The ideal candidate will have hands-on experience with cloud platforms like GCP or AWS and a good command of Python programming. Additionally, familiarity with data pipeline tools such as Airflow or Netflix Conductor would be advantageous, and experience with Apache Spark/Beam and Kafka will be considered a plus.

Key Skills:
- Python programming
- Backend/API development
- Cloud experience with AWS or GCP
- CI/CD, Docker, Linux
- Microservices architecture

Nice to have:
- Experience with data pipelines (Airflow, Netflix Conductor)
- Knowledge of Apache Spark/Beam, Kafka

In this role, you will play a vital part in building and maintaining robust backend systems that power our applications. Your contributions will directly impact the scalability and performance of our services, making this an exciting opportunity for someone passionate about backend development and cloud technologies.
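The listing does not name a framework, but as one minimal sketch of Python backend/API development in a microservice style, here is an endpoint using FastAPI (a common choice); the route, model, and queueing comment are hypothetical:

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Event(BaseModel):
    user_id: str
    payload: dict

@app.post("/events")
async def ingest(event: Event) -> dict:
    # In a real microservice this would enqueue to Kafka or another broker
    return {"status": "accepted", "user_id": event.user_id}
```

Run locally with `uvicorn main:app --reload`; the validated request body and automatic OpenAPI docs are what make this style popular for small services.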

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

At PwC, our team in managed services focuses on providing a variety of outsourced solutions and supporting clients across multiple functions. We help organizations streamline their operations, reduce costs, and enhance efficiency by managing key processes and functions on their behalf. Our team is skilled in project management, technology, and process optimization, ensuring the delivery of high-quality services to our clients. Those in managed service management and strategy at PwC are responsible for transitioning and running services and for managing delivery teams, programs, commercials, performance, and delivery risk. Your role will involve continuous improvement and optimization of managed services processes, tools, and services.

As a member of our team, you will build meaningful client relationships and learn how to manage and inspire others. You will navigate complex situations, develop your personal brand, deepen your technical expertise, and leverage your strengths. Anticipating the needs of your teams and clients, you will deliver quality results. Embracing ambiguity, you will be comfortable when the path forward is unclear, asking questions and using such moments as opportunities for growth.

Required skills, knowledge, and experience for this role include, but are not limited to:
- Responding effectively to the diverse perspectives, needs, and feelings of others
- Using a broad range of tools, methodologies, and techniques to generate new ideas and solve problems
- Applying critical thinking to break down complex concepts
- Understanding the broader objectives of your project or role and how your work aligns with the overall strategy
- Developing a deeper understanding of the business context and its changing dynamics
- Using reflection to enhance self-awareness, strengths, and development areas
- Interpreting data to derive insights and recommendations
- Upholding and reinforcing professional and technical standards, along with the Firm's code of conduct and independence requirements

As a Senior Associate, you will work collaboratively with a team of problem solvers, addressing complex business issues from strategy to execution through Data, Analytics & Insights skills. Your responsibilities at this level include:
- Using feedback and reflection to enhance self-awareness, build on personal strengths, and address development areas
- Demonstrating critical thinking and the ability to structure unstructured problems
- Reviewing deliverables for quality, accuracy, and relevance
- Adhering to SLAs, incident management, change management, and problem management
- Leveraging tools effectively in different situations and explaining the rationale behind the choices
- Seeking opportunities for exposure to diverse situations, environments, and perspectives
- Communicating straightforwardly and structurally to influence and connect with others
- Demonstrating leadership by engaging directly with clients and leading engagements
- Collaborating in a team environment with client interactions, workstream management, and cross-team cooperation
- Contributing to cross-competency work and Center of Excellence activities
- Managing escalations and risks effectively

Position Requirements:
- Primary skills: Tableau, visualization, Excel
- Secondary skills: Power BI, Cognos, Qlik, SQL, Python, Advanced Excel, Excel macros

BI Engineer Role:
- Minimum 5 years of hands-on experience building advanced data analytics
- Minimum 5 years of hands-on experience delivering managed data and analytics programs
- Extensive experience developing scalable, repeatable, and secure data structures and pipelines
- Proficiency in industry tools like Python, SQL, and Spark for data analytics
- Experience building data governance solutions using leading tools
- Knowledge of data consumption patterns and BI tools like Tableau, Qlik Sense, and Power BI
- Strong communication, problem-solving, quantitative, and analytical abilities

Certifications in Tableau and other BI tools are advantageous, along with certifications in any cloud platform.

In our Managed Services - Data, Analytics & Insights team at PwC, we focus on collaborating with clients to leverage technology and human expertise, delivering simple yet powerful solutions. Our goal is to enable clients to focus on their core business while trusting us as their IT partner. We are driven by the passion to enhance our clients' capabilities every day. Within our Managed Services platform, we offer integrated services grounded in industry experience and powered by top talent. Our team of global professionals, combined with cutting-edge technology, ensures effective outcomes that add value to our clients' enterprises. Through a consultative approach, we enable transformational journeys that drive sustained client outcomes, allowing clients to focus on accelerating their priorities and optimizing their operations.

As a member of our Data, Analytics & Insights Managed Service team, you will contribute to critical Application Evolution Service offerings, help desk support, enhancement and optimization projects, and strategic roadmap development. Your role will involve technical expertise and relationship management to support customer engagements effectively.

Posted 2 weeks ago

Apply

12.0 - 16.0 years

0 Lacs

Karnataka

On-site

As a Senior Data Modeller, you will be responsible for leading the design and development of conceptual, logical, and physical data models for enterprise and application-level databases. Your expertise in data modeling, data warehousing, and data governance, particularly in cloud environments, Databricks, and Unity Catalog, will be crucial to the role. You should have a deep understanding of business processes related to master data management in a B2B environment and experience with data governance and data quality concepts.

Your key responsibilities will include designing and developing data models, translating business requirements into structured data models, defining and maintaining data standards, collaborating with cross-functional teams to implement models, analyzing existing data systems for optimization, creating entity-relationship diagrams and data flow diagrams, supporting data governance initiatives, and ensuring compliance with organizational data policies and security requirements.

To be successful in this role, you should have at least 12 years of experience in data modeling, data warehousing, and data governance. Strong familiarity with Databricks, Unity Catalog, and cloud environments (preferably Azure) is essential. Additionally, you should possess a background in data normalization, denormalization, dimensional modeling, and schema design, along with hands-on experience with data modeling tools like ERwin. Experience in Agile or Scrum environments; proficiency in integration, databases, data warehouses, and data processing; and a track record of successfully selling data and analytics software to enterprise customers are key requirements. Your technical expertise should cover Big Data, streaming platforms, Databricks, Snowflake, Redshift, Spark, Kafka, SQL Server, PostgreSQL, and modern BI tools. Your ability to design and scale data pipelines and architectures in complex environments, along with excellent soft skills including leadership, client communication, and stakeholder management, will be valuable assets in this role.

Posted 2 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

Karnataka

On-site

At EY, you will be part of a globally connected powerhouse of diverse teams that will shape your future with confidence. Your career will have the opportunity to grow in innovative ways as you contribute to cutting-edge projects on the Azure platform. As an Azure ML and Python Developer, you will play a crucial role in designing and implementing data pre-processing, feature engineering, and model training pipelines to ensure model performance and reliability in production environments.

Your responsibilities will include developing and deploying machine learning models on the Azure cloud platform using the Python programming language and Azure ML services. You will collaborate with data scientists and business stakeholders to understand requirements and translate them into technical solutions. Additionally, you will design and implement scalable data pipelines for model training and inference, maintain APIs on the Azure cloud platform, and monitor and optimize model performance.

To excel in this role, you should have a bachelor's degree in computer science, data science, or a related field, along with 6-8 years of experience in machine learning, data engineering, and cloud computing, with a focus on Azure services. Proficiency in the Python programming language, Azure ML services, and cloud infrastructure is essential. You should also possess strong analytical and problem-solving skills, excellent communication abilities, and a proactive mindset to thrive in a fast-paced environment.

As part of the EY team, you will have the opportunity to work on inspiring and meaningful projects, receive support and coaching from engaging colleagues, and develop new skills to progress your career. You will be encouraged to take ownership of your personal development with an individual progression plan and to contribute to a collaborative and interdisciplinary environment that values high quality and knowledge exchange. Join EY to be part of building a better working world, where you can shape the future with confidence, develop answers to pressing issues, and contribute to creating new value for clients, people, society, and the planet using data, AI, and advanced technology.
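As a sketch of how training jobs like the ones described here are commonly launched with the Azure ML Python SDK v2; the subscription, workspace, compute target, environment, and script names are all placeholders:

```python
from azure.ai.ml import MLClient, command
from azure.identity import DefaultAzureCredential

# Placeholder workspace coordinates
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace>",
)

# Submit a training script as a command job on a named compute cluster
job = command(
    code="./src",                                  # folder containing train.py
    command="python train.py --epochs 10",
    environment="azureml:my-sklearn-env@latest",   # hypothetical registered environment
    compute="cpu-cluster",                         # hypothetical compute target
    display_name="train-model",
)
ml_client.jobs.create_or_update(job)
```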

Posted 2 weeks ago

Apply

5.0 - 10.0 years

15 - 25 Lacs

Pune, Bengaluru, Mumbai (All Areas)

Work from Office

Role & responsibilities:
- Develop, test, and deploy robust dashboards and reports in Power BI using SAP HANA and Snowflake datasets

Basic Qualifications:
- Excellent verbal and written communication skills
- 5+ years of experience working with Power BI on SAP HANA and Snowflake datasets
- 5+ years of hands-on experience developing moderate to complex ETL data pipelines is a plus
- 5+ years of hands-on experience resolving complex SQL query performance issues
- 5+ years of ETL Python development experience; experience parallelizing pipelines is a plus
- Demonstrated ability to troubleshoot complex query, pipeline, and data quality issues

Contact: call 9584022831 or email Mayank@axiomsoftwaresolutions.com
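One hedged sketch of the pipeline parallelization the qualifications mention, using only Python's standard library to extract several source tables concurrently; the extract function and table list are hypothetical stand-ins for real SAP HANA or Snowflake pulls:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def extract_table(table: str) -> str:
    # Placeholder: in practice, pull from SAP HANA or Snowflake and land to staging
    print(f"extracting {table}")
    return table

tables = ["sales", "inventory", "customers", "products"]  # hypothetical sources

# Threads suit I/O-bound extracts; each table download overlaps with the others
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = {pool.submit(extract_table, t): t for t in tables}
    for fut in as_completed(futures):
        print(f"finished {fut.result()}")
```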

Posted 2 weeks ago

Apply

4.0 - 5.0 years

3 - 12 Lacs

Bengaluru, Karnataka, India

On-site

Job Summary: The Snowflake Data Engineer is responsible for building and managing data pipelines and data warehousing solutions using the Snowflake platform. This role involves working with large datasets, ensuring data quality, and enabling scalable data integration and analytics across the organization.

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines using Snowflake
- Build and optimize data models, schemas, and data warehouses for performance and efficiency
- Perform data extraction, transformation, and loading (ETL/ELT) from various sources
- Integrate Snowflake with external tools, data sources, and cloud platforms
- Collaborate with data analysts, architects, and business teams to define data requirements
- Ensure data quality, integrity, and security across all data processes
- Monitor data pipelines and troubleshoot performance or data issues
- Automate workflows and optimize queries for cost and speed
- Maintain documentation for data structures, processes, and governance policies

Required Skills and Qualifications:
- Bachelor's degree in Computer Science, Data Engineering, or a related field
- 3+ years of experience in data engineering, with at least 1 year on Snowflake
- Proficiency in SQL and experience with Snowflake architecture and features
- Hands-on experience with ETL/ELT tools like Informatica, Matillion, dbt, or similar
- Experience working with cloud platforms like AWS, Azure, or GCP
- Strong understanding of data modeling, data warehousing, and performance tuning
- Good communication, problem-solving, and documentation skills

Preferred Qualifications:
- Snowflake SnowPro certification
- Experience with scripting languages like Python for data processing
- Familiarity with DevOps tools for CI/CD pipelines in data engineering
- Knowledge of data security, governance, and compliance standards
- Experience working in Agile or Scrum environments
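For illustration, a minimal Python sketch of one ELT load step described above: bulk-loading staged files into Snowflake with the snowflake-connector-python package. The account, credentials, stage, and table names are placeholders:

```python
import snowflake.connector

# All connection parameters and object names below are placeholders
conn = snowflake.connector.connect(
    account="<account_identifier>",
    user="<user>",
    password="<password>",
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="RAW",
)
try:
    cur = conn.cursor()
    # Bulk-load staged CSV files into a raw table (the "load" in ELT)
    cur.execute(
        "COPY INTO raw_orders "
        "FROM @landing_stage/orders/ "
        "FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)"
    )
    print(cur.fetchall())  # per-file load results for monitoring
finally:
    conn.close()
```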

Posted 2 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

Karnataka

On-site

At PwC, the focus in data and analytics is on leveraging data to drive insights and make informed business decisions, using advanced analytics techniques to help clients optimize their operations and achieve strategic goals. In data analysis at PwC, the emphasis is on applying advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. Skills in data manipulation, visualization, and statistical modeling play a crucial role in supporting clients in solving complex business problems.

Candidates with 4+ years of hands-on experience are sought for the position of Senior Associate in supply chain analytics. Successful candidates should possess proven expertise in supply chain analytics across domains such as demand forecasting, inventory optimization, logistics, segmentation, and network design. Hands-on experience with optimization methods such as linear programming, mixed-integer programming, and scheduling optimization is required. Proficiency in forecasting and machine learning techniques, along with a strong command of statistical modeling, testing, and inference, is essential, as is familiarity with GCP tools like BigQuery, Vertex AI, Dataflow, and Looker.

Required skills include building data pipelines and models for forecasting, optimization, and scenario planning; strong SQL and Python programming skills; experience deploying models in a GCP environment; and knowledge of orchestration tools like Cloud Composer (Airflow). Nice-to-have skills include familiarity with MLOps, containerization (Docker, Kubernetes), and orchestration tools, as well as strong communication and stakeholder engagement skills at the executive level.

The roles and responsibilities of the Senior Associate involve supporting analytics projects within the supply chain domain and driving the design, development, and delivery of data science solutions. Senior Associates are expected to interact with and advise consultants and clients as subject matter experts, conduct analysis using advanced analytics tools, and implement quality control measures to ensure deliverable integrity. Validating analysis outcomes, making presentations, and contributing to knowledge- and firm-building activities are also part of the role. The ideal candidate should hold a BE / B.Tech / MCA / M.Sc / M.E / M.Tech / Master's degree / MBA from a reputed institute.
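As a toy illustration of the linear-programming side of this role, here is a two-plant, two-DC transportation problem solved with SciPy; all costs, capacities, and demands are made up:

```python
from scipy.optimize import linprog

# Decision variables: x = [p1->d1, p1->d2, p2->d1, p2->d2] shipment quantities
cost = [4, 6, 5, 3]                          # per-unit shipping costs (hypothetical)

A_ub = [[1, 1, 0, 0],                        # plant 1 capacity
        [0, 0, 1, 1]]                        # plant 2 capacity
b_ub = [80, 70]

A_eq = [[1, 0, 1, 0],                        # DC 1 demand must be met exactly
        [0, 1, 0, 1]]                        # DC 2 demand must be met exactly
b_eq = [60, 50]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 4)
print(res.x, res.fun)  # optimal plan ships p1->d1 and p2->d2; total cost 390
```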

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a Data Engineer, your primary responsibility will be to design and develop robust ETL pipelines using Python, PySpark, and various Google Cloud Platform (GCP) services. You will build and optimize data models and queries in BigQuery to support analytics and reporting needs, and you will ingest, transform, and load structured and semi-structured data from diverse sources. Collaboration with data analysts, scientists, and business teams is essential to understand and address data requirements effectively.

Ensuring data quality, integrity, and security across cloud-based data platforms will be a key part of your role, as will monitoring and troubleshooting data workflows and performance issues. Automating data validation and transformation processes using scripting and orchestration tools will be a significant part of your day-to-day tasks.

Hands-on experience with Google Cloud Platform, particularly BigQuery, is crucial, along with proficiency in Python and/or PySpark and experience designing and implementing ETL workflows and data pipelines. A strong command of SQL and data modeling for analytics is essential. Familiarity with GCP services like Cloud Storage, Dataflow, Pub/Sub, and Composer is beneficial, as is an understanding of data governance, security, and compliance in cloud environments. Experience with version control using Git and agile development practices is advantageous for this role.
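A minimal sketch of the PySpark-to-BigQuery ETL pattern this role describes, assuming the spark-bigquery connector is on the classpath; the bucket, dataset, and column names are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Ingest semi-structured JSON from a hypothetical GCS bucket
raw = spark.read.json("gs://my-bucket/raw/orders/")

# Transform: deduplicate, parse timestamps, and drop invalid rows
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .filter(F.col("amount") > 0)
)

# Load into BigQuery; the indirect write method stages via a temporary GCS bucket
(clean.write.format("bigquery")
      .option("table", "analytics.orders")
      .option("temporaryGcsBucket", "my-temp-bucket")
      .mode("overwrite")
      .save())
```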

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Jaipur, Rajasthan

On-site

OneDose is revolutionizing medication management through advanced AI and data-driven solutions. The primary objective is to make every dose more intelligent, safe, and accessible at scale. Patients often face challenges such as cost constraints, availability issues, or allergies, resulting in missed medications. Addressing this multifaceted clinical and supply chain problem requires seamless data integration, real-time intelligence, and precise recommendations.

The responsibilities include integrating formulary data, supplier inventories, salt compositions, and clinical guidelines into a unified ontology; developing a clinical decision support system that offers automated suggestions; and deploying real-time recommendation pipelines using Foundry's Code Repositories and Contour (ML orchestration layer).

The role of Palantir Foundry Developer is a full-time, on-site position based in Jaipur. The key responsibilities involve constructing and managing data integration pipelines, creating analytical models, and enhancing data workflows using Palantir Foundry. Daily tasks encompass collaborating with diverse teams, troubleshooting data-related issues, and ensuring data quality and adherence to industry standards.

The ideal candidate should possess deep expertise in Palantir Foundry, ranging from data integration to operational app deployment. Demonstrated experience in constructing data ontologies, data pipelines (PySpark, Python), and production-grade ML workflows is essential. A solid grasp of clinical or healthcare data (medication data, EHRs, or pharmacy systems) is highly advantageous. The ability to design scalable, secure, and compliant data solutions for highly regulated environments is crucial, as is a strong passion for addressing impactful healthcare challenges through advanced technology. A Bachelor's degree in Computer Science, Data Science, or a related field is required.

Joining OneDose offers the opportunity to make a significant impact by enhancing medication accessibility and patient outcomes in India and globally. You will work with cutting-edge technologies like Palantir Foundry, advanced AI models, and scalable cloud-native architectures. The work environment promotes ownership, growth, innovation, and leadership, enabling you to contribute to shaping the future of healthcare.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

You are a talented and passionate RAG (Retrieval-Augmented Generation) Engineer with strong Python development skills, joining our AI/ML team in Bengaluru, India. Your role involves working on cutting-edge NLP solutions that integrate information retrieval techniques with large language models (LLMs). The ideal candidate will have experience with vector databases, LLM frameworks, and Python-based backend development.

In this position, your responsibilities will include designing and implementing RAG pipelines that combine retrieval mechanisms with language models, developing efficient and scalable Python code for LLM-based applications, and integrating with vector databases such as Pinecone, FAISS, and Weaviate. You will fine-tune and evaluate the performance of LLMs using various prompt engineering and retrieval strategies, collaborating with ML engineers, data scientists, and product teams to deliver high-quality AI-powered features. Additionally, you will optimize system performance and ensure the reliability of RAG-based applications.

To excel in this role, you must have strong proficiency in Python and experience building backend services and APIs, along with a solid understanding of NLP concepts, information retrieval, and LLMs. Hands-on experience with at least one vector database, familiarity with Hugging Face Transformers, LangChain, and LLM APIs, and experience with prompt engineering, document chunking, and embedding techniques are essential, as is good working knowledge of REST APIs, JSON, and data pipelines.

Preferred qualifications include a Bachelor's or Master's degree in Computer Science, Data Science, or a related field; experience with cloud platforms like AWS, GCP, or Azure; exposure to tools like Docker, FastAPI, or Flask; and an understanding of data security and privacy in AI applications.
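As a compact sketch of the retrieval step in a RAG pipeline like those described, using FAISS with sentence-transformers embeddings (one of several stacks the listing names); the documents and query are toy examples:

```python
import faiss
from sentence_transformers import SentenceTransformer

docs = [
    "Invoices are processed within 3 business days.",
    "Refunds require a signed approval form.",
    "Password resets are handled by the IT helpdesk.",
]

# Embed documents; normalizing makes inner product equal cosine similarity
model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = model.encode(docs, normalize_embeddings=True)

index = faiss.IndexFlatIP(doc_vecs.shape[1])
index.add(doc_vecs)

query_vecs = model.encode(["how do I get a refund?"], normalize_embeddings=True)
scores, ids = index.search(query_vecs, 2)  # retrieve top-2 passages

# The retrieved passages would be prepended to the LLM prompt as context
context = "\n".join(docs[i] for i in ids[0])
print(context)
```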

Posted 2 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

Karnataka

On-site

As an Azure ML and Python Dev - Senior 1/2 in EY GDS Consulting digital engineering, you will be responsible for designing and implementing data pre-processing, feature engineering, and model training pipelines. Your role will involve collaborating closely with data scientists to ensure model performance and reliability in production environments. Proficiency in Azure ML services, Python programming, and a strong background in machine learning are essential for this position, and you will have the opportunity to lead and contribute to cutting-edge projects on the Azure platform.

Your key responsibilities will include developing and deploying machine learning models on the Azure cloud platform, designing efficient data pipelines, collaborating with stakeholders, implementing best practices for ML development, and maintaining APIs for model deployment and integration with applications. Additionally, you will monitor and optimize model performance, participate in code reviews and troubleshooting, stay updated with industry trends, mentor junior team members, and contribute to innovation initiatives.

To qualify for this role, you must have a bachelor's or master's degree in computer science, data science, or a related field, along with 6-8 years of experience in machine learning, data engineering, and cloud computing. Strong communication skills, a proven track record of successful project delivery, and relevant certifications are highly desirable; experience with other cloud platforms and programming languages is a plus. Ideally, you will also possess the analytical ability to manage multiple projects simultaneously, familiarity with advanced ML techniques and frameworks, knowledge of cloud security principles, and experience with Big Data technologies.

Working at EY offers you the opportunity to work on inspiring projects, receive support and coaching from engaging colleagues, develop new skills, progress your career, and enjoy freedom and flexibility in your role. EY is dedicated to building a better working world by creating new value for clients, people, society, and the planet. With a focus on data, AI, and advanced technology, EY teams help clients shape the future with confidence and address pressing issues. Working across a full spectrum of services in assurance, consulting, tax, strategy, and transactions, EY teams provide services globally with an emphasis on high quality, knowledge exchange, and interdisciplinary collaboration.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

You will be responsible for building systems and APIs to collect, curate, and analyze data generated by biomedical detection dogs, devices, and patient data. Your immediate work will include developing APIs and backends to handle Electronic Health Record (EHR) data, time-series sensor streams, and sensor/hardware integrations via REST APIs. Additionally, you will work on data pipelines and analytics for physiological, behavioral, and neural signals, as well as machine learning and statistical models for biomedical and detection dog research. You will also be involved in web and embedded integrations connecting software to real-world devices.

To excel in this role, you should be familiar with domains such as signal processing, basic statistics, stream processing, online algorithms, databases (especially time-series databases like VictoriaMetrics and SQL databases including Postgres, SQLite, and DuckDB), computer vision, and machine learning. Proficiency in Python, C++, or Rust is essential: the stack is primarily Python, with some modules in Rust/C++ where necessary. Firmware development is done in C/C++ (or Rust), and if you work in C++/Rust you may need to expose a Python API using pybind11/PyO3.

Your responsibilities will involve developing data pipelines for real-time and batch processing, building robust APIs and backends for devices, research tools, and data systems, handling data transformations, storage, and querying for structured and time-series datasets, evaluating and enhancing ML models and analytics, and collaborating with hardware and research teams to derive insights from messy real-world data. The focus is on data integrity and correctness rather than brute-force scaling. If you enjoy creating reliable software and working with complex real-world data, we look forward to discussing this opportunity with you.

Key Skills: backend development, computer vision, data transformations, databases, analytics, data querying, C, Python, C++, signal processing, data storage, statistical models, API development, Rust, data pipelines, firmware development, stream processing, machine learning.
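As one small example of the signal-processing work described, band-pass filtering a noisy physiological signal with SciPy; the sampling rate, band edges, and signal are synthetic stand-ins for real sensor data:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250.0  # hypothetical sensor sampling rate in Hz
t = np.arange(0, 10, 1 / fs)

# Synthetic ~72 bpm (1.2 Hz) pulse buried in broadband noise
sig = np.sin(2 * np.pi * 1.2 * t) + 0.5 * np.random.randn(t.size)

# 4th-order Butterworth band-pass keeping 0.5-5 Hz (typical heart-rate band)
b, a = butter(4, [0.5, 5.0], btype="bandpass", fs=fs)

# filtfilt applies the filter forward and backward, giving zero phase distortion
filtered = filtfilt(b, a, sig)
```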

Posted 2 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

Pune, Maharashtra

On-site

As an Enterprise Architect, Data Integration and BI at Myers-Holum, you will be responsible for leading the strategic design, architecture, and implementation of enterprise data solutions, ensuring alignment with our clients' long-term business goals. Your role will involve developing and promoting the architectural vision for data integration, Business Intelligence (BI), and analytics solutions across various business functions and applications. You will design and build scalable, high-performance data warehouses and BI solutions for clients using cutting-edge cloud-based and on-premise technologies, and lead cross-functional teams in developing data governance frameworks, data models, and integration architectures that support seamless data flow across disparate systems. You will translate high-level business requirements into technical specifications, ensuring alignment with broader organizational IT strategies and compliance standards, and architect end-to-end data pipelines, data integration frameworks, and data governance models that enable the seamless flow of structured and unstructured data from multiple sources.

Furthermore, you will provide thought leadership in evaluating and recommending emerging technologies, tools, and best practices for data management, integration, and business intelligence. You will oversee the deployment and adoption of key enterprise data initiatives, engage with C-suite executives and senior stakeholders to communicate architectural solutions, and lead and mentor technical teams to foster a culture of continuous learning and innovation in data management, BI, and integration. Your role will involve conducting architectural reviews, providing guidance on best practices for data security, compliance, and performance optimization, and leading technical workshops, training sessions, and collaborative sessions with clients to ensure successful adoption of data solutions. You will contribute to the development of internal frameworks, methodologies, and standards for data architecture, integration, and BI, while staying up to date with industry trends and emerging technologies to continuously evolve the enterprise data architecture.

To qualify for this position, you should have 10+ years of relevant professional experience in data management, business intelligence, and integration architecture, including 6+ years of experience designing and implementing enterprise data architectures. You should possess expertise in cloud-based data architectures, proficiency in data integration tools, experience with relational databases, and a solid understanding of BI platforms. Strong business analysis, stakeholder management, and written and verbal communication skills, along with experience leading digital transformation initiatives, are also required.

At Myers-Holum, you will have the opportunity to collaborate with other curious minds, shape your future, positively influence change for customers, and discover your true potential. As part of our team, you will be encouraged to remain curious, humble, and resilient while contributing to our mission and operating principles. With over 40 years of experience, a strong internal framework, cutting-edge technology partners, and a focus on employee well-being and growth, Myers-Holum offers a rewarding and supportive environment for professional development and career advancement.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Data Engineer at our company, you will be responsible for handling ETL processes using PySpark, SQL, Microsoft Fabric, and other relevant technologies. You will collaborate with clients and stakeholders to understand data requirements and devise efficient data models and solutions, and you will optimize and tune existing data pipelines for better performance and scalability. Ensuring data quality and integrity throughout the data pipeline, documenting technical designs, processes, and procedures, staying updated on emerging technologies and best practices in data engineering, and contributing to CI/CD pipelines built with GitHub are also part of your responsibilities.

To qualify for this role, you should hold a Bachelor's degree in computer science, engineering, or a related field, along with a minimum of 3 years of experience in data engineering or a similar role. A strong understanding of ETL concepts and best practices is required, as is proficiency in Azure Synapse, Microsoft Fabric, and other data processing technologies. Experience with cloud-based data platforms such as Azure or AWS, knowledge of data warehousing concepts and methodologies, and proficiency in Python, PySpark, and SQL for data manipulation and scripting are also essential. Desirable qualifications include experience with data lake concepts, familiarity with data visualization tools like Power BI or Tableau, and certifications in relevant technologies such as Microsoft Certified: Azure Data Engineer Associate.

Our company offers various benefits, including group medical insurance, a cab facility, meals/snacks, and a continuous learning program. Stratacent is a global IT consulting and services firm with headquarters in Jersey City, NJ, global delivery centers in Pune and Gurugram, and offices in the USA, London, Canada, and South Africa. Specializing in Financial Services, Insurance, Healthcare, and Life Sciences, we assist our customers in their transformation journey by providing services in Information Security, Cloud Services, Data and AI, Automation, Application Development, and IT Operations. For more information, visit our website at http://stratacent.com.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a Data Engineer, you will be responsible for designing and developing robust ETL pipelines using Python, PySpark, and Google Cloud Platform (GCP) services. Your role will involve building and optimizing data models and queries in BigQuery for analytics and reporting purposes, and ingesting, transforming, and loading structured and semi-structured data from various sources. Collaboration with data analysts, scientists, and business teams to understand data requirements will be a key aspect of your job.

Ensuring data quality, integrity, and security across cloud-based data platforms is crucial, as is monitoring and troubleshooting data workflows and performance issues. Automating data validation and transformation processes using scripting and orchestration tools will also be an essential part of your role.

You are required to have hands-on experience with Google Cloud Platform, especially BigQuery, and strong programming skills in Python and/or PySpark. Experience designing and implementing ETL workflows and data pipelines and proficiency in SQL and data modeling for analytics are required. Familiarity with GCP services such as Cloud Storage, Dataflow, Pub/Sub, and Composer is preferred, along with an understanding of data governance, security, and compliance in cloud environments. Experience with version control tools like Git and agile development practices will be beneficial. If you are looking for a challenging opportunity to work on cutting-edge data engineering projects, this position is ideal for you.
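To illustrate the data-quality monitoring this listing mentions, a hedged sketch using the google-cloud-bigquery client to run a daily null-count check; the project, dataset, table, and column names are hypothetical:

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project

# Daily row counts plus null checks on a key column, newest days first
query = """
    SELECT DATE(order_ts) AS day,
           COUNT(*) AS n_rows,
           COUNTIF(amount IS NULL) AS null_amounts
    FROM `analytics.orders`
    GROUP BY day
    ORDER BY day DESC
    LIMIT 7
"""
for row in client.query(query).result():
    print(row.day, row.n_rows, row.null_amounts)
```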

Posted 2 weeks ago

Apply