
13,457 ETL Jobs - Page 8

JobPe aggregates job listings for easy access; applications are submitted directly on the original job portal.

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


At o9 Solutions, our mission is clear: be the Most Valuable Platform (MVP) for enterprises. With our AI-driven platform — the o9 Digital Brain — we integrate global enterprises’ siloed planning capabilities, helping them capture millions and, in some cases, billions of dollars in value leakage. But our impact doesn’t stop there. Businesses that plan better and faster also reduce waste, which drives better outcomes for the planet, too. We're on the lookout for the brightest, most committed individuals to join us on our mission. Along the journey, we’ll provide you with a nurturing environment where you can be part of something truly extraordinary and make a real difference for companies and the planet.

What you’ll do for us:
- Apply a variety of machine learning techniques (clustering, regression, ensemble learning, neural nets, time series, optimization, etc.), understanding their real-world advantages and drawbacks.
- Develop and/or optimize models for demand sensing/forecasting, optimization (heuristics, LP, GA, etc.), anomaly detection, simulation and stochastic models, market intelligence, etc.
- Use the latest advancements in AI/ML to solve business problems.
- Analyze problems by synthesizing complex information, evaluating alternate methods, and articulating the result with the relevant assumptions and reasons.
- Apply common business metrics (Forecast Accuracy, Bias, MAPE) and generate new ones as needed.
- Develop or optimize modules that call web services for real-time integration with external systems.
- Work collaboratively with clients, project management, solution architects, consultants, and data engineers to ensure successful delivery of o9 projects.

What you’ll have:
- Experience: 4+ years of experience in time series forecasting at scale using heuristic-based hierarchical best-fit models with algorithms like exponential smoothing, ARIMA, and Prophet, including custom parameter tuning. Experience in applied analytical methods in supply chain and planning, such as demand planning, supply planning, market intelligence, and optimal assortments/pricing/inventory. A statistical background is required.
- Education: Bachelor's degree in Computer Science, Mathematics, Statistics, Economics, Engineering, or a related field.
- Languages: Python and/or R for data science.
- Skills: Deep knowledge of statistical and machine learning algorithms; building scalable ML frameworks; identifying and collecting relevant input data; feature engineering, tuning, and testing.
- Characteristics: Independent thinkers with strong presentation and communication skills. We really value team spirit: transparency and frequent communication are key. At o9, this is not limited by hierarchy, distance, or function.

Nice to have:
- Experience with SQL, databases, and ETL tools or similar is optional but preferred.
- Exposure to distributed data/computing tools: Map/Reduce, Hadoop, Hive, Spark, Gurobi, or related big data technologies.
- Experience with deep learning frameworks such as Keras, TensorFlow, or PyTorch.
- Experience in implementing planning applications.
- Understanding of supply chain concepts.
- Master's degree in Computer Science, Applied Mathematics, Statistics, Engineering, Business Analytics, Operations, or a related field.

What we’ll do for you:
- Competitive salary, with stock options for eligible candidates.
- Flat organization with a very strong entrepreneurial culture (and no corporate politics).
- Great people and unlimited fun at work.
- Possibility to make a difference in a scale-up environment.
- Opportunity to travel onsite in specific phases depending on project requirements.
- Support network: work with a team you can learn from every day.
- Diversity: we pride ourselves on our international working environment.
- Work-Life Balance: https://youtu.be/IHSZeUPATBA?feature=shared
- Feel part of a team: https://youtu.be/QbjtgaCyhes?feature=shared

How the process works:
- Apply by clicking the button below.
- You’ll be contacted by our recruiter, who’ll fill you in on all things o9, give you some background about the role, and get to know you. They’ll contact you via video call or phone call, whichever you prefer.
- During the interview phase, you will meet with technical panels for 60 minutes. The recruiter will contact you after the interview to let you know if we’d like to progress your application. We will have two rounds of technical discussion followed by a hiring manager discussion.
- Our recruiter will let you know if you’re the successful candidate. Good luck!

More about us: With the latest increase in our valuation from $2.7B to $3.7B despite challenging global macroeconomic conditions, o9 Solutions is one of the fastest-growing technology companies in the world today. Our mission is to digitally transform planning and decision-making for the enterprise and the planet. Our culture is high-energy and drives us to aim 10x in everything we do. Our platform, the o9 Digital Brain, is the premier AI-powered, cloud-native platform driving the digital transformations of major global enterprises including Google, Walmart, AB InBev, Starbucks, and many others. Our headquarters are located in Dallas, with offices in Amsterdam, Paris, London, Barcelona, Madrid, Sao Paulo, Bengaluru, Tokyo, Seoul, Milan, Stockholm, Sydney, Shanghai, Singapore, and Munich. o9 is an equal opportunity employer and seeks applicants of diverse backgrounds and hires without regard to race, colour, gender, religion, national origin, citizenship, age, sexual orientation, or any other characteristic protected by law.
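
The forecasting requirements above (exponential smoothing, ARIMA, Prophet, and metrics such as MAPE and Bias) can be made concrete with a short sketch. The following Python snippet is illustrative only: the demand series is invented and this is not o9 code.

# Minimal sketch: fit Holt-Winters exponential smoothing and score the
# forecast with MAPE and Bias. Illustrative only; invented data.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Hypothetical monthly demand history.
demand = pd.Series(
    [120, 135, 128, 150, 165, 158, 172, 180, 175, 190, 205, 198],
    index=pd.date_range("2024-01-01", periods=12, freq="MS"),
)

train, actuals = demand[:-3], demand[-3:]
model = ExponentialSmoothing(train, trend="add").fit()
forecast = model.forecast(3)

# Business metrics named in the posting.
mape = np.mean(np.abs((actuals - forecast) / actuals)) * 100
bias = np.mean(forecast - actuals)   # positive means over-forecasting
print(f"MAPE: {mape:.1f}%  Bias: {bias:.1f}")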

Posted 1 day ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Proficiency in Talend ETL development and integration with Snowflake. Hands-on experience with IBM Data Replicator and Qlik Replicate. Strong knowledge of Snowflake database architecture and Type 2 SCD modeling. Expertise in containerized DB2, DB2, Oracle, and Hadoop data sources. Understanding of Change Data Capture (CDC) processes and real-time data replication patterns. Experience with SQL, Python, or shell scripting for data transformations and automation.

Tools/Skills: Talend, IBM Data Replicator, Qlik Replicate, SQL, Python

Skills: IBM, SQL, IBM Data Replicator, Snowflake database architecture, ETL development, DB2, Type 2 SCD modeling, shell scripting, Talend, Snowflake, Python, Change Data Capture (CDC), Hadoop, Qlik Replicate, data, Oracle
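
To illustrate the Type 2 SCD modeling this posting asks for, here is a minimal two-step expire-and-insert pattern against Snowflake using the Snowflake Python connector. The table and column names (dim_customer, stg_customer, and their attributes) are hypothetical, and a production Talend/CDC pipeline would add hash-based change detection and late-arriving-data handling.

# Sketch of a simplified Type 2 SCD load into Snowflake.
# Hypothetical table/column names; illustrative only.
import snowflake.connector

# Step 1: expire current rows whose attributes changed in staging.
EXPIRE_CHANGED = """
UPDATE dim_customer d
SET is_current = FALSE, valid_to = CURRENT_TIMESTAMP()
FROM stg_customer s
WHERE d.customer_id = s.customer_id
  AND d.is_current
  AND (d.name <> s.name OR d.segment <> s.segment)
"""

# Step 2: insert a fresh current version for new or just-expired keys.
INSERT_VERSIONS = """
INSERT INTO dim_customer
    (customer_id, name, segment, valid_from, valid_to, is_current)
SELECT s.customer_id, s.name, s.segment,
       CURRENT_TIMESTAMP(), NULL, TRUE
FROM stg_customer s
LEFT JOIN dim_customer d
  ON d.customer_id = s.customer_id AND d.is_current
WHERE d.customer_id IS NULL
"""

conn = snowflake.connector.connect(account="...", user="...", password="...")
cur = conn.cursor()
cur.execute(EXPIRE_CHANGED)
cur.execute(INSERT_VERSIONS)
cur.close()
conn.close()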

Posted 1 day ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Experience Level: 5–8 years in testing SAS applications and data pipelines. Proficiency in SAS programming (Base SAS, Macro, SQL) and SQL query validation. Experience with data testing frameworks and tools for data validation and reconciliation. Knowledge of Snowflake and explicit pass-through SQL for data integration testing. Familiarity with Talend, IBM Data Replicator, and Qlik Replicate for ETL pipeline validation. Hands-on experience with test automation tools (e.g., Selenium, Python, or shell scripts).

Skills: data validation, SQL query validation, shell scripts, Macro, SQL, IBM Data Replicator, ETL pipeline validation, pass-through SQL, SAS programming, Base SAS, SAS, data testing frameworks, Talend, Snowflake, Selenium, Python, testing, data reconciliation, Qlik Replicate, data, test automation tools
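
A minimal sketch of the data reconciliation work described above: comparing row counts, key sets, and a measure checksum between two extracts. The helper, table contents, and column names are invented; in a SAS shop these extracts would typically come from Base SAS or explicit pass-through SQL.

# Minimal reconciliation sketch between a source and a target extract.
# Illustrative only; data and names invented.
import pandas as pd

def reconcile(source: pd.DataFrame, target: pd.DataFrame,
              key: str, measure: str) -> dict:
    """Return simple pass/fail checks comparing two extracts."""
    return {
        "row_count_match": len(source) == len(target),
        "key_set_match": set(source[key]) == set(target[key]),
        "measure_sum_match":
            abs(source[measure].sum() - target[measure].sum()) < 1e-6,
    }

# Hypothetical extracts, e.g., one pulled via SAS and one from Snowflake.
src = pd.DataFrame({"claim_id": [1, 2, 3], "amount": [10.0, 20.0, 30.5]})
tgt = pd.DataFrame({"claim_id": [1, 2, 3], "amount": [10.0, 20.0, 30.5]})
print(reconcile(src, tgt, key="claim_id", measure="amount"))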

Posted 1 day ago

Apply

0.0 - 3.0 years

0 Lacs

Chennai, Tamil Nadu

On-site


Job ID: R-225846 | Date posted: 06/17/2025
Job Title: Senior Manager - Product Quality Engineering Leader
Career Level: E

Introduction to role: Join our Commercial IT Data Analytics & AI (DAAI) team as a Product Quality Leader, where you will play a pivotal role in ensuring the quality and stability of our data platforms built on AWS services, Databricks, and SnapLogic. Based in Chennai GITC, you will drive the quality engineering strategy, lead a team of quality engineers, and contribute to the overall success of our data platform.

Accountabilities: As the Product Quality Team Leader for data platforms, your key accountabilities will include leadership and mentorship, quality engineering standards, collaboration, technical expertise, and innovation and process improvement. You will lead the design, development, and maintenance of scalable and secure data infrastructure and tools to support the data analytics and data science teams. You will also develop and implement data and data engineering quality assurance strategies and plans tailored to data product build and operations.

Essential Skills/Experience:
- Bachelor's degree or equivalent in Computer Engineering, Computer Science, or a related field.
- Proven experience in a product quality engineering or similar role, with at least 3 years of experience managing and leading a team.
- Experience working within a quality and compliance environment and applying policies, procedures, and guidelines.
- A broad understanding of cloud architecture (preferably AWS).
- Strong experience with Databricks, PySpark, and the AWS suite of applications (such as S3, Redshift, Lambda, Glue, and EMR).
- Proficiency in programming languages such as Python.
- Experience with Agile development techniques and methodologies.
- Solid understanding of data modelling, ETL processes, and data warehousing concepts.
- Excellent communication and leadership skills, with the ability to collaborate effectively with technical and non-technical stakeholders.
- Experience with big data technologies such as Hadoop or Spark.
- Certification in AWS or Databricks.
- Prior significant experience working in a pharmaceutical or healthcare industry IT environment.

When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace, and challenge perceptions. That's why we work, on average, a minimum of three days per week from the office. But that doesn't mean we're not flexible. We balance the expectation of being in the office while respecting individual flexibility. Join us in our unique and ambitious world.

At AstraZeneca, we are committed to disrupting an industry and changing lives. Our work has a direct impact on patients, transforming our ability to develop life-changing medicines. We empower the business to perform at its peak and lead a new way of working, combining cutting-edge science with leading digital technology platforms and data. We dare to lead, applying our problem-solving mindset to identify and tackle opportunities across the whole enterprise. Our spirit of experimentation is lived every day through events like hackathons. We enable AstraZeneca to perform at its peak by delivering world-class technology and data solutions. Are you ready to be part of a team that has the backing to innovate, disrupt an industry, and change lives? Apply now to join us on this exciting journey!

AstraZeneca embraces diversity and equality of opportunity.
We are committed to building an inclusive and diverse team representing all backgrounds, with as wide a range of perspectives as possible, and harnessing industry-leading skills. We believe that the more inclusive we are, the better our work will be. We welcome and consider applications to join our team from all qualified candidates, regardless of their characteristics. We comply with all applicable laws and regulations on non-discrimination in employment (and recruitment), as well as work authorization and employment eligibility verification requirements.

Contract type: Full time

Why choose AstraZeneca India? Help push the boundaries of science to deliver life-changing medicines to patients. After 45 years in India, we’re continuing to secure a future where everyone can access affordable, sustainable, innovative healthcare. The part you play in our business will be challenging, yet rewarding, requiring you to use your resilient, collaborative, and diplomatic skillsets to make connections. The majority of your work will be field based, and will require you to be highly organised, planning your monthly schedule, attending meetings and calls, as well as writing up reports.

Who do we look for? Calling all tech innovators, ownership takers, challenge seekers, and proactive collaborators. At AstraZeneca, breakthroughs born in the lab become transformative medicine for the world's most complex diseases. We empower people like you to push the boundaries of science, challenge convention, and unleash your entrepreneurial spirit. You'll embrace differences and take bold actions to drive the change needed to meet global healthcare and sustainability challenges. Here, diverse minds and bold disruptors can meaningfully impact the future of healthcare using cutting-edge technology. Whether you join us in Bengaluru or Chennai, you can make a tangible impact within a global biopharmaceutical company that invests in your future. Join a talented global team that's powering AstraZeneca to better serve patients every day.

Success Profile: Ready to make an impact in your career? If you're passionate, growth-orientated, and a true team player, we'll help you succeed. Here are some of the skills and capabilities we look for.
- Tech innovators: Make a greater impact through our digitally enabled enterprise. Use your skills in data and technology to transform and optimise our operations, helping us deliver meaningful work that changes lives.
- Ownership takers: If you're a self-aware self-starter who craves autonomy, AstraZeneca provides the perfect environment to take ownership and grow. Here, you'll feel empowered to lead and reach excellence at every level, with unrivalled support when you need it.
- Challenge seekers: Adapting and advancing our progress means constantly challenging the status quo. In this dynamic environment where everything we do has urgency and focus, you'll have the ability to show up, speak up, and confidently take smart risks.
- Proactive collaborators: Your unique perspectives make our ambitions and capabilities possible. Our culture of sharing ideas, learning, and improving together helps us consistently set the bar higher. As a proactive collaborator, you'll seek out ways to bring people together to achieve their best.
What we offer: We're driven by our shared values of serving people, society, and the planet. Our people make this possible, which is why we prioritise diversity, safety, empowerment, and collaboration. Discover what a career at AstraZeneca could mean for you.
- Lifelong learning: Our development opportunities are second to none. You'll have the chance to grow your abilities, skills, and knowledge constantly as you accelerate your career. From leadership projects and constructive coaching to overseas talent exchanges and global collaboration programmes, you'll never stand still.
- Autonomy and reward: Experience the power of shaping your career how you want to. We are a high-performing learning organisation with autonomy over how we learn. Make big decisions, learn from your mistakes, and continue growing, with performance-based rewards as part of the package.
- Health and wellbeing: An energised work environment is only possible when our people have a healthy work-life balance and are supported for their individual needs. That's why we have a dedicated team to ensure your physical, financial, and psychological wellbeing is a top priority.
- Inclusion and diversity: Diversity and inclusion are embedded in everything we do. We're at our best and most creative when drawing on our different views, experiences, and strengths. That's why we're committed to creating a workplace where everyone can thrive in a culture of respect, collaboration, and innovation.
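
To ground the Databricks/PySpark quality-engineering skills this posting lists, here is a minimal sketch of automated data-quality checks on a PySpark DataFrame. The schema and records are invented and this is not AstraZeneca code.

# Sketch: simple data-quality assertions on a PySpark DataFrame.
# Invented schema; real platforms would log results and gate pipelines.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()

# Invented sample records standing in for a curated clinical dataset.
df = spark.createDataFrame(
    [("p1", "2025-06-01", 12.5), ("p2", "2025-06-01", None)],
    ["patient_id", "visit_date", "score"],
)

# Check 1: no duplicate (patient_id, visit_date) keys.
dupe_keys = (df.groupBy("patient_id", "visit_date")
             .count().filter("count > 1").count())
assert dupe_keys == 0, f"{dupe_keys} duplicate keys found"

# Check 2: surface null measurements for triage rather than failing.
null_scores = df.filter(F.col("score").isNull()).count()
print(f"rows with null score: {null_scores}")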

Posted 1 day ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana

On-site


Job Information: Date Opened 06/17/2025 | Job Type: Full time | Industry: IT Services | City: Hyderabad | State/Province: Telangana | Country: India | Zip/Postal Code: 500081

About DATAECONOMY: We are a fast-growing data & analytics company headquartered in Dublin, with offices in Dublin, OH and Providence, RI, and an advanced technology center in Hyderabad, India. We are clearly differentiated in the data & analytics space via our suite of solutions, accelerators, frameworks, and thought leadership.

Job Description
Job Title: Technical Project Manager
Location: Hyderabad
Employment Type: Full-time
Experience: 10+ years
Domain: Banking and Insurance

We are seeking a Technical Project Manager to lead and coordinate the delivery of data-centric projects. This role bridges the gap between engineering teams and business stakeholders, ensuring the successful execution of technical initiatives, particularly in data infrastructure, pipelines, analytics, and platform integration.

Responsibilities:
- Lead end-to-end project management for data-driven initiatives, including planning, execution, delivery, and stakeholder communication.
- Work closely with data engineers, analysts, and software developers to ensure technical accuracy and timely delivery of projects.
- Translate business requirements into technical specifications and work plans.
- Manage project timelines, risks, resources, and dependencies using Agile, Scrum, or Kanban methodologies.
- Drive the development and maintenance of scalable ETL pipelines, data models, and data integration workflows.
- Oversee code reviews and ensure adherence to data engineering best practices.
- Provide hands-on support, when necessary, in Python-based development or debugging.
- Collaborate with cross-functional teams including Product, Data Science, DevOps, and QA.
- Track project metrics and prepare progress reports for stakeholders.

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Systems, Engineering, or a related field.
- 10+ years of experience in project management or technical leadership roles.
- Strong understanding of modern data architectures (e.g., data lakes, warehousing, streaming).
- Experience working with cloud platforms like AWS, GCP, or Azure.
- Familiarity with tools such as JIRA, Confluence, Git, and CI/CD pipelines.
- Strong communication and stakeholder management skills.

Benefits: Company standard benefits.

Posted 1 day ago

Apply

2.0 years

0 Lacs

Port Blair, Andaman and Nicobar Islands, India

On-site


Job Title: Database Developer
Location: Calicut, Kerala (On-site)
Experience: Minimum 2 years
Job Type: Full-time
Notice: Immediate/15 days
Candidates from Kerala are highly preferred.

Job Summary: We are hiring a skilled and detail-oriented Database Developer with at least 2 years of experience to join our team in Calicut. The ideal candidate will have hands-on expertise in SQL and PostgreSQL, with a strong understanding of database design, development, and performance optimization. Experience with Azure cloud services is a plus.

Key Responsibilities:
- Design, develop, and maintain database structures, stored procedures, functions, and triggers.
- Write optimized SQL queries for integration with applications and reporting tools.
- Ensure data integrity, consistency, and security across platforms.
- Monitor and tune database performance for high availability and scalability.
- Collaborate with developers and DevOps teams to support application development.
- Maintain and update technical documentation related to database structures and processes.
- Assist in data migration and backup strategies.
- Work with cloud-based databases and services (preferably on Azure).

Required Skills & Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Minimum 2 years of experience as a Database Developer or in a similar role.
- Strong expertise in SQL and PostgreSQL database development.
- Solid understanding of relational database design and normalization.
- Experience in writing complex queries and stored procedures, and in performance tuning.
- Familiarity with version control systems like Git.
- Strong analytical and troubleshooting skills.

Preferred Qualifications:
- Experience with Azure SQL Database, Data Factory, or related services.
- Knowledge of data warehousing and ETL processes.
- Exposure to NoSQL or other modern database technologies is a plus.
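
As a small illustration of the stored procedure, function, and trigger work in this role, the sketch below creates a PostgreSQL audit trigger from Python with psycopg2. The tables, trigger name, and connection string are hypothetical.

# Sketch: create an audit trigger in PostgreSQL via psycopg2.
# Hypothetical schema and DSN; illustrative only.
import psycopg2

DDL = """
CREATE TABLE IF NOT EXISTS orders (id INT PRIMARY KEY, status TEXT);
CREATE TABLE IF NOT EXISTS orders_audit (
    order_id INT,
    changed_at TIMESTAMPTZ DEFAULT now()
);

CREATE OR REPLACE FUNCTION log_order_change() RETURNS trigger AS $$
BEGIN
    INSERT INTO orders_audit (order_id) VALUES (NEW.id);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

DROP TRIGGER IF EXISTS trg_order_audit ON orders;
CREATE TRIGGER trg_order_audit
AFTER UPDATE ON orders
FOR EACH ROW EXECUTE FUNCTION log_order_change();
"""

with psycopg2.connect("postgresql://user:pass@localhost/appdb") as conn:
    with conn.cursor() as cur:
        cur.execute(DDL)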

Posted 1 day ago

Apply

3.0 - 6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Title: Epic Reporting and BI Analyst
Location: Hyderabad
Experience: 3 to 6 years
Job Type: Full-Time
Department: IT/Healthcare Technology

Job Summary: We are looking for a detail-oriented and experienced Epic Reporting and BI Analyst to provide essential support within our healthcare IT environment. The ideal candidate will have hands-on experience with Epic systems, particularly Epic Caboodle and Epic Cogito, and be able to support clinical teams by resolving issues, optimizing workflows, and ensuring the smooth operation of Epic applications.

Key Responsibilities:
- Develop and manage Epic reports, and build reports for Epic Caboodle, Epic Cogito, and other Epic modules.
- Serve as the primary point of contact for clinicians and end-users facing technical challenges with Epic systems.
- Identify, document, and escalate complex issues to senior support teams as needed.
- Collaborate with IT, data analysts, and clinical stakeholders to maintain the integrity, availability, and performance of Epic applications.
- Assist in the deployment of system upgrades, enhancements, and configuration changes to enhance the user experience.
- Support data extraction, reporting, and analytics using Epic Cogito and Caboodle, ensuring timely and accurate delivery of healthcare data.
- Stay up to date with Epic best practices and offer training and guidance to end-users when necessary.
- Monitor system performance, proactively identifying and resolving potential issues that could disrupt clinical workflows.
- Adhere to ITIL service management practices, including incident, problem, and change management.
- Document troubleshooting procedures and create knowledge base articles to improve service desk efficiency.

Required Qualifications:
- Bachelor's degree in Healthcare Informatics, Information Systems, Computer Science, or a related field, or equivalent work experience.
- Minimum of 5 years of experience working with Epic systems in a support or analyst capacity.
- Strong proficiency in Epic Caboodle and Epic Cogito, with experience in data modelling, ETL processes, and reporting.
- Familiarity with Epic's clinical modules and their impact on patient care workflows.
- Experience with SQL queries, reporting tools (Analytics & BI), and business intelligence platforms is a plus.
- Proven ability to diagnose, troubleshoot, and resolve technical issues related to Epic systems.
- Excellent communication skills and the ability to work effectively with clinicians, IT teams, and business stakeholders.
- Epic certifications in relevant modules (Caboodle, Cogito, or others) are highly preferred.
- Experience in data modelling and data warehousing, and experience with SQL Server Integration Services and SQL Server Reporting Services.

Preferred Qualifications:
- Experience working in a healthcare IT setting or supporting clinical service desks.
- Knowledge of IT ticketing systems such as ServiceNow or Jira.
- Understanding of healthcare compliance regulations (HIPAA, HITECH, etc.).
- Ability to thrive in a fast-paced environment and demonstrate strong problem-solving abilities.

Posted 1 day ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Responsibilities:
- Technical Configuration and Build: Lead the configuration and build of Workday SCM modules (Procurement, Inventory Management, Supplier Management, and Logistics) to meet client-specific business processes. This includes configuring security, business processes, reporting, and data loads.
- System Integration: Work closely with the integration team to design and implement integrations between Workday SCM and other systems (internal and third-party), using Workday's integration tools such as EIB, Workday Studio, and Workday Web Services (WWS).
- Customization and Development: Develop and customize solutions within Workday SCM, including the creation of calculated fields, reports, dashboards, and custom business processes.
- Troubleshooting and Technical Support: Provide ongoing technical support for Workday SCM modules, identifying and resolving complex system issues, bugs, and performance bottlenecks.
- Data Migration & Load: Lead the extraction, transformation, and loading (ETL) of data into Workday SCM, ensuring data integrity and consistency across all modules and integrations.
- Testing & Quality Assurance: Oversee the testing process, including unit testing, integration testing, and regression testing for SCM-related configurations and integrations. Ensure that the system meets performance and functionality expectations.
- Technical Documentation: Maintain detailed technical documentation on system configurations, customizations, integrations, and data migration processes for future reference and auditing purposes.
- Collaboration with Technical Teams: Partner with Workday HCM, IT, and other cross-functional teams to ensure seamless integration of Workday SCM with other enterprise systems (e.g., Finance, HR) and support multi-system projects.
- Process Optimization: Identify opportunities for process improvement and automation within the Workday SCM system, working closely with the business and functional teams to deliver optimized solutions.

Qualifications:
- Workday SCM Technical Expertise: Strong technical experience with the configuration, customization, and integration of Workday Supply Chain Management modules. Expertise in the Procurement, Inventory, and Supplier Management modules is highly desirable.
- Workday Integration Tools: Proficiency in Workday integration tools such as EIB (Enterprise Interface Builder), Workday Studio, and Web Services (WWS). Ability to design and implement complex integrations with third-party systems.
- Data Management: Strong experience in data migration, data transformation, and data loading techniques in Workday using EIB, iLoad, or similar tools. Familiarity with the Workday Data Model is a plus.
- Technical Problem Solving: Ability to troubleshoot, identify root causes, and resolve complex technical issues within Workday SCM. Strong analytical and problem-solving skills are essential.
- Technical Reporting: Experience in creating and configuring custom reports and dashboards in Workday, including the use of Workday's reporting tools such as Workday Report Writer and Workday Prism Analytics.
- Programming/Development Skills: Experience with Workday's advanced calculated fields, custom business process configuration, and scripting (e.g., Workday BIRT reports, WSDL, REST APIs) is highly preferred.
- Project Management Skills: Experience with agile project management methodologies and the ability to handle multiple concurrent tasks, deadlines, and priorities.
- Communication Skills: Strong written and verbal communication skills to convey technical information to both technical and non-technical stakeholders.

Preferred Skills:
- Workday Certifications: Workday certifications in SCM or related modules (Procurement, Inventory, etc.) are highly desirable.
- Cloud/ERP Experience: Hands-on experience with other cloud-based ERP or supply chain management systems, and an understanding of cloud integrations.
- Experience with Advanced Workday Functionality: Familiarity with advanced Workday functionality such as Workday Cloud Platform, Workday Studio, and Workday Reporting.
- Licensure and Certification: Workday Integration certification (added advantage).

Posted 1 day ago

Apply

10.0 years

0 Lacs

Trivandrum, Kerala, India

On-site


Our Company: Techvantage.ai is a next-generation technology and product engineering company at the forefront of innovation in Generative AI, Agentic AI, and autonomous intelligent systems. We design intelligent platforms that solve complex business problems and deliver measurable impact through cutting-edge AI.

Overview: We are seeking an experienced Solution Architect with a strong foundation in software architecture and a working knowledge of AI-based products or platforms. In this role, you will be responsible for designing robust, scalable, and secure architectures that support AI-driven applications and enterprise systems. You will work closely with cross-functional teams, including data scientists, product managers, and engineering leads, to bridge the gap between business needs, technical feasibility, and AI capabilities.

What we are looking for from an ideal candidate:
- Architect end-to-end solutions for enterprise and product-driven platforms, including components such as data pipelines, APIs, AI model integration, cloud infrastructure, and user interfaces.
- Guide teams in selecting the right technologies, tools, and design patterns to build scalable systems.
- Collaborate with AI/ML teams to understand model requirements and ensure smooth deployment and integration into production.
- Define system architecture diagrams, data flow, service orchestration, and infrastructure provisioning using modern tools.
- Work closely with stakeholders to translate business needs into technical solutions, with a focus on scalability, performance, and security.
- Provide leadership on best practices for software development, DevOps, and cloud-native architecture.
- Conduct architecture reviews and ensure alignment with security, compliance, and performance standards.

What skills do you need?
Requirements:
- 10+ years of experience in software architecture or solution design roles.
- Proven experience designing systems using microservices, RESTful APIs, event-driven architecture, and cloud-native technologies.
- Hands-on experience with at least one major cloud provider: AWS, GCP, or Azure.
- Familiarity with AI/ML platforms or components, such as integrating AI models, MLOps pipelines, or inference services.
- Understanding of data architectures, including data lakes, streaming, and ETL pipelines.
- Strong experience with containerization (Docker, Kubernetes) and DevOps principles.
- Ability to lead technical discussions, make design trade-offs, and communicate with both technical and non-technical audiences.

Preferred Qualifications:
- Exposure to AI model lifecycle management, prompt engineering, or real-time inference workflows.
- Experience with infrastructure-as-code (Terraform, Pulumi).
- Knowledge of GraphQL, gRPC, or serverless architectures.
- Previous experience working in AI-driven product companies or digital transformation initiatives.

What We Offer:
- High-impact role in designing intelligent systems that shape the future of AI adoption.
- Work with forward-thinking engineers, researchers, and innovators.
- Strong focus on career growth, learning, and technical leadership.
- Compensation is not a constraint for the right candidate.
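
One concrete rendering of the "AI model integration behind scalable APIs" pattern described above is a small inference microservice. The FastAPI sketch below uses a stub in place of a real model; the route, schema, and filename are assumptions for illustration.

# Sketch: a minimal inference microservice with FastAPI. The "model" is
# a stub; a real service would load a trained artifact and add auth/metrics.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Features(BaseModel):
    values: list[float]

def predict_stub(values: list[float]) -> float:
    # Placeholder standing in for a trained model's inference call.
    return sum(values) / max(len(values), 1)

@app.post("/predict")
def predict(features: Features) -> dict:
    return {"prediction": predict_stub(features.values)}

# Run locally with: uvicorn service:app --reload   (if saved as service.py)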

Posted 1 day ago

Apply

3.0 - 8.0 years

0 Lacs

Kochi, Kerala, India

On-site


Role: Data & Visualization Engineer
Location: Kochi, Onsite
Experience: 3-8 years
Availability: Immediate joiners

Job Description: We are seeking a Data & Visualization Engineer with 3 to 8 years of experience to design, develop, and maintain scalable data pipelines while creating intuitive and interactive visualizations. This role requires a strong foundation in data engineering, ETL processes, and data visualization tools to support seamless data-driven decision-making.

Key Responsibilities

Data Engineering:
- Design, develop, and optimize scalable data pipelines for structured and unstructured data.
- Implement and manage ETL/ELT processes to ensure data quality and consistency.
- Work with databases, data warehouses, and data lakes (MySQL, NoSQL, Redshift, Snowflake, BigQuery, etc.).
- Ensure compliance with data security, governance, and best practices.
- Optimize data workflows on Azure cloud platforms.

Data Visualization & Analytics:
- Develop interactive dashboards and reports using BI tools (Tableau, Power BI, Looker, D3.js).
- Transform complex datasets into clear, visually compelling insights.
- Collaborate with stakeholders to understand business needs and create impactful visualizations.
- Integrate visual analytics into web applications using JavaScript frameworks.
- Ensure accuracy, usability, and optimal performance of visualizations.
- Stay up to date with best practices in data storytelling and UX.

Required Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- Proficiency in Python, SQL, Scala, or Java.
- Strong knowledge of MySQL and other database technologies.
- Hands-on experience with cloud data services.
- Expertise in BI tools (Tableau, Power BI, Looker) and front-end frameworks (React, D3.js).
- Strong analytical, problem-solving, and communication skills.
- Experience working with large datasets and optimizing query performance.
- Familiarity with version control (Git) and CI/CD pipelines.

Posted 1 day ago

Apply

0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site


Key Responsibilities:
- Develop and execute test cases for ETL processes and data migration across large datasets.
- Perform data validation and verify source-to-target data mappings using advanced SQL.
- Collaborate with developers, Business Analysts (BAs), and Quality Assurance (QA) teams to ensure data quality and integrity.
- Report and track defects, ensuring timely resolution to maintain data accuracy and quality.
- Automate and optimize data workflows and ETL pipelines.
- Monitor the performance of data pipelines and troubleshoot any data issues as needed.
- Maintain detailed documentation for data processes, workflows, and system architecture.
- Ensure data quality, integrity, and security across systems.

Skills & Qualifications:
- Experience in ETL/data warehouse testing or a similar role.
- Strong proficiency in SQL with a solid understanding of database concepts.
- Hands-on experience with ETL tools like Informatica, Talend, SSIS, or similar.
- Experience with data warehousing platforms (e.g., Snowflake) and performance tuning.
- Experience in defect tracking and issue management using tools like JIRA.
- Familiarity with version control systems (e.g., Git) and CI/CD practices.
- Good communication, collaboration, and documentation skills.
- Solid understanding of data warehousing principles and ETL process design.
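
A common implementation of the source-to-target validation described above is a set-difference (EXCEPT/MINUS) comparison. The sketch below demonstrates the pattern with SQLite so it runs anywhere; table names and rows are invented, and on a warehouse such as Snowflake the same query shape applies.

# Sketch: source-to-target validation with an EXCEPT diff (SQLite demo).
# Invented tables; the same SQL pattern applies to warehouse targets.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE src (id INT, amount REAL);
CREATE TABLE tgt (id INT, amount REAL);
INSERT INTO src VALUES (1, 10.0), (2, 20.0), (3, 30.0);
INSERT INTO tgt VALUES (1, 10.0), (2, 99.0), (3, 30.0);
""")

# Rows in source missing from target, and unexpected rows in target.
diff = conn.execute("""
SELECT 'missing_in_target' AS issue, *
FROM (SELECT * FROM src EXCEPT SELECT * FROM tgt)
UNION ALL
SELECT 'unexpected_in_target', *
FROM (SELECT * FROM tgt EXCEPT SELECT * FROM src)
""").fetchall()

for row in diff:
    print(row)   # each row is a defect candidate to track (e.g., in JIRA)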

Posted 1 day ago

Apply

3.0 years

0 Lacs

Delhi, India

On-site


Location: Work from Office (Okhla Phase III, Okhla Industrial Estate, New Delhi - 110020)

About Us: 1buy.ai is an innovative procurement and pricing intelligence platform specialized in the electronics industry, leveraging cutting-edge technology to streamline and optimize component sourcing. We are seeking a Data Engineer to join our dynamic tech team. You'll play a critical role in managing, transforming, and delivering data-driven insights to empower our platform and decision-making.

Key Responsibilities:
- Develop and optimize complex SQL queries for data extraction and analytics.
- Build and maintain robust ETL/ELT pipelines for data ingestion and transformation.
- Collaborate closely with stakeholders to support data visualization initiatives.
- Ensure data accuracy, integrity, and availability across various databases (MongoDB, PostgreSQL, ClickHouse).
- Monitor, maintain, and enhance data solutions leveraging Grafana dashboards.
- Continuously improve data processes and efficiency, aligning with business and industry needs.

Required Skills:
- 3+ years of experience in writing and optimizing complex SQL queries.
- Strong expertise in ETL/ELT processes and data pipeline frameworks.
- Familiarity with data visualization tools and frameworks (Grafana).
- Databases: MongoDB (NoSQL), PostgreSQL.
- Knowledge of the electronics industry and component types.

Optional Skills:
- AWS cloud services for deploying data solutions.
- Python scripting and automation.
- Data warehousing databases (ClickHouse).
- Workflow orchestration tools (Apache Airflow).

What We Offer:
- A collaborative and innovative environment.
- Opportunities for professional growth within a high-growth startup.
- Exposure to industry-leading data technologies and best practices.
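
Since the posting lists Apache Airflow as an optional orchestration skill for its ETL/ELT pipelines, here is a minimal DAG sketch. The DAG id, schedule, and task bodies are invented placeholders rather than 1buy.ai specifics.

# Minimal Airflow DAG sketch: extract -> transform -> load, daily.
# Hypothetical task bodies; a real pipeline would hit MongoDB/PostgreSQL.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull component prices")      # placeholder extract step

def transform():
    print("normalize and deduplicate")  # placeholder transform step

def load():
    print("upsert into PostgreSQL")     # placeholder load step

with DAG(
    dag_id="pricing_etl",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",        # "schedule_interval" on Airflow < 2.4
    catchup=False,
) as dag:
    extract_t = PythonOperator(task_id="extract", python_callable=extract)
    transform_t = PythonOperator(task_id="transform", python_callable=transform)
    load_t = PythonOperator(task_id="load", python_callable=load)
    extract_t >> transform_t >> load_t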

Posted 1 day ago

Apply

7.5 years

0 Lacs

Pune, Maharashtra, India

On-site


Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: PySpark
Good-to-have skills: NA
Minimum 7.5 years of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide effective solutions, ensuring that the data infrastructure is robust and scalable enough to meet the demands of the organization.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge in data engineering.
- Continuously evaluate and improve data processes to enhance efficiency and effectiveness.

Professional & Technical Skills:
- Must-Have Skills: Proficiency in PySpark.
- Strong understanding of data pipeline architecture and design.
- Experience with ETL processes and data integration techniques.
- Familiarity with data quality frameworks and best practices.
- Knowledge of cloud platforms and services related to data storage and processing.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in PySpark.
- This position is based in Pune.
- 15 years of full-time education is required.
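
A compact sketch of the PySpark extract-transform-load flow this role centres on is shown below. The S3 paths, columns, and aggregation are invented placeholders.

# Sketch of a PySpark ETL step: read, clean, aggregate, write.
# File paths and schema are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: read raw CSVs and apply a basic data-quality gate.
orders = (spark.read.option("header", True)
          .csv("s3://example-bucket/raw/orders/")
          .withColumn("amount", F.col("amount").cast("double"))
          .dropna(subset=["order_id", "amount"]))

# Transform: daily revenue and distinct order counts.
daily = (orders.groupBy("order_date")
         .agg(F.sum("amount").alias("revenue"),
              F.countDistinct("order_id").alias("order_count")))

# Load: write curated parquet, partitioned for downstream consumers.
(daily.write.mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3://example-bucket/curated/daily_revenue/"))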

Posted 1 day ago

Apply

10.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Key Responsibilities:
- Develop, optimize, and maintain MSSQL databases to support business operations.
- Design, build, and maintain cloud database solutions to ensure efficient data storage and retrieval.
- Perform data migration between on-premise and cloud databases, ensuring data integrity and security.
- Develop and optimize complex SQL queries, stored procedures, functions, and triggers for data analysis and reporting.
- Implement reporting automation solutions using BI tools, SQL, and scripting techniques to enhance data-driven decision-making.
- Work on data warehousing solutions to store and analyze structured and unstructured business data efficiently.
- Ensure data quality, governance, and security by implementing best practices.
- Collaborate with business stakeholders to understand data requirements and provide actionable insights.
- Work independently under strict deadlines and manage multiple tasks.

Skills & Qualifications:
- 10+ years of experience in SQL development, database management, and data analysis.
- Expertise in MSSQL Server development, including query optimization and performance tuning.
- Hands-on experience with cloud databases (AWS RDS, Azure SQL, Google Cloud SQL, etc.).
- Strong knowledge of data migration strategies and tools.
- Experience in designing and implementing automated reporting solutions using SQL and BI tools (Power BI, Tableau, etc.).
- Solid understanding of ETL processes and data warehousing concepts.
- Ability to analyze large datasets and provide meaningful insights.
- Strong problem-solving skills, with attention to detail.
- Excellent communication skills and the ability to work independently under tight deadlines.

Functional and Behavioral Characteristics:
- Positive and professional attitude.
- Self-motivated with a proactive approach to tasks.
- Team player with a willingness to collaborate and support colleagues.
- Strong commitment to accuracy and attention to detail.

What We Have To Offer:
- Competitive salary and benefits package.
- Work in a fast-growing financial services company with a dynamic work environment.
- Opportunity to work on cutting-edge database and reporting automation solutions.
- Career growth opportunities and professional development support.

Posted 1 day ago

Apply

0.0 - 2.0 years

0 Lacs

Pune, Maharashtra, India

On-site


The Applications Development Programmer Analyst is an intermediate-level position responsible for participation in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to contribute to applications systems analysis and programming activities.

Responsibilities:
- Utilize knowledge of applications development procedures and concepts, and basic knowledge of other technical areas, to identify and define necessary system enhancements.
- Identify and analyze issues, make recommendations, and implement solutions.
- Utilize knowledge of business processes, system processes, and industry standards to solve complex issues.
- Analyze information and make evaluative judgements to recommend solutions and improvements.
- Conduct testing and debugging, utilize script tools, and write basic code for design specifications.
- Assess applicability of similar experiences and evaluate options under circumstances not covered by procedures.
- Develop working knowledge of Citi’s information systems, procedures, standards, client-server application development, network operations, database administration, systems administration, data center operations, and PC-based applications.
- Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules, and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct, and business practices, and escalating, managing, and reporting control issues with transparency.

Additional Job Description: We are looking for a Big Data Engineer to work on collecting, storing, processing, and analyzing huge sets of data. The primary focus will be on choosing optimal solutions for these purposes, then maintaining, implementing, and monitoring them. You will also be responsible for integrating them with the architecture used across the company.

Responsibilities:
- Selecting and integrating any Big Data tools and frameworks required to provide requested capabilities.
- Implementing data wrangling, scraping, and cleaning using Java or Python.
- Strong experience with data structures.
- Extensive work on API integration.
- Monitoring performance and advising on any necessary infrastructure changes.
- Defining data retention policies.

Skills and Qualifications:
- Proficient understanding of distributed computing principles.
- Proficient in Java or Python, with some machine learning experience.
- Proficiency with Hadoop v2, MapReduce, HDFS, PySpark, and Spark.
- Experience with building stream-processing systems, using solutions such as Storm or Spark Streaming.
- Good knowledge of Big Data querying tools, such as Pig, Hive, and Impala.
- Experience with Spark.
- Experience with integration of data from multiple data sources.
- Experience with NoSQL databases, such as HBase, Cassandra, and MongoDB.
- Knowledge of various ETL techniques and frameworks, such as Flume.
- Experience with various messaging systems, such as Kafka or RabbitMQ.
- Experience with Big Data ML toolkits, such as Mahout, SparkML, or H2O.
- Good understanding of Lambda Architecture, along with its advantages and drawbacks.
- Experience with Cloudera/MapR/Hortonworks.

Qualifications:
- 0-2 years of relevant experience.
- Experience in programming/debugging used in business applications.
- Working knowledge of industry practice and standards.
- Comprehensive knowledge of the specific business area for application development.
- Working knowledge of program languages.
- Consistently demonstrates clear and concise written and verbal communication.

Education: Bachelor’s degree/University degree or equivalent experience.

This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.

Job Family Group: Technology
Job Family: Applications Development
Time Type: Full time
Most Relevant Skills: Please see the requirements listed above.
Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter.

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
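
The stream-processing experience requested above (Kafka with Spark Streaming) can be illustrated with a Structured Streaming sketch; the broker address and topic name are placeholders, and the job needs the spark-sql-kafka package available.

# Sketch: consume a Kafka topic with Spark Structured Streaming and
# count events per key. Broker/topic names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

events = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")  # placeholder
          .option("subscribe", "trades")                     # placeholder
          .load())

# Running count of events per message key.
counts = (events.selectExpr("CAST(key AS STRING) AS key")
          .groupBy("key")
          .count())

query = (counts.writeStream
         .outputMode("complete")
         .format("console")
         .start())
query.awaitTermination()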

Posted 1 day ago

Apply

8.0 - 12.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Major Duties:
- Monitor the production environment. Identify and implement opportunities to improve production stability.
- Ensure incidents are prioritized and worked in proper order, and review backlog items.
- Investigate, diagnose, and solve application issues. Resolve problems in an analytical and logical manner to troubleshoot root causes and resolve production incidents.
- Follow up on cross-team incidents to drive them to resolution.
- Develop and deliver product changes and enhancements in a collaborative, agile team environment.
- Build solutions to fix production issues and participate in ongoing software maintenance activities.
- Understand, define, estimate, develop, test, deploy, and support change requests.
- Monitor and attend to all alerts, and escalate production issues as needed to relevant teams and management.
- Operate independently; have in-depth knowledge of the business unit/function.
- Communicate with stakeholders and the business on escalated items.
- As subject area expert, provide comprehensive, in-depth consulting to the team and partners at a high technical level.
- Develop periodic goals, organize the work, set short-term priorities, monitor all activities, and ensure timely and accurate completion of the work.
- Periodically engage with business partners to review progress and priorities, and develop and maintain rapport through professional interactions with clear, concise communications.
- Ensure cross-functional duties, including bug fixes and scheduling changes, are scheduled and completed by the relevant teams.
- Work with the team to resolve problems and improve production reliability, stability, and availability.
- Follow the ITIL processes of Incident, Problem & Change Management.
- Ability to solve complex technical issues.

Must Have:
- 8-12 years of professional experience in software maintenance/support/development with a strong programming/technical background. 80% technical and 20% managerial skills.
- Proficient in working with ITIL/ITSM (ServiceNow) and data analysis.
- Expert in Unix commands and scripting.
- Working knowledge of SQL (preferably Oracle, MSSQL).
- Experience in supporting ETL/EDM/MDM platforms using tools like SSIS, Informatica, Markit EDM, or IBM InfoSphere DataStage. ETL experience is mandatory if EDM experience is not present.
- Understanding of batch scheduling system usage and implementation concepts. Trigger solutions using external schedulers (Control-M), services (Process Launchers & Event Watchers), and UI.
- Well versed in the Change Management process and tools.
- Experience in incident management, understanding of ticket workflows, and use of escalation.
- Good understanding of MQ/Kafka (both consumer and producer solutions).
- Good understanding of REST/SOAP services.

Good to Have:
- Proficient in Java and able to go into code to investigate and fix issues.
- Understanding of DevOps, CI/CD, and Agile techniques preferred.
- Basic understanding of front-end technologies, such as React JS, JavaScript, HTML5, and CSS3.
- Banking and financial services knowledge is preferred. More importantly, the candidate should have a strong technical background.

Posted 1 day ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site


Position Overview: We are seeking a highly skilled and motivated Data Engineer to join our dynamic team. The ideal candidate will have extensive experience with AWS Glue, Apache Airflow, Kafka, SQL, Python, and DataOps tools and technologies. Knowledge of SAP HANA and Snowflake is a plus. This role is critical for designing, developing, and maintaining our clients' data pipeline architecture, ensuring the efficient and reliable flow of data across the organization.

Key Responsibilities

Design, Develop, and Maintain Data Pipelines:
- Develop robust and scalable data pipelines using AWS Glue, Apache Airflow, and other relevant technologies.
- Integrate various data sources, including SAP HANA, Kafka, and SQL databases, to ensure seamless data flow and processing.
- Optimize data pipelines for performance and reliability.

Data Management and Transformation:
- Design and implement data transformation processes to clean, enrich, and structure data for analytical purposes.
- Utilize SQL and Python for data extraction, transformation, and loading (ETL) tasks.
- Ensure data quality and integrity through rigorous testing and validation processes.

Collaboration and Communication:
- Work closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions that meet their needs.
- Collaborate with cross-functional teams to implement DataOps practices and improve data lifecycle management.

Monitoring and Optimization:
- Monitor data pipeline performance and implement improvements to enhance efficiency and reduce latency.
- Troubleshoot and resolve data-related issues, ensuring minimal disruption to data workflows.
- Implement and manage monitoring and alerting systems to proactively identify and address potential issues.

Documentation and Best Practices:
- Maintain comprehensive documentation of data pipelines, transformations, and processes.
- Adhere to best practices in data engineering, including code versioning, testing, and deployment procedures.
- Stay up to date with the latest industry trends and technologies in data engineering and DataOps.

Required Skills and Qualifications

Technical Expertise:
- Extensive experience with AWS Glue for data integration and transformation.
- Proficiency in Apache Airflow for workflow orchestration.
- Strong knowledge of Kafka for real-time data streaming and processing.
- Advanced SQL skills for querying and managing relational databases.
- Proficiency in Python for scripting and automation tasks.
- Experience with SAP HANA for data storage and management.
- Familiarity with DataOps tools and methodologies for continuous integration and delivery in data engineering.

Preferred Skills:
- Knowledge of Snowflake for cloud-based data warehousing solutions.
- Experience with other AWS data services such as Redshift, S3, and Athena.
- Familiarity with big data technologies such as Hadoop, Spark, and Hive.

Soft Skills:
- Strong analytical and problem-solving skills.
- Excellent communication and collaboration abilities.
- Detail-oriented with a commitment to data quality and accuracy.
- Ability to work independently and manage multiple projects simultaneously.
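
For the AWS Glue portion of this role, a bare-bones Glue job script might look like the sketch below. The Glue Data Catalog database, table, column mappings, and S3 bucket are invented for illustration.

# Sketch of an AWS Glue ETL script: catalog read, mapping, S3 write.
# Database/table/bucket names are hypothetical.
import sys
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue = GlueContext(SparkContext.getOrCreate())

# Read from the Glue Data Catalog (hypothetical database/table).
src = glue.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders")

# Rename/cast columns (hypothetical mappings).
mapped = ApplyMapping.apply(
    frame=src,
    mappings=[
        ("order_id", "string", "order_id", "string"),
        ("amt", "string", "amount", "double"),
    ])

# Write curated parquet back to S3 (hypothetical bucket).
glue.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/orders/"},
    format="parquet")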

Posted 1 day ago


4.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Summary

We are looking for a skilled and experienced Spring Boot Developer with 4+ years of hands-on experience to join our development team. The ideal candidate should have a strong background in Java development, with a focus on building robust and scalable applications using the Spring Boot framework. As a Lead Spring Boot Developer, you will lead a team, design, develop, and implement software solutions, and collaborate with cross-functional teams to drive innovation and deliver high-quality products.

Key Responsibilities

Design, develop, and implement high-quality, scalable, and efficient Java applications using the Spring Boot framework. Collaborate with developers, product owners, project owners, and other stakeholders to understand requirements and translate them into technical solutions. Understanding of security best practices and compliance requirements. Exposure to healthcare standards like HL7, FHIR, and EDI X12 is highly desirable. Experience in HIPAA-compliant application development. Industry exposure: Healthcare, Insurance, Fintech. Designing High-Level (HLD) and Low-Level (LLD) architectures and defining database schemas is a must. Lead code reviews, unit testing, and system integration testing. Exposure to data pipelines and ETL workloads is good to have.

Requirements

Bachelor's degree in Computer Science, Engineering, or a related field. 5+ years of hands-on experience in Java development. 2+ years of experience specifically with the Spring Boot framework. Proficiency in Java, Spring Boot, and the Spring Framework (Core, MVC, Security, etc.). Experience with RESTful APIs and microservices architecture. Strong understanding of object-oriented programming principles. Familiarity with front-end technologies such as JavaScript, HTML, and CSS is a plus. Experience with relational databases and SQL. Knowledge of Agile development methodologies. Excellent problem-solving and analytical skills. Ability to lead and mentor a team, fostering collaboration and growth. Ability to work both independently and collaboratively in a team environment. Strong communication and interpersonal skills. (ref:hirist.tech)

Posted 1 day ago


10.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Job Description

Design, build, and maintain scalable and efficient data pipelines to move data between cloud-native databases (e.g., Snowflake) and SaaS providers using AWS Glue and Python. Implement and manage ETL/ELT processes to ensure seamless data integration and transformation. Ensure information security and compliance with data governance standards. Maintain and enhance data environments, including data lakes, warehouses, and distributed processing systems. Utilize version control systems (e.g., GitHub) to manage code and collaborate effectively with the team.

Primary Skills

Enhancements, new development, defect resolution, and production support of ETL development using AWS native services. Integration of data sets using AWS services such as Glue and Lambda functions. Utilization of AWS SNS to send emails and alerts. Authoring ETL processes using Python and PySpark. ETL process monitoring using CloudWatch events. Connecting with different data sources like S3 and validating data using Athena. Experience in CI/CD using GitHub Actions. Proficiency in Agile methodology. Extensive working experience with advanced SQL and a deep understanding of complex SQL.

Secondary Skills

Experience working with Snowflake and understanding of Snowflake architecture, including concepts like internal and external tables, stages, and masking policies.

Competencies / Experience

Deep technical skills in AWS Glue (Crawler, Data Catalog): 10+ years. Hands-on experience with Python and PySpark: 5+ years. PL/SQL experience: 5+ years. CloudFormation and Terraform: 5+ years. CI/CD with GitHub Actions: 5+ years. Experience with BI systems (Power BI, Tableau): 5+ years. Good understanding of AWS services like S3, SNS, Secrets Manager, Athena, and Lambda: 5+ years. Additionally, familiarity with any of the following is highly desirable: Jira, Git. (ref:hirist.tech)
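
As a rough sketch of the Python/PySpark ETL authoring mentioned above, the snippet below reads raw JSON from S3, filters and casts columns, and writes partitioned Parquet; it assumes a working Spark installation, and the bucket paths and column names are invented for illustration.

```python
# A minimal PySpark ETL sketch; S3 paths and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl_sketch").getOrCreate()

# Extract: read raw JSON landed in S3
raw = spark.read.json("s3://example-raw-bucket/orders/")

# Transform: keep valid rows, normalize types, derive a partition column
clean = (
    raw.filter(F.col("order_id").isNotNull())
       .withColumn("amount", F.col("amount").cast("double"))
       .withColumn("order_date", F.to_date("order_ts"))
)

# Load: write partitioned Parquet for downstream Athena/Snowflake queries
(clean.write.mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3://example-curated-bucket/orders/"))
```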

Posted 1 day ago


3.0 - 4.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Data Engineer (3-4 Years Experience) - Real-time & Batch Processing | AWS, Kafka, ClickHouse, Python

Location: Noida. Experience: 3-4 years. Job Type: Full-Time.

About The Role

We are looking for a skilled Data Engineer with 3-4 years of experience to design, build, and maintain real-time and batch data pipelines for handling large-scale datasets. You will work with AWS, Kafka, Cloudflare Workers, Python, ClickHouse, Redis, and other modern technologies to enable seamless data ingestion, transformation, merging, and storage. Bonus: if you have web data analytics or programmatic advertising knowledge, it will be a big plus!

Responsibilities

Real-time data processing & transformation: build low-latency, high-throughput real-time pipelines using Kafka, Redis, Firehose, Lambda, and Cloudflare Workers. Perform real-time data transformations like filtering, aggregation, enrichment, and deduplication using Kafka Streams, Redis Streams, or AWS Lambda. Merge data from multiple real-time sources into a single structured dataset for analytics.

Batch data processing & transformation: develop batch ETL/ELT pipelines for processing large-scale structured and unstructured data. Perform data transformations, joins, and merging across different sources in ClickHouse, AWS Glue, or Python. Optimize data ingestion, transformation, and storage workflows for efficiency and reliability.

Data pipeline development & optimization: design, develop, and maintain scalable, fault-tolerant data pipelines for real-time and batch processing. Optimize data workflows to reduce latency, cost, and compute load.

Data integration & merging: combine real-time and batch data streams for unified analytics. Integrate data from various sources (APIs, databases, event streams, cloud storage).

Cloud infrastructure & storage: work with AWS services (S3, EC2, ECS, Lambda, Firehose, RDS, Redshift, ClickHouse) for scalable data processing. Implement data lake and warehouse solutions using S3, Redshift, and ClickHouse.

Data visualization & reporting: work with Power BI, Tableau, or Grafana to create real-time dashboards and analytical reports.

Web data analytics & programmatic advertising (big plus!): experience working with web tracking data, user behavior analytics, and digital marketing datasets. Knowledge of programmatic advertising, ad impressions, clickstream data, and real-time bidding (RTB) analytics.

Monitoring & performance optimization: implement monitoring and logging of data pipelines using AWS CloudWatch, Prometheus, and Grafana. Tune Kafka, ClickHouse, and Redis for high performance.

Collaboration & best practices: work closely with data analysts, software engineers, and DevOps teams to enhance data accessibility. Follow best practices for data governance, security, and compliance.

Must-Have Skills

Programming: strong experience in Python and JavaScript. Real-time data processing & merging: expertise in Kafka, Redis, Cloudflare Workers, Firehose, Lambda. Batch processing & transformation: experience with ClickHouse, Python, AWS Glue, and SQL-based transformations. Data storage & integration: experience with MySQL, ClickHouse, Redshift, and S3-based storage. Cloud technologies: hands-on with AWS (S3, EC2, ECS, RDS, Firehose, ClickHouse, Lambda, Redshift). Visualization & reporting: knowledge of Power BI, Tableau, or Grafana. CI/CD & Infrastructure as Code (IaC): familiarity with Terraform, CloudFormation, Git, Docker, and Kubernetes. (ref:hirist.tech)
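
To make the real-time filtering and deduplication responsibilities concrete, here is a minimal sketch using the kafka-python client; the topic name, broker address, and event fields are hypothetical, and the in-memory seen-set stands in for the Redis-backed state a production pipeline would use.

```python
# A minimal sketch using kafka-python (pip install kafka-python).
# Topic name, bootstrap server, and event fields are hypothetical.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "web_events",                            # hypothetical topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
)

seen_ids = set()  # in production this state would live in Redis, not memory

for message in consumer:
    event = message.value
    event_id = event.get("event_id")
    # Filter: skip malformed events; deduplicate: skip ids already seen
    if event_id is None or event_id in seen_ids:
        continue
    seen_ids.add(event_id)
    # Forward: here we just print; a real pipeline would write to
    # ClickHouse or publish to another Kafka topic
    print(event_id, event.get("url"))
```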

Posted 1 day ago


5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Spendflo is a fast-growing Series A startup helping companies streamline how they procure, manage, and optimize their software and services. Backed by top-tier investors, we're building the most intelligent, automated platform for procurement operations. We are now looking for a Senior Data Engineer to design, build, and scale our data infrastructure. You'll be the backbone of all data movement at Spendflo, from ingestion to transformation to reporting.

What You'll Do

Design, implement, and own the end-to-end data architecture at Spendflo. Build and maintain robust, scalable ETL/ELT pipelines across multiple sources and systems. Develop and optimize data models for analytics, reporting, and product needs. Own the reporting layer and work with PMs, analysts, and leadership to deliver actionable data. Ensure data quality, consistency, and lineage through validation and monitoring. Collaborate with engineering, product, and data science teams to build seamless data flows. Optimize data storage and query performance for scale and speed. Own documentation for pipelines, models, and data flows. Stay current with the latest data tools and bring in the right technologies. Mentor junior data engineers and help establish data best practices.

Required Qualifications

5+ years of experience as a data engineer, preferably in a product/startup environment. Strong expertise in building ETL/ELT pipelines using modern frameworks (e.g., Dagster, dbt, Airflow). Deep knowledge of data modeling (star/snowflake schemas, denormalization, dimensional modeling). Hands-on with SQL (advanced queries, performance tuning, window functions, etc.). Experience with cloud data warehouses like Redshift, BigQuery, Snowflake, or similar. Comfortable working with cloud platforms (AWS/GCP/Azure) and tools like S3, Lambda, etc. Exposure to BI tools like Looker, Power BI, Tableau, or equivalent. Strong debugging and performance tuning skills. Excellent communication and documentation skills.

Preferred Qualifications

Built or managed large-scale, cloud-native data pipelines. Experience with real-time or stream processing (Kafka, Kinesis, etc.). Understanding of data governance, privacy, and security best practices. Exposure to machine learning pipelines or collaboration with data science teams. Startup experience: able to handle ambiguity, fast pace, and end-to-end ownership. (ref:hirist.tech)

Posted 1 day ago


0 years

0 Lacs

Delhi, India

On-site


Key Responsibilities

Design, build, and maintain scalable, reliable, and efficient data pipelines to support data analytics and business intelligence needs. Optimize and automate data workflows, enhancing the efficiency of data processing and reducing latency. Implement and maintain data storage solutions, ensuring that data is organized, secure, and readily accessible. Provide expertise in ETL processes, data wrangling, and data transformation techniques. Collaborate with technology teams to ensure that data engineering solutions align with overall business goals. Stay current with industry best practices and emerging technologies in data engineering, implementing improvements as needed.

Qualifications

Bachelor's or Master's degree in Computer Science, Information Technology, Engineering, or a related field. Experience with Agile methodologies and software development project management.

Required Skills

Proven experience in data engineering, with expertise in building and managing data pipelines, ETL processes, and data warehousing. Proficiency in SQL, Python, and other programming languages commonly used in data engineering. Experience with cloud platforms such as AWS, Azure, or Google Cloud, and familiarity with cloud-based data storage and processing tools (e.g., S3, Redshift, BigQuery, etc.). Good to have: familiarity with big data technologies (e.g., Hadoop, Spark) and real-time data processing. Strong understanding of database management systems and data modeling techniques. Experience with BI tools like Tableau or Power BI, along with ETL tools like Alteryx or similar, and the ability to work closely with analytics teams. High attention to detail and commitment to data quality and accuracy. Ability to work independently and as part of a team, with strong collaboration skills. Highly adaptive and comfortable working within a complex, fast-paced environment. (ref:hirist.tech)

Posted 1 day ago


0 years

0 Lacs

Delhi, India

On-site


What You'll Do

Architect and scale modern data infrastructure: ingestion, transformation, warehousing, and access. Define and drive enterprise data strategy: governance, quality, security, and lifecycle management. Design scalable data platforms that support both operational insights and ML/AI applications. Translate complex business requirements into robust, modular data systems. Lead cross-functional teams of engineers, analysts, and developers on large-scale data initiatives. Evaluate and implement best-in-class tools for orchestration, warehousing, and metadata management. Establish technical standards and best practices for data engineering at scale. Spearhead integration efforts to unify data across legacy and modern platforms.

What You Bring

Experience in data engineering, architecture, or backend systems. Strong grasp of system design, distributed data platforms, and scalable infrastructure. Deep hands-on experience with cloud platforms (AWS, Azure, or GCP) and tools like Redshift, BigQuery, Snowflake, S3, and Lambda. Expertise in data modeling (OLTP/OLAP), ETL pipelines, and data warehousing. Experience with big data ecosystems: Kafka, Spark, Hive, Presto. Solid understanding of data governance, security, and compliance frameworks. Proven track record of technical leadership and mentoring. Strong collaboration and communication skills to align technology with business. Bachelor's or Master's in Computer Science, Data Engineering, or a related field.

Nice To Have (Your Edge)

Experience with real-time data streaming and event-driven architectures. Exposure to MLOps and model deployment pipelines. Familiarity with data DevOps and Infrastructure as Code (Terraform, CloudFormation, CI/CD pipelines). (ref:hirist.tech)

Posted 1 day ago


6.0 - 8.0 years

0 Lacs

Greater Kolkata Area

On-site


Role Overview

We are looking for a highly skilled and motivated Senior Data Scientist to join our team. In this role, you will design, develop, and implement advanced data models and algorithms that drive strategic decision-making across the organization. You will work closely with product, engineering, and business teams to uncover insights and deliver data-driven solutions that enhance the performance and scalability of our products and services.

Key Responsibilities

Develop, deploy, and maintain machine learning models and advanced analytics pipelines. Analyze complex datasets to identify trends, patterns, and actionable insights. Collaborate with cross-functional teams (Engineering, Product, Marketing) to define and execute data science strategies. Build and improve predictive models using supervised and unsupervised learning techniques. Translate business problems into data science projects with measurable impact. Design and conduct experiments, A/B tests, and statistical analyses to validate hypotheses and guide product development. Create dashboards and visualizations to communicate findings to technical and non-technical stakeholders. Stay up to date with industry trends, best practices, and emerging technologies in data science and machine learning. Ensure data quality and governance standards are maintained across all projects.

Required Skills And Qualifications

6-8 years of hands-on experience in data science, machine learning, and statistical modeling. Proficiency in programming languages such as Python, R, and SQL. Strong foundation in data analysis, data wrangling, and feature engineering. Expertise in building and deploying models using tools such as scikit-learn, TensorFlow, PyTorch, or similar frameworks. Experience with big data platforms (e.g., Spark, Hadoop) and cloud services (AWS, GCP, Azure) is a plus. Deep understanding of statistical techniques including hypothesis testing, regression, and Bayesian methods. Excellent communication skills with the ability to explain complex technical concepts to non-technical audiences. Proven track record of working on cross-functional projects and delivering data-driven solutions that impact business outcomes. Master's or Ph.D. in Data Science, Computer Science, Statistics, Mathematics, or a related field. Experience with NLP, computer vision, or deep learning techniques. Knowledge of data engineering principles and ETL processes. Familiarity with version control (Git), agile methodologies, and CI/CD pipelines. Contributions to open-source data science projects or publications in relevant fields. (ref:hirist.tech)

Posted 1 day ago


9.0 years

0 Lacs

Greater Kolkata Area

On-site


Job Description

9+ years of working experience in data engineering and data analytics projects, implementing data warehouse, data lake, and lakehouse architectures and the associated ETL/ELT patterns. Worked as a data modeller on one or two implementations, creating and implementing data models and database designs using dimensional and ER models. Good knowledge of and experience in modelling complex scenarios like many-to-many relationships, SCD types, late-arriving facts and dimensions, etc. Hands-on experience with at least one data modelling tool such as Erwin, ER/Studio, Enterprise Architect, or SQLDBM. Experience working closely with business stakeholders and business analysts to understand functional requirements and translate them into data models and database designs. Experience in creating conceptual and logical models and translating them into physical models that address both functional and non-functional requirements. Strong knowledge of SQL; able to write complex queries and profile data to understand relationships and data quality issues. Very strong understanding of database modelling and design principles like normalization, denormalization, and isolation levels. Experience in performance optimization through database design (physical modelling). Good communication skills. (ref:hirist.tech)
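
Since the posting highlights SCD handling, a small self-contained sketch of a Type 2 slowly changing dimension update follows, using Python's built-in sqlite3; the dim_customer table and its columns are invented for illustration.

```python
# A minimal SCD Type 2 sketch using Python's built-in sqlite3.
# Table and column names are hypothetical.
import sqlite3
from datetime import date

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE dim_customer (
        customer_id TEXT,
        city        TEXT,
        valid_from  TEXT,
        valid_to    TEXT,      -- NULL means current row
        is_current  INTEGER
    )
""")
conn.execute(
    "INSERT INTO dim_customer VALUES ('C1', 'Pune', '2023-01-01', NULL, 1)"
)

def apply_scd2(conn, customer_id, new_city, effective):
    """Close the current row if the tracked attribute changed, then insert
    a new current row (Type 2: history is preserved, not overwritten)."""
    row = conn.execute(
        "SELECT city FROM dim_customer WHERE customer_id=? AND is_current=1",
        (customer_id,),
    ).fetchone()
    if row and row[0] == new_city:
        return  # no change, nothing to do
    conn.execute(
        "UPDATE dim_customer SET valid_to=?, is_current=0 "
        "WHERE customer_id=? AND is_current=1",
        (effective, customer_id),
    )
    conn.execute(
        "INSERT INTO dim_customer VALUES (?, ?, ?, NULL, 1)",
        (customer_id, new_city, effective),
    )

apply_scd2(conn, "C1", "Bengaluru", date.today().isoformat())
for r in conn.execute("SELECT * FROM dim_customer ORDER BY valid_from"):
    print(r)  # old Pune row closed out, new Bengaluru row current
```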

Posted 1 day ago


Exploring ETL Jobs in India

The ETL (Extract, Transform, Load) job market in India is thriving with numerous opportunities for job seekers. ETL professionals play a crucial role in managing and analyzing data effectively for organizations across various industries. If you are considering a career in ETL, this article will provide you with valuable insights into the job market in India.

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Pune
  4. Hyderabad
  5. Chennai

These cities are known for their thriving tech industries and often have a high demand for ETL professionals.

Average Salary Range

The average salary range for ETL professionals in India varies based on experience levels. Entry-level positions typically start at around ₹3-5 lakhs per annum, while experienced professionals can earn upwards of ₹10-15 lakhs per annum.

Career Path

In the ETL field, a typical career path may include roles such as:

  • Junior ETL Developer
  • ETL Developer
  • Senior ETL Developer
  • ETL Tech Lead
  • ETL Architect

As you gain experience and expertise, you can progress to higher-level roles within the ETL domain.

Related Skills

Alongside ETL, professionals in this field are often expected to have skills in:

  • SQL
  • Data Warehousing
  • Data Modeling
  • ETL Tools (e.g., Informatica, Talend)
  • Database Management Systems (e.g., Oracle, SQL Server)

Having a strong foundation in these related skills can enhance your capabilities as an ETL professional.

Interview Questions

Here are 25 interview questions that you may encounter in ETL job interviews:

  • What is ETL and why is it important? (basic)
  • Explain the difference between ETL and ELT processes. (medium)
  • How do you handle incremental loads in ETL processes? (medium) (see the sketch after this list)
  • What is a surrogate key in the context of ETL? (basic)
  • Can you explain the concept of data profiling in ETL? (medium)
  • How do you handle data quality issues in ETL processes? (medium)
  • What are some common ETL tools you have worked with? (basic)
  • Explain the difference between a full load and an incremental load. (basic)
  • How do you optimize ETL processes for performance? (medium)
  • Can you describe a challenging ETL project you worked on and how you overcame obstacles? (advanced)
  • What is the significance of data cleansing in ETL? (basic)
  • How do you ensure data security and compliance in ETL processes? (medium)
  • Have you worked with real-time data integration in ETL? If so, how did you approach it? (advanced)
  • What are the key components of an ETL architecture? (basic)
  • How do you handle data transformation requirements in ETL processes? (medium)
  • What are some best practices for ETL development? (medium)
  • Can you explain the concept of change data capture in ETL? (medium)
  • How do you troubleshoot ETL job failures? (medium)
  • What role does metadata play in ETL processes? (basic)
  • How do you handle complex transformations in ETL processes? (medium)
  • What is the importance of data lineage in ETL? (basic)
  • Have you worked with parallel processing in ETL? If so, explain your experience. (advanced)
  • How do you ensure data consistency across different ETL jobs? (medium)
  • Can you explain the concept of slowly changing dimensions in ETL? (medium)
  • How do you document ETL processes for knowledge sharing and future reference? (basic)
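
As referenced from the incremental-load question above, here is a minimal sketch of a watermark-driven incremental load using Python's built-in sqlite3: each run extracts only rows modified since the last recorded high-water mark, then advances the mark. The table and column names are hypothetical.

```python
# A minimal watermark-driven incremental load, using Python's sqlite3.
# Source/target table names and columns are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src_orders (id INTEGER, amount REAL, updated_at TEXT);
    CREATE TABLE tgt_orders (id INTEGER, amount REAL, updated_at TEXT);
    CREATE TABLE etl_watermark (table_name TEXT, last_ts TEXT);
    INSERT INTO etl_watermark VALUES ('src_orders', '1970-01-01T00:00:00');
    INSERT INTO src_orders VALUES (1, 100.0, '2024-06-01T10:00:00');
    INSERT INTO src_orders VALUES (2, 250.0, '2024-06-02T09:30:00');
""")

def incremental_load(conn):
    """Copy only rows newer than the stored watermark, then advance it."""
    (last_ts,) = conn.execute(
        "SELECT last_ts FROM etl_watermark WHERE table_name='src_orders'"
    ).fetchone()
    rows = conn.execute(
        "SELECT id, amount, updated_at FROM src_orders WHERE updated_at > ?",
        (last_ts,),
    ).fetchall()
    conn.executemany("INSERT INTO tgt_orders VALUES (?, ?, ?)", rows)
    if rows:
        new_mark = max(r[2] for r in rows)
        conn.execute(
            "UPDATE etl_watermark SET last_ts=? WHERE table_name='src_orders'",
            (new_mark,),
        )
    return len(rows)

print(incremental_load(conn))  # 2 rows on the first run
print(incremental_load(conn))  # 0 rows on the second run: nothing new
```

The same pattern scales up directly: in a warehouse the watermark table lives alongside the target, and the extract query runs against the source system instead of a local table.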

Closing Remarks

As you explore ETL jobs in India, remember to showcase your skills and expertise confidently during interviews. With the right preparation and a solid understanding of ETL concepts, you can embark on a rewarding career in this dynamic field. Good luck with your job search!
