
10314 ETL Jobs - Page 38

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Description
Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Consultant Specialist.

In this role, you will:
- Ingest and provision raw datasets, enriched tables, and/or curated, re-usable data assets to enable Cybersecurity use cases.
- Drive improvements in the reliability and frequency of data ingestion, including increasing real-time coverage.
- Support and enhance data ingestion infrastructure and pipelines.
- Design and implement data pipelines that collect data from disparate sources across the enterprise and from external sources, transport that data, and deliver it to our data platform.
- Build Extract, Transform, and Load (ETL) workflows, using both advanced data manipulation tools and programmatic data manipulation throughout our data flows, ensuring data is available at each stage in the data flow and in the form needed for each system, service, and customer along that flow (a simple sketch follows this posting).
- Identify and onboard data sources using existing schemas and, where required, conduct exploratory data analysis to investigate and determine new schemas.

Requirements
To be successful in this role, you should meet the following requirements:
- Ability to script (Bash/PowerShell, Azure CLI), code (Python, C#, Java), and query (SQL, Kusto Query Language), coupled with experience with software version control systems (e.g., GitHub) and CI/CD systems.
- Programming experience in PowerShell, Terraform, and Python; Windows command prompt and object-oriented programming languages.
- Data acquisition and cloud-based data pipelines (Azure preferred).
- Data transport and data cleaning.
- Data engineering pipeline automation, productionisation, and optimisation.
- Technical knowledge and breadth of Azure technology services (Identity, Networking, Compute, Storage, Web, Containers, Databases).
- Cloud and big data technologies such as Azure Cloud, Azure IAM, Azure Active Directory (Azure AD), Azure Data Factory, Azure Databricks, Azure Functions, Azure Kubernetes Service, Azure Logic Apps, Azure Monitor, Azure Log Analytics, Azure Compute, Azure Storage, Azure Data Lake Store, S3, Synapse Analytics, and/or Power BI.

www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.

Issued by – HSBC Software Development India
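For illustration only (not HSBC's actual tooling): a minimal sketch of the extract/transform/load flow the description outlines, using just the Python standard library. File paths and field names are hypothetical.

```python
# Hypothetical ETL sketch: CSV source -> normalised JSON-lines output.
# Paths and field names are placeholders, not any employer's real schema.
import csv
import json

def extract(path: str) -> list[dict]:
    # Pull raw rows from a source extract (CSV assumed).
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows: list[dict]) -> list[dict]:
    # Normalise rows into the shape the downstream system expects,
    # dropping records that lack a key.
    return [
        {"event_id": r["id"], "severity": r["severity"].upper()}
        for r in rows
        if r.get("id")
    ]

def load(rows: list[dict], path: str) -> None:
    # Deliver to the next stage of the data flow (JSON lines assumed).
    with open(path, "w") as f:
        for r in rows:
            f.write(json.dumps(r) + "\n")

load(transform(extract("raw_alerts.csv")), "curated_alerts.jsonl")
```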

Posted 3 days ago

Apply

0 years

0 Lacs

India

Remote


Company Description
ThreatXIntel is a startup cyber security company dedicated to protecting businesses and organizations from cyber threats. Our services include cloud security, web and mobile security testing, cloud security assessment, and DevSecOps. We provide customized, affordable solutions tailored to meet the specific needs of our clients, regardless of their size.

Role Description
We are seeking a freelance GCP Data Engineer with expertise in Scala, Apache Spark, and Airflow, and experience with the Automic and Laminar frameworks. The role focuses on designing and maintaining scalable data pipelines and workflow automation within the Google Cloud Platform ecosystem.

Key Responsibilities
- Design, build, and optimize data pipelines using Scala and Apache Spark on Google Cloud Platform (GCP)
- Orchestrate ETL workflows using Apache Airflow (see the sketch after this posting)
- Integrate and automate data processing using Automic job scheduling
- Utilize Laminar for reactive programming or stream processing within pipelines (if applicable)
- Collaborate with cross-functional teams to define data flows and transformations
- Ensure pipeline performance, scalability, and monitoring across environments
- Troubleshoot and resolve issues in batch and streaming data processes

Required Skills
- Strong programming skills in Scala
- Hands-on experience with Apache Spark for distributed data processing
- Experience working with GCP data services (e.g., BigQuery, Cloud Storage, Dataflow preferred)
- Proficiency with Airflow for workflow orchestration
- Experience using Automic for job scheduling
- Familiarity with Laminar or similar frameworks for reactive or stream-based processing
- Good understanding of data engineering best practices and pipeline optimization
- Ability to work independently and communicate effectively with remote teams
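For illustration, a minimal Airflow DAG in the orchestration style this role describes: a scheduled DAG that submits a compiled Scala/Spark job. The DAG name, JAR path, main class, and connection ID are hypothetical, and the operator assumes the apache-airflow-providers-apache-spark package is installed.

```python
# Hypothetical daily DAG submitting a Scala/Spark ETL application.
from datetime import datetime

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

with DAG(
    dag_id="daily_gcp_etl",            # placeholder DAG name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",                 # Airflow 2.4+ keyword
    catchup=False,
) as dag:
    # Submit the compiled Spark application via the "spark_default" connection.
    run_spark_etl = SparkSubmitOperator(
        task_id="run_spark_etl",
        application="gs://example-bucket/jars/etl-pipeline.jar",  # placeholder JAR
        java_class="com.example.EtlJob",                          # placeholder class
        conn_id="spark_default",
    )
```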

Posted 3 days ago

Apply

0 years

0 Lacs

India

On-site


Company Description
ThreatXIntel is a startup cyber security company specializing in cloud security, web and mobile security testing, cloud security assessment, and DevSecOps. We offer customized, affordable solutions tailored to meet the specific needs of businesses of all sizes. Our proactive approach to security involves continuous monitoring and testing to identify vulnerabilities before they can be exploited.

Role Description
We are looking for a skilled freelance Data Engineer with expertise in PySpark and AWS data services, particularly S3 and Redshift. Familiarity with Salesforce data integration is a plus. This role focuses on building scalable data pipelines and supporting analytics use cases in a cloud-native environment.

Key Responsibilities
- Design and develop ETL/ELT data pipelines using PySpark for large-scale data processing (see the sketch after this posting)
- Ingest, transform, and store data across AWS S3 (data lake) and Amazon Redshift (data warehouse)
- Integrate data from Salesforce into the cloud data ecosystem for analysis
- Optimize data workflows for performance and cost-efficiency
- Write efficient code and queries for structured and unstructured data
- Collaborate with analysts and stakeholders to deliver clean, usable datasets

Required Skills
- Strong hands-on experience with PySpark
- Proficiency in AWS services, especially S3 and Redshift
- Basic working knowledge of the Salesforce data structure or API
- Ability to write complex SQL for data transformation and reporting
- Familiarity with version control and Agile collaboration tools
- Good communication and documentation skills
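A hedged sketch of the S3-to-Redshift pattern this posting centres on: read raw data from an S3 data lake with PySpark, aggregate it, and load it into Redshift over JDBC. Bucket, cluster, table, and credential values are placeholders, and the write assumes a Redshift-compatible JDBC driver is on the Spark classpath.

```python
# Hypothetical PySpark job: S3 (Parquet) -> aggregate -> Redshift (JDBC).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("s3_to_redshift").getOrCreate()

# Read raw events from the S3 data lake (Parquet assumed).
raw = spark.read.parquet("s3a://example-lake/raw/orders/")

# Simple cleaning/aggregation: daily revenue from completed orders.
daily = (
    raw.filter(F.col("status") == "complete")
       .groupBy(F.to_date("created_at").alias("order_date"))
       .agg(F.sum("amount").alias("revenue"))
)

# Load into Redshift over JDBC (placeholder connection details).
(daily.write.format("jdbc")
      .option("url", "jdbc:redshift://example-cluster:5439/analytics")
      .option("dbtable", "public.daily_revenue")
      .option("user", "etl_user")
      .option("password", "***")
      .mode("overwrite")
      .save())
```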

Posted 3 days ago

Apply

1.0 - 3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Job Summary
We are hiring a Data Analyst to turn complex data into actionable insights using Power BI, Tableau, QuickSight, and SQL. You'll collaborate with teams across the organization to design dashboards, optimize data pipelines, and drive data literacy. This is an onsite role in Noida, ideal for problem-solvers passionate about data storytelling.

Key Responsibilities
- Dashboard & Reporting: Develop interactive dashboards in Power BI, Tableau, and QuickSight with drill-down capabilities. Automate reports and ensure real-time data accuracy.
- Data Analysis & SQL: Write advanced SQL queries (window functions, query optimization) for large datasets (see the example after this posting). Perform root-cause analysis on data discrepancies.
- ETL & Data Pipelines: Clean, transform, and model data using ETL tools (e.g., Python scripts). Work with cloud data warehouses (Snowflake, Redshift, BigQuery).
- Stakeholder Collaboration: Translate business requirements into technical specs. Train non-technical teams on self-service analytics tools.
- Performance Optimization: Improve dashboard load times and SQL query efficiency. Implement data governance best practices.

Technical Skills
Must-Have:
✔ BI Tools: Power BI (DAX, Power Query), Tableau (LODs, parameters), QuickSight (SPICE, ML insights)
✔ SQL: Advanced querying, indexing, stored procedures
✔ Data Modeling: Star schema, normalization, performance tuning
✔ Excel/Sheets: PivotTables, VLOOKUP, Power Query
Nice-to-Have:
☑ Programming: Python/R (Pandas, NumPy) for automation
☑ Cloud Platforms: AWS (QuickSight, S3), Azure (Synapse), GCP
☑ Version Control: Git, GitHub

Soft Skills
- Strong communication to explain insights to non-technical teams.
- Curiosity to explore data anomalies and trends.
- Project management (Agile/Scrum familiarity is a plus).

Qualifications
- Bachelor's/Master's in Data Science, Computer Science, or a related field.
- 1-3 years in data analysis, with hands-on experience in Power BI/Tableau/QuickSight.
- Portfolio of dashboards or GitHub projects (preferred).
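As a small illustration of the "advanced SQL (window functions)" requirement, the query below ranks each customer's orders by recency with ROW_NUMBER(). Table and column names are hypothetical; sqlite3 serves only as a stand-in engine here, and the same SQL runs on Snowflake, Redshift, or BigQuery.

```python
# Window-function example over a hypothetical "orders" table.
import sqlite3  # stand-in engine; requires SQLite >= 3.25 for window functions

query = """
SELECT
    customer_id,
    order_id,
    amount,
    ROW_NUMBER() OVER (
        PARTITION BY customer_id       -- restart numbering per customer
        ORDER BY order_ts DESC         -- most recent order gets rank 1
    ) AS recency_rank
FROM orders
"""

with sqlite3.connect("example.db") as conn:  # placeholder database
    for row in conn.execute(query):
        print(row)
```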

Posted 3 days ago

Apply

5.0 years

0 Lacs

India

Remote


About AdZeta
AdZeta is a technology company that leverages AI-powered smart bidding technology to drive high LTV and profitability for e-commerce and D2C brands. We turn first-party data into predictive, value-based bidding and personalised customer journeys. As we expand our data practice, we're searching for a Data Strategist who can translate complex datasets into clear business stories that drive measurable revenue lift.

- End-to-End Analytics: Build and maintain marketing dashboards in Tableau or Power BI that surface channel-level ROI, LTV, and incrementality.
- Data Engineering & ETL: Design ETL pipelines in Google Cloud (BigQuery) or comparable environments; automate data blends from GA4, Adobe Analytics, CRMs, and CDPs.
- Audience & Personalisation: Partner with media teams to create high-value audiences; inform campaign personalisation strategies using segmentation, propensity scoring, and AI models.
- Storytelling & Advisory: Turn raw numbers into board-ready insights; present findings that influence creative, media, and product roadmaps.
- Experimentation & AI: Test and deploy AI/ML frameworks to predict churn, optimise bidding, and generate next-best-action recommendations.

Required Skills & Experience
- 5+ years in marketing analytics, martech, or data-driven consulting.
- Proficiency in SQL plus one BI tool (Tableau or Power BI).
- Hands-on experience with GA4, Adobe Analytics, and at least one CDP or audience platform.
- Fluency in ETL concepts and cloud data warehouses (BigQuery/GCP preferred).
- Strong data-storytelling chops, able to persuade both technical and non-technical stakeholders.
- Exposure to AI/ML concepts or tooling (Vertex AI, AutoML, or similar) is a big plus.

Nice-to-Have
- Experience with campaign analytics for paid search, paid social, or programmatic.
- Familiarity with server-side tagging, GTM, or Cloud Functions.
- Previous work in e-commerce, D2C, or subscription businesses.

Why AdZeta
- Remote-first and async-friendly culture with flexible PTO.
- Ownership: competitive salary + equity option pool.
- Annual learning stipend for certs, conferences, or AI experimentation.
- Direct line of sight to the C-suite; your insights shape product and go-to-market roadmaps.

Application Process
1. Apply via LinkedIn with a short note on a recent analytics project you loved.
2. 30-min intro call with the People team.
3. Data deep-dive and whiteboard session with the Analytics Lead.
4. Final culture-fit chat with the Founder & CEO.

Posted 3 days ago

Apply

3.0 years

0 Lacs

India

Remote


Title: Data Engineer
Location: Remote
Employment type: Full-time with BayOne

We're looking for a skilled and motivated Data Engineer to join our growing team and help us build scalable data pipelines, optimize data platforms, and enable real-time analytics.

What You'll Do
- Design, develop, and maintain robust data pipelines using tools like Databricks, PySpark, SQL, Fabric, and Azure Data Factory (see the sketch after this posting)
- Collaborate with data scientists, analysts, and business teams to ensure data is accessible, clean, and actionable
- Work on modern data lakehouse architectures and contribute to data governance and quality frameworks

Tech Stack
Azure | Databricks | PySpark | SQL

What We're Looking For
- 3+ years of experience in data engineering or analytics engineering
- Hands-on experience with cloud data platforms and large-scale data processing
- Strong problem-solving mindset and a passion for clean, efficient data design

Job Description:
- Minimum 3 years of experience with modern data engineering, data warehousing, and data lake technologies on cloud platforms like Azure, AWS, GCP, Databricks, etc.; Azure experience is preferred over other cloud platforms.
- 5 years of proven experience with SQL, schema design, and dimensional data modelling.
- Solid knowledge of data warehouse best practices, development standards, and methodologies.
- Experience with ETL/ELT tools like ADF, Informatica, Talend, etc., and data warehousing technologies like Azure Synapse, Microsoft Fabric, Azure SQL, Amazon Redshift, Snowflake, Google BigQuery, etc.
- Strong experience with big data tools (Databricks, Spark, etc.) and programming skills in PySpark and Spark SQL.
- An independent self-learner with a "let's get this done" approach and the ability to work in a fast-paced, dynamic environment.
- Excellent communication and teamwork abilities.

Nice-to-Have Skills:
- Event Hub, IoT Hub, Azure Stream Analytics, Azure Analysis Services, Cosmos DB knowledge.
- SAP ECC/S/4 and HANA knowledge.
- Intermediate knowledge of Power BI.
- Azure DevOps and CI/CD deployments, cloud migration methodologies and processes.

BayOne is an Equal Opportunity Employer and does not discriminate against any employee or applicant for employment because of race, color, sex, age, religion, sexual orientation, gender identity, status as a veteran, and basis of disability or any federal, state, or local protected class.

This job posting represents the general duties and requirements necessary to perform this position and is not an exhaustive statement of all responsibilities, duties, and skills required. Management reserves the right to revise or alter this job description.
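A minimal, assumption-laden sketch of the lakehouse-style transformation work described above: a PySpark job promoting a raw "bronze" Delta table to a cleaned "silver" table. Paths and column names are placeholders, and Delta Lake is assumed to be available on the cluster (as it is on Databricks).

```python
# Hypothetical bronze -> silver promotion in a Delta lakehouse.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze_to_silver").getOrCreate()

# Read the raw landing table (Delta format assumed).
bronze = spark.read.format("delta").load("/mnt/lake/bronze/customers")

# Basic cleansing: drop keyless records, trim names, deduplicate.
silver = (
    bronze.dropna(subset=["customer_id"])
          .withColumn("name", F.trim(F.col("name")))
          .dropDuplicates(["customer_id"])
)

# Publish the cleaned table for downstream analytics.
silver.write.format("delta").mode("overwrite").save("/mnt/lake/silver/customers")
```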

Posted 3 days ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Position Title: Data Engineer
Experience Range: 4+ years
Location: Hyderabad and Gurgaon (Hybrid)
Notice Period: Immediate to 15 days
Primary Skills Required: Kafka, Spark, Python, SQL, shell scripting, Databricks, Snowflake, AWS and Azure cloud

What you will do:
1. Provide expertise and guidance as a senior experienced engineer in solution design and strategy for Data Lake and analytics-oriented data operations.
2. Design, develop, and implement end-to-end data solutions (storage, integration, processing, access) on hypervendor platforms like AWS and Azure.
3. Architect and implement integration, ETL, and data movement solutions using SQL Server Integration Services (SSIS)/C#, AWS Glue, MSK and/or Confluent, and other COTS technologies.
4. Prepare documentation and designs for data solutions and applications.
5. Design and implement distributed analytics platforms for analyst teams.
6. Design and implement streaming solutions using Snowflake, Kafka, and Confluent (see the sketch after this posting).
7. Migrate data from traditional relational database systems (e.g., SQL Server, Postgres) to AWS relational databases such as Amazon RDS, Aurora, Redshift, DynamoDB, Cloudera, Snowflake, Databricks, etc.

Who you are:
1. Bachelor's degree in Computer Science or Software Engineering.
2. 4+ years of experience in the data domain as an engineer and architect.
3. Demonstrated sense of ownership and accountability in delivering high-quality data solutions independently or with minimal handholding.
4. Ability to thrive in a dynamic environment, adapting to evolving requirements and challenges.
5. A solid understanding of AWS and Azure storage solutions such as S3, EFS, and EBS.
6. A solid understanding of AWS and Azure compute solutions such as EC2.
7. Experience implementing solutions on AWS and Azure relational databases such as MSSQL, SSIS, Amazon Redshift, RDS, and Aurora.
8. Experience implementing solutions leveraging ElastiCache and DynamoDB.
9. Experience designing and implementing enterprise data warehouses and data marts/lakes.
10. Experience with star or snowflake schemas.
11. Experience with R or Python and other emerging technologies in D&A.
12. Understanding of slowly changing dimensions and the Data Vault model.

AWS and Azure certifications are preferred.
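To make the streaming responsibility concrete, here is a hedged sketch of a Kafka-to-lake ingestion job using Spark Structured Streaming. Broker, topic, and storage locations are placeholders, the Kafka source assumes the spark-sql-kafka package is on the classpath, and a real pipeline would continue from the landed files into Snowflake.

```python
# Hypothetical streaming ingestion: Kafka topic -> raw Parquet landing zone.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka_ingest").getOrCreate()

# Subscribe to the "orders" topic and keep the message payload as a string.
events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
         .option("subscribe", "orders")                     # placeholder topic
         .load()
         .selectExpr("CAST(value AS STRING) AS payload")
)

# Land the raw stream as Parquet for downstream warehouse loads.
query = (
    events.writeStream.format("parquet")
          .option("path", "s3a://example-lake/raw/orders/")
          .option("checkpointLocation", "s3a://example-lake/_checkpoints/orders/")
          .start()
)
query.awaitTermination()
```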

Posted 3 days ago

Apply

3.0 - 8.0 years

12 - 20 Lacs

Noida, Gurugram, Mumbai (All Areas)

Work from Office


3+ years of experience in data engineering or backend development with a focus on highly scalable data systems. Experience at a B2B SaaS or AI company, ideally in a high-growth or startup environment, designing and scaling cloud-based data platforms (AWS, GCP, Azure).

Posted 3 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Company Overview
Cyanous Software Private Limited is a leading global information technology, consulting, and business process services company. We drive digital transformation by harnessing the power of cognitive computing, hyper-automation, robotics, cloud, analytics, and emerging technologies. Our mission is to empower every person and organization to achieve more by delivering scalable, intelligent, and high-impact technology solutions worldwide.

Position Summary
We are seeking an experienced Data Engineer to join our dynamic team. This is a full-time, on-site role located in Chennai or Pune. The ideal candidate will be responsible for building robust, scalable data pipelines and supporting advanced analytics initiatives. You will collaborate with cross-functional teams to enable data-driven decision-making across the organization.

Key Responsibilities
- Design, develop, and maintain scalable data pipelines using Snowflake, Azure Data Factory, and other ETL tools.
- Manage end-to-end data flows from diverse sources, ensuring high data quality and availability.
- Build and optimize data models for analytics and reporting needs.
- Analyze complex datasets to extract actionable insights and support business intelligence initiatives.
- Collaborate with data scientists, analysts, and other engineering teams to integrate data solutions into the overall architecture.
- Ensure best practices in data governance, privacy, and security are followed throughout the data lifecycle.

Required Skills & Qualifications
- Strong hands-on experience in data engineering, including ETL development, data modeling, and data warehousing.
- Proficiency with Snowflake and Azure Data Factory is mandatory.
- Solid programming skills in Python, SQL, and preferably R.
- Experience working with complex and large-scale data sets in enterprise environments.
- Knowledge of cloud-based data architectures and modern data platforms.
- Excellent analytical, problem-solving, and communication skills.
- Bachelor's or Master's degree in Computer Science, Data Science, or a related technical field.

Nice To Have
- Experience with DevOps practices in data engineering workflows.
- Familiarity with other cloud services (AWS/GCP), CI/CD pipelines, and data orchestration tools.

(ref:hirist.tech)

Posted 3 days ago

Apply

8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Company Description
Cyanous is a leading global information technology, consulting, and business process services company. Our mission is to empower every individual and organization to achieve more and adapt to the digital world. We leverage cognitive computing, hyper-automation, robotics, cloud, analytics, and emerging technologies to drive transformation and success for our clients. Dedicated to addressing global challenges, we collaborate with employees, clients, partners, public institutions, and community organizations globally.

Role Description
This is a full-time role for a Big Data Developer based on-site in Chennai.

Responsibilities
The Big Data Developer will be responsible for designing, developing, and managing data processing systems. This includes working on data integration, Extract Transform Load (ETL) processes, and ensuring data accuracy and integrity. The role also involves collaborating with cross-functional teams to deliver analytics solutions and continuously improve existing data solutions.

Qualifications
- Proficiency in data engineering and big data technologies.
- Experience with Extract Transform Load (ETL) processes and data warehousing.
- Strong background in software development.
- Excellent problem-solving and analytical skills.
- Ability to work collaboratively with cross-functional teams.
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Experience in the IT consulting industry is a plus.

Must Have
- Minimum 8 years of experience in Spark, Scala, and big data, with exposure to cloud platforms (AWS, Azure, GCP) for big data processing and storage.
- Strong experience in Azure DLS.
- Strong experience in Databricks and data pipelines.
- Experience in Hadoop.
- Strong backend development expertise, particularly in Java (Spring).

Nice To Have
- Agile delivery experience.

(ref:hirist.tech)

Posted 3 days ago

Apply

5.0 years

0 Lacs

Udaipur, Tripura, India

On-site


About Arcgate
Arcgate is a dynamic and rapidly growing team of 2500+ professionals passionate about data and technology. We deliver cutting-edge solutions to some of the world's most innovative startups and market leaders across application development, quality engineering, AI data preparation, data enrichment, search relevance, and more.

Responsibilities
- Design, build, and optimize Python-based data pipelines that handle large, complex, and messy datasets efficiently.
- Develop and manage scalable data infrastructure, including databases and data warehouses such as Snowflake and Azure Data Factory, ensuring reliability and performance.
- Build, maintain, and optimize CDC processes that integrate data from multiple sources into the data warehouse.
- Collaborate closely with data scientists, analysts, and operations teams to gather requirements and deliver high-quality data solutions.
- Perform data quality checks, validation, and verification to ensure data integrity and consistency.
- Support and optimize data flows, ingestion, transformation, and publishing across various systems.
- Work with AWS infrastructure (ECS, RDS, S3), manage deployments using Docker, and package services into containers.
- Use tools like Prefect, Dagster, and dbt to orchestrate and transform data workflows (see the sketch after this posting).
- Implement CI/CD pipelines using Harness and GitHub Actions.
- Monitor system health and performance using DataDog.
- Manage infrastructure orchestration with Terraform and Terragrunt.
- Stay current with industry trends, emerging tools, and best practices in data engineering.
- Coach and mentor junior team members, promoting best practices and skill development.
- Contribute across diverse projects, demonstrating flexibility.

Qualifications
- Bachelor's degree in Computer Science, Engineering, Mathematics, Physics, or a related field.
- 5+ years of demonstrable experience building reliable, scalable data pipelines in production environments.
- Strong experience with Python, SQL programming, and data architecture.
- Hands-on experience with data modeling in Data Lake or Data Warehouse environments (Snowflake preferred).
- Familiarity with Prefect, Dagster, dbt, and ETL/ELT pipeline frameworks.
- Experience with AWS services (ECS, RDS, S3) and containerization using Docker.
- Knowledge of TypeScript, React, and Node.js is a plus for collaborating on the application platform.
- Strong command of GitHub for source control and Jira for change management.
- Strong analytical and problem-solving skills, with a hands-on mindset for wrangling data and solving complex challenges.
- Excellent communication and collaboration skills; ability to work effectively with cross-functional teams.
- A proactive, start-up mindset; adaptable, ambitious, responsible, and ready to contribute wherever needed.
- Passion for delivering high-quality solutions with meticulous attention to detail.
- Enjoyment of working in an inclusive, respectful, and highly collaborative environment where every voice matters.

Benefits
- Competitive salary package.
- Opportunities for growth, learning, and professional development.
- Dynamic, collaborative, and innovation-driven work culture.

(ref:hirist.tech)
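A minimal sketch of the Prefect-based orchestration the posting mentions, assuming the Prefect 2.x API; the task bodies, names, and data are placeholders rather than Arcgate's actual workflows.

```python
# Hypothetical Prefect flow: extract -> transform -> load.
from prefect import flow, task

@task
def extract() -> list[dict]:
    # Placeholder: pull rows from a source system.
    return [{"id": 1, "amount": 42.0}, {"id": 2, "amount": -1.0}]

@task
def transform(rows: list[dict]) -> list[dict]:
    # Placeholder: apply a simple business rule.
    return [r for r in rows if r["amount"] > 0]

@task
def load(rows: list[dict]) -> None:
    # Placeholder: write to the warehouse (e.g., Snowflake).
    print(f"loaded {len(rows)} rows")

@flow
def daily_pipeline():
    load(transform(extract()))

if __name__ == "__main__":
    daily_pipeline()
```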

Posted 3 days ago

Apply

10.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site


Job Description
- Responsible for the successful delivery and closure of multiple projects.
- Facilitates team activities, including daily stand-up meetings, grooming, sprint planning, demonstrations, release planning, and team retrospectives.
- Ensures the team is aware of tasks and delivery dates.
- Develops project scopes and objectives, involving all relevant stakeholders and ensuring technical feasibility.
- Identifies scope creep to ensure projects stay on track, and judges commercial viability and actionable steps.
- Leads sprint planning sessions and periodic conference calls with clients and senior team members to agree on the prioritization of projects and tasks.
- Acts as a central point of contact, takes responsibility for the projects handled, and provides transparency and collaboration with different teams.
- Represents the team's needs and requirements to the client to ensure timelines and quality delivery are practically achievable.
- Builds a trusting and safe environment where problems can be raised and resolved.
- Understands clients' business and processes to provide effective solutions as a technology consultant.
- Reports and escalates to management as needed.
- Quick learner and implementor of a learning path for the team.

Must Have
- 10 to 15 years of total experience in the software development industry, with a minimum of 5 years as a Technical Project Manager.
- Hands-on development experience in backend (LAMP stack or .NET) and frontend (ReactJS, NextJS, AngularJS) technologies, and in managing large-scale projects.
- Experience in managing new development projects with at least an 8-to-10-person team over a duration of 6+ months (excluding ongoing support and maintenance projects/tasks), developing the project and release plan, and adhering to the standard processes of the organization.
- Excellent verbal and written communication skills with both technical and non-technical customers.
- Strong understanding of architecture, design, and implementation of technical solutions.
- Extremely fluent in REST/SOAP APIs with JSON/XML; experience in ETL is a plus.
- A good understanding of N-tier and microservice architecture.
- Well-versed in Agile development methodology and all its ceremonies.
- Excellent problem-solving/troubleshooting skills, particularly in anticipating and solving problems, issues, risks, or concerns before they become critical.
- Manages the customer relationship during delivery and serves as the primary interface with the customer.
- Prepares a clear and effective communications plan and ensures proactive communication of all relevant information to the customer and all stakeholders.
- Experience in creating wireframes and/or presentations to effectively convey technology solutions.

Nice To Have
- Assess and work with the sales team to create and review proposals and contracts to determine a proper project plan.
- Experience in cloud computing: AWS and/or Azure.
- Experience in mentoring, coaching, and developing rising talent in newer technologies.

(ref:hirist.tech)

Posted 3 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Selected Intern's Day-to-day Responsibilities Include
- Conduct market research to identify selling possibilities and evaluate customer needs.
- Assist in developing and executing sales strategies to meet organizational goals.
- Generate leads through cold calling, networking, and social media.
- Participate in promotional campaigns, product launches, and marketing events.
- Collaborate with the marketing team to develop brand awareness strategies.
- Maintain relationships with clients by providing support, information, and guidance.
- Prepare and deliver appropriate presentations on products and services.
- Achieve sales targets and outcomes within the schedule.
- Maintain reports and records of sales and client interactions.

About Company: We are experts in modern data stacks and cloud-native architectures. Our engineers are well-versed in multiple ETL tools and programming languages required to build robust data pipelines. Our data architects and data modelers are experienced in building data warehouses, data marts, and industry-specific data models. Our BI experts have experience across a wide range of reporting and BI tools like Tableau, Power BI, Looker, Metabase, Domo, etc. Our data scientists have deep experience in building large-scale models and deploying and running them in production.

Posted 3 days ago

Apply

5.0 - 10.0 years

0 Lacs

Gurgaon, Haryana, India

On-site


Key Responsibilities & Skillsets:
- 5-10 years of relevant DE experience
- Azure Databricks – ability to create data transformation logic
- Strong programming skills in Python and experience with SQL; able to write complex SQL, Transact-SQL, and stored procedures
- ETL tools – Azure Data Factory, Databricks
- Experience with data modelling
- DWH – Snowflake
- Excellent communication skills and stakeholder management
- Ability to work independently in an IC role

Good to have
- Knowledge of the Snowflake platform
- Knowledge of Power BI
- Familiarity with CI/CD practices and version control (e.g., Git)
- Familiarity with Azure DevOps

Posted 3 days ago

Apply

6.0 years

0 Lacs

Gurgaon, Haryana, India

On-site


About The Role
Grade Level (for internal use): 10

Position Summary
Our proprietary software-as-a-service helps automotive dealerships and sales teams better understand and predict exactly which customers are ready to buy, the reasons why, and the key offers and incentives most likely to close the sale. Its micro-marketing engine then delivers the right message at the right time to those customers, ensuring higher conversion rates and a stronger ROI.

What You'll Do
You will be part of our Data Platform & Product Insights data engineering team. As part of this agile team, you will work in our cloud-native environment to:
- Build and support data ingestion and processing pipelines in the cloud. This entails extraction, load, and transformation of 'big data' from a wide variety of sources, both batch and streaming, using the latest data frameworks and technologies.
- Partner with the product team to assemble large, complex data sets that meet functional and non-functional business requirements, and ensure the build-out of data dictionaries/data catalogues and detailed documentation and knowledge around these data assets, metrics, and KPIs.
- Warehouse this data; build data marts, data aggregations, metrics, KPIs, and business logic that lead to actionable insights into our product efficacy, marketing platform, customer behaviour, retention, etc.
- Build real-time monitoring dashboards and alerting systems.
- Coach and mentor other team members.

Who You Are
- 6+ years of experience in big data and data engineering.
- Strong knowledge of advanced SQL, data warehousing concepts, and data mart design.
- Strong programming skills in SQL, Python/PySpark, etc.
- Experience in design and development of data pipelines and ETL/ELT processes, on-premises and in the cloud.
- Experience with one of the cloud providers – GCP, Azure, AWS.
- Experience with relational SQL and NoSQL databases, including Postgres and MongoDB.
- Experience with workflow management tools: Airflow, AWS Data Pipeline, Google Cloud Composer, etc.
- Experience with distributed version control environments such as Git, Azure DevOps.
- Experience building Docker images, fetching/promoting and deploying to production, and integrating Docker container orchestration using Kubernetes by creating pods, ConfigMaps, and deployments using Terraform.
- Able to convert business queries into technical documentation.
- Strong problem-solving and communication skills.
- Bachelor's or an advanced degree in Computer Science or a related engineering discipline.

Good to Have
- Exposure to Business Intelligence (BI) tools like Tableau, Dundas, Power BI, etc.
- Agile software development methodologies.
- Working in multi-functional, multi-location teams.

Grade: 10
Location: Gurugram
Hybrid Model: twice a week work from office
Shift Time: 12 pm to 9 pm IST

What You'll Love About Us – Do ask us about these!
- Total Rewards. Monetary, beneficial and developmental rewards!
- Work Life Balance. You can't do a good job if your job is all you do!
- Prepare for the Future. Academy – we are all learners; we are all teachers!
- Employee Assistance Program. Confidential and Professional Counselling and Consulting.
- Diversity & Inclusion. HeForShe!
- Internal Mobility. Grow with us!

About AutomotiveMastermind
Who we are: Founded in 2012, automotiveMastermind is a leading provider of predictive analytics and marketing automation solutions for the automotive industry and believes that technology can transform data, revealing key customer insights to accurately predict automotive sales. Through its proprietary automated sales and marketing platform, Mastermind, the company empowers dealers to close more deals by predicting future buyers and consistently marketing to them. automotiveMastermind is headquartered in New York City. For more information, visit automotivemastermind.com.

At automotiveMastermind, we thrive on high energy at high speed. We're an organization in hyper-growth mode and have a fast-paced culture to match. Our highly engaged teams feel passionately about both our product and our people. This passion is what continues to motivate and challenge our teams to be best-in-class. Our cultural values of "Drive" and "Help" have been at the core of what we do, and how we have built our culture through the years. This cultural framework inspires a passion for success while collaborating to win.

What We Do
Through our proprietary automated sales and marketing platform, Mastermind, we empower dealers to close more deals by predicting future buyers and consistently marketing to them. In short, we help automotive dealerships generate success in their loyalty, service, and conquest portfolios through a combination of turnkey predictive analytics, proactive marketing, and dedicated consultative services.

What's In It For You?

Our Purpose
Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology - the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People
We're more than 35,000 strong worldwide - so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it, we are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values
Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits
We take care of you, so you can take care of business. We care about our people. That's why we provide everything you - and your career - need to thrive at S&P Global.

Our Benefits Include
- Health & Wellness: Health care coverage designed for the mind and body.
- Flexible Downtime: Generous time off helps keep you energized for your time on.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards - small perks can make a big difference.

For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Global Hiring And Opportunity At S&P Global
At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Equal Opportunity Employer
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning)

Posted 3 days ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

Remote


Job Description
Are you ready to make an impact at DTCC? Do you want to work on innovative projects, collaborate with a dynamic and supportive team, and receive investment in your professional development? At DTCC, we are at the forefront of innovation in the financial markets. We're committed to helping our employees grow and succeed. We believe that you have the skills and drive to make a real impact. We foster a thriving internal community and are committed to creating a workplace that looks like the world that we serve.

Pay And Benefits
- Competitive compensation, including base pay and annual incentive
- Comprehensive health and life insurance and well-being benefits
- Pension
- Paid Time Off and Personal/Family Care, and other leaves of absence when needed to support your physical, financial, and emotional well-being

DTCC offers a flexible/hybrid model of 3 days onsite and 2 days remote (onsite Tuesdays, Wednesdays and a third day unique to each team or employee).

The Impact You Will Have In This Role
The Development family is responsible for crafting, designing, deploying, and supporting applications, programs, and software solutions. This may include research, new development, prototyping, modification, reuse, re-engineering, maintenance, or any other activities related to software products used internally or externally on product platforms supported by the firm. The software development process requires in-depth domain expertise in existing and emerging development methodologies, tools, and programming languages. Software Developers work closely with business partners and/or external clients in defining requirements and implementing solutions. The Application Support Engineering role specializes in maintaining and providing technical support for all applications that are beyond the development stage and are running in the daily operations of the firm. It works closely with development teams, infrastructure partners, and internal/external clients to raise and resolve technical support incidents.

Your Primary Responsibilities
- Verify analysis performed by team members and implement changes required to prevent recurrence of incidents
- Resolve critical application alerts in a timely fashion, including production defects; provide business impact and analysis to teams; handle minor enhancements as needed
- Review and update knowledge articles and runbooks with application development teams to confirm information is up to date
- Collaborate with internal teams to provide answers to application issues and escalate as needed
- Validate and submit responses to requests for information from ongoing audits

NOTE: The Primary Responsibilities of this role are not limited to the details above.

Qualifications
- Minimum 8+ years of related experience
- Bachelor's degree (preferred) or equivalent experience

Talents Needed For Success
- Minimum of 8+ years of related experience in Application Support
- Solid experience in Application Support
- Hands-on experience in Unix, Linux, Windows, SQL/PLSQL
- Familiarity working with relational databases (DB2, Oracle, Snowflake)
- Monitoring and data tools experience (Splunk, Dynatrace, ThousandEyes, Grafana, Selenium, HiPam IBM Zolda)
- Cloud technologies (AWS services (S3, EC2, Lambda, SQS, IAM roles), Azure, OpenShift, RDS Aurora, Postgres)
- Scheduling tool experience (CA AutoSys, Control-M)
- Scripting languages (Bash, Python, Ruby, Shell, Perl, JavaScript)
- Hands-on experience with ETL tools (Informatica DataHub/IDQ, Talend)
- Strong problem-solving skills with the ability to think creatively

We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

Posted 3 days ago

Apply

10.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Job Description
Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Senior Consultant Specialist.

In this role, you will:
- Provide expert technical guidance and solutions to the POD for complex business problems.
- Design, develop, and implement technical solutions, ensuring they meet business requirements and are scalable and maintainable.
- Troubleshoot and resolve escalated technical issues promptly.
- Provide risk assessment for new functionality and enhancements.
- As an ITSO (IT Service Owner), complete BOW tasks within the timelines and ensure that your application services are vulnerability, ICE, resiliency, and contingency testing compliant.
- As an ITSO, ensure that applications have an effective escalation and support framework in place for all IT production incidents, one that meets the agreed operational and service level agreements of the business.
- Be accountable for leading the POD.
- Bring sound knowledge of corporate finance, exhibiting knowledge of interest rate risk in the banking book.
- Bring experience with Agile delivery methodologies (JIRA, Scrum, FDD, SAFe).
- Bring experience with DevOps tools (Jenkins, Ansible, Git).

Requirements
To be successful in this role, you should meet the following requirements:
- Graduation in technology (B.E., B.Tech and above) with 10+ years of IT experience.
- Strong knowledge of the Pentaho ETL tool, with MapReduce build knowledge.
- Ability to write complex SQL queries.
- Good knowledge of shell scripting, Python, and Java.
- Exposure to Hadoop and big data is a plus.
- Infrastructure as Code and CI/CD: Git, Ansible, Jenkins.
- Experience working in an Agile/DevOps environment.
- Monitoring, alerting, incident tracking, reporting, etc.
- Good understanding of Google Cloud; exposure to the latest tools/technologies is an add-on.

HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.

Issued by – HSBC Software Development India

Posted 3 days ago

Apply

3.0 years

0 Lacs

Bengaluru East, Karnataka, India

On-site


Overview
Leading AI-driven global supply chain solutions software product company and one of Glassdoor's "Best Places to Work." Seeking an astute individual with a strong technical foundation and the ability to be hands-on in developing and building automation to improve efficiency, productivity, and customer experience, plus deep knowledge of industry best practices and the ability to implement them while working with the larger cloud, support, and product teams.

Scope
We are seeking a highly skilled AI/Prompt Engineer to design, implement, and maintain artificial intelligence (AI) and machine learning (ML) solutions for our organization. The ideal candidate will have a deep understanding of AI and ML technologies, as well as experience with data analysis, software development, and cloud computing.

Primary Responsibilities
- Design and implement AI/conversational AI and ML solutions to solve business problems and improve customer experience and operational efficiency.
- Develop and maintain machine learning models using tools such as TensorFlow, Keras, and PyTorch (see the sketch after this posting).
- Collaborate with cross-functional teams to identify opportunities for AI and ML solutions and develop prototypes and proofs of concept.
- Develop and maintain data pipelines and ETL processes to support AI and ML workflows.
- Monitor and optimize model performance, accuracy, and scalability.
- Stay up to date with emerging AI and ML technologies and evaluate their potential impact on our organization.
- Develop and maintain chatbots and voice assistants using tools such as Dialogflow, Amazon Lex, and Microsoft Bot Framework.
- Develop and maintain integrations with third-party systems and APIs to support conversational AI workflows.
- Develop and maintain technical documentation, including architecture diagrams, design documents, and standard operating procedures.
- Provide technical guidance and mentorship to other members of the data engineering and software development teams.

What We Are Looking For
- Bachelor's degree in Computer Science, Information Technology, or a related field, with 3+ years of experience in conversational AI engineering, design, and implementation.
- Strong understanding of NLP technologies, including intent recognition, entity extraction, and sentiment analysis.
- Experience with software development, including proficiency in Python and familiarity with software development best practices and tools (Git, Agile methodologies, etc.).
- Familiarity with cloud computing platforms (AWS, Azure, Google Cloud) and related services (S3, EC2, Lambda, etc.).
- Experience with machine learning technologies and frameworks (TensorFlow, Keras, etc.).
- Experience with big data technologies (Hadoop, Spark, Kafka, etc.).
- Experience with containerization (Docker, Kubernetes).
- Experience with data visualization tools (Tableau, Power BI, etc.).
- Experience with reinforcement learning and/or generative models.
- Strong communication and collaboration skills.
- Strong problem-solving and analytical skills.
- Strong attention to detail and ability to prioritize tasks effectively.
- Ability to work independently and as part of a team in an agile, fast-paced development environment.

Our Values
If you want to know the heart of a company, take a look at their values. Ours unite us. They are what drive our success – and the success of our customers. Does your heart beat like ours? Find out here: Core Values

Diversity, Inclusion, Value & Equity (DIVE) is our strategy for fostering an inclusive environment we can be proud of. Check out Blue Yonder's inaugural Diversity Report which outlines our commitment to change, and our video celebrating the differences in all of us in the words of some of our associates from around the world.

All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status.
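Purely as illustration of the TensorFlow/Keras model work listed above, here is a tiny binary classifier trained on synthetic data; the architecture and data are placeholders, not this employer's models.

```python
# Hypothetical Keras sketch: small binary classifier on synthetic data.
import numpy as np
from tensorflow import keras

# Synthetic features and labels (100 samples, 4 features).
X = np.random.rand(100, 4).astype("float32")
y = (X.sum(axis=1) > 2.0).astype("float32")

model = keras.Sequential([
    keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Brief training run; verbose=0 keeps the output quiet.
model.fit(X, y, epochs=3, batch_size=16, verbose=0)
print(model.predict(X[:3]))
```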

Posted 3 days ago

Apply

7.0 - 12.0 years

0 Lacs

Kochi

Work from Office


Greetings from the TCS Recruitment Team!

Role: BI Architect
Years of experience: 7 to 18 years
Walk-in Drive Location: Kochi
Walk-in Location Details: Tata Consultancy Services, TCS Centre SEZ Unit, Infopark Kochi Phase 1, Infopark Kochi P.O., Kakkanad, Kochi - 682042, Kerala, India
Drive Time: 9 am to 1:00 pm
Date: 21-Jun-25

Must Have
- 7-15 years of hands-on experience in Business Intelligence tools (Tableau / Power BI / Qlik Sense).
- Strong data modelling and dashboard development skills.
- Proficiency in SQL and experience working with large datasets.
- Ability to communicate effectively with both technical and non-technical stakeholders.
- Experience with data warehousing and ETL processes is a plus.

Preferred Qualifications
- Certification in Tableau, Power BI, ThoughtSpot, or Qlik Sense.

Posted 3 days ago

Apply

3.0 years

0 Lacs

Gurgaon, Haryana, India

On-site


About Gartner IT
Join a world-class team of skilled engineers who build creative digital solutions to support our colleagues and clients. We make a broad organizational impact by delivering cutting-edge technology solutions that power Gartner. Gartner IT values its culture of nonstop innovation, an outcome-driven approach to success, and the notion that great ideas can come from anyone on the team.

About The Role
Senior Data Engineer for production support who will provide daily end-to-end support for data loads and manage production issues.

What Will You Do
- Monitor and support various data loads for our Enterprise Data Warehouse.
- Support business users who are accessing Power BI dashboards and data warehouse tables.
- Handle incidents and service requests within defined SLAs.
- Work with the team on managing Azure resources, including but not limited to Databricks, Azure Data Factory pipelines, ADLS, etc.
- Build new ETL/ELT pipelines using Azure data products like Azure Data Factory and Databricks.
- Help build best practices and processes.
- Coordinate with upstream/downstream teams to resolve data issues.
- Work with the QA team and Dev team to ensure appropriate automated regressions are added to detect such issues in the future.
- Work with the Dev team to improve automated error handling so manual interventions can be reduced.
- Analyze processes and patterns so other similar unreported issues can be resolved in one go.

What You Will Need
A strong IT professional with 3-4 years of experience in data engineering. The candidate should have strong analytical and problem-solving skills.

Must Have
- 3-4 years of experience in data warehouse design and development, and ETL using Azure Data Factory (ADF).
- Experience in writing complex T-SQL procedures on MPP platforms - Synapse, Snowflake, etc.
- Experience in analyzing complex code to troubleshoot failures and, where applicable, recommend best practices around error handling, performance tuning, etc.
- Ability to work independently as well as part of a team, and experience working with fast-paced operations/dev teams.
- Good understanding of business processes and analyzing underlying data.
- Understanding of dimensional and relational modelling.
- Detail-oriented, with the ability to plan, prioritize, and meet deadlines in a fast-paced environment.
- Knowledge of Azure cloud technologies.
- Exceptional problem-solving skills.

Nice To Have
- Experience crafting, building, and deploying applications in a DevOps environment utilizing CI/CD tools.
- Relevant certifications.
- Basic knowledge of Power BI.

Who Are You
- Bachelor's degree or foreign equivalent degree in Computer Science or a related field required.
- Excellent communication skills.
- Able to work independently or within a team proactively in a fast-paced Agile-Scrum environment.
- Owns success – takes responsibility for the successful delivery of the solutions.
- Strong desire to improve upon their skills in tools and technologies.

Don't meet every single requirement? We encourage you to apply anyway. You might just be the right candidate for this, or other roles.

Who are we?
At Gartner, Inc. (NYSE:IT), we guide the leaders who shape the world. Our mission relies on expert analysis and bold ideas to deliver actionable, objective insight, helping enterprise leaders and their teams succeed with their mission-critical priorities. Since our founding in 1979, we've grown to more than 21,000 associates globally who support ~14,000 client enterprises in ~90 countries and territories.

We do important, interesting and substantive work that matters. That's why we hire associates with the intellectual curiosity, energy and drive to want to make a difference. The bar is unapologetically high. So is the impact you can have here.

What makes Gartner a great place to work?
Our sustained success creates limitless opportunities for you to grow professionally and flourish personally. We have a vast, virtually untapped market potential ahead of us, providing you with an exciting trajectory long into the future. How far you go is driven by your passion and performance. We hire remarkable people who collaborate and win as a team. Together, our singular, unifying goal is to deliver results for our clients. Our teams are inclusive and composed of individuals from different geographies, cultures, religions, ethnicities, races, genders, sexual orientations, abilities and generations. We invest in great leaders who bring out the best in you and the company, enabling us to multiply our impact and results. This is why, year after year, we are recognized worldwide as a great place to work.

What do we offer?
Gartner offers world-class benefits, highly competitive compensation and disproportionate rewards for top performers. In our hybrid work environment, we provide the flexibility and support for you to thrive - working virtually when it's productive to do so and getting together with colleagues in a vibrant community that is purposeful, engaging and inspiring. Ready to grow your career with Gartner? Join us.

The policy of Gartner is to provide equal employment opportunities to all applicants and employees without regard to race, color, creed, religion, sex, sexual orientation, gender identity, marital status, citizenship status, age, national origin, ancestry, disability, veteran status, or any other legally protected status and to seek to advance the principles of equal employment opportunity. Gartner is committed to being an Equal Opportunity Employer and offers opportunities to all job seekers, including job seekers with disabilities. If you are a qualified individual with a disability or a disabled veteran, you may request a reasonable accommodation if you are unable or limited in your ability to use or access the Company's career webpage as a result of your disability. You may request reasonable accommodations by calling Human Resources at +1 (203) 964-0096 or by sending an email to ApplicantAccommodations@gartner.com.

Job Requisition ID: 99740

By submitting your information and application, you confirm that you have read and agree to the country or regional recruitment notice linked below applicable to your place of residence. Gartner Applicant Privacy Link: https://jobs.gartner.com/applicant-privacy-policy

For efficient navigation through the application, please only use the back button within the application, not the back arrow within your browser.

Posted 3 days ago

Apply

10.0 years

0 Lacs

Itanagar, Arunachal Pradesh, India

Remote


This job is with Hitachi Digital Services, an inclusive employer and a member of myGwork – the largest global platform for the LGBTQ+ business community. Please do not contact the recruiter directly.

Our Company
We're Hitachi Digital, a company at the forefront of digital transformation and the fastest growing division of Hitachi Group. We're crucial to the company's strategy and ambition to become a premier global player in the massive and fast-moving digital transformation market. Our group companies, including GlobalLogic, Hitachi Digital Services, Hitachi Vantara and more, offer comprehensive services that span the entire digital lifecycle, from initial idea to full-scale operation and the infrastructure to run it on. Hitachi Digital represents One Hitachi, integrating domain knowledge and digital capabilities, and harnessing the power of the entire portfolio of services, technologies, and partnerships, to accelerate synergy creation and make real-world impact for our customers and society as a whole.

Imagine the sheer breadth of talent it takes to unleash a digital future. We don't expect you to 'fit' every requirement - your life experience, character, perspective, and passion for achieving great things in the world are equally as important to us.

Preferred job location: Bengaluru, Hyderabad, Pune, New Delhi or Remote

The team
Hitachi Digital is a leader in digital transformation, leveraging advanced AI and data technologies to drive innovation and efficiency across various operational companies (OpCos) and departments. We are seeking a highly experienced Lead Data Engineer to join our dynamic team and contribute to the development of robust data solutions and applications.

The role
  • Lead the design, development, and implementation of data engineering solutions with a focus on Google BigQuery.
  • Develop and optimize complex SQL queries and data pipelines in BigQuery.
  • Implement and integrate VectorAI and Agent Workspace for Google Gemini into data solutions.
  • Lead the development of high-performance data ingestion processes using modern ETL/ELT practices.
  • Collaborate with engineers to establish best practices for data system creation, ensuring data quality, integrity, and proper documentation.
  • Continuously improve reporting and analysis by automating processes and streamlining workflows.
  • Conduct research and stay updated on the latest advancements in data engineering and technologies.
  • Troubleshoot and resolve complex issues related to data systems and applications.
  • Document development processes, methodologies, and best practices.
  • Mentor junior developers and participate in code reviews, providing constructive feedback to team members.
  • Provide strategic direction and leadership in data engineering and technology adoption.

What You'll Bring
  • Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
  • 10+ years of experience in data technologies.
  • 5+ years of extensive experience in migrating data workloads to BigQuery on GCP.
  • Strong programming skills in languages such as Python, Java, or SQL.
  • Technical proficiency in BigQuery and other related tools on GCP.
  • GCP certifications in the data space.
  • Knowledge of cloud platforms, particularly Google Cloud Platform (GCP).
  • Experience with VectorAI and Agent Workspace for Google Gemini.
  • Excellent problem-solving skills and the ability to work independently and as part of a team.
  • Strong communication skills and the ability to convey complex technical concepts to non-technical stakeholders.
  • Proven leadership skills and experience in guiding development projects from conception to deployment.

Preferred Qualifications
  • Familiarity with data engineering tools and techniques.
  • Previous experience in a similar role within a tech-driven company.

About Us
We're a global, 1000-strong, diverse team of professional experts, promoting and delivering Social Innovation through our One Hitachi initiative (OT x IT x Product) and working on projects that have a real-world impact. We're curious, passionate and empowered, blending our 110-year legacy of innovation with our focus on shaping the future. Here you're not just another employee; you're part of a tradition of excellence and a community working towards creating a digital future.

Championing diversity, equity, and inclusion
Diversity, equity, and inclusion (DEI) are integral to our culture and identity. Diverse thinking, a commitment to allyship, and a culture of empowerment help us achieve powerful results. We want you to be you, with all the ideas, lived experience, and fresh perspective that brings. We support your uniqueness and encourage people from all backgrounds to apply and realize their full potential as part of our team.

How We Look After You
We help take care of your today and tomorrow with industry-leading benefits, support, and services that look after your holistic health and wellbeing. We're also champions of life balance and offer flexible arrangements that work for you (role and location dependent). We're always looking for new ways of working that bring out our best, which leads to unexpected ideas. So here, you'll experience a sense of belonging, and discover autonomy, freedom, and ownership as you work alongside talented people you enjoy sharing knowledge with.

We're proud to say we're an equal opportunity employer and welcome all applicants for employment without attention to race, colour, religion, sex, sexual orientation, gender identity, national origin, veteran, age, disability status or any other protected characteristic. Should you need reasonable accommodations during the recruitment process, please let us know so that we can do our best to set you up for success.
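
To make the BigQuery-centric work above concrete, here is a minimal parameterized-query sketch using the google-cloud-bigquery client library. The project, dataset, and table names are invented for illustration, and the Gemini-related tooling named in the posting is deliberately not shown, since its APIs are not described here.

    # Minimal BigQuery sketch; project/dataset/table names are hypothetical.
    # Requires: pip install google-cloud-bigquery, plus GCP credentials configured.
    from google.cloud import bigquery

    client = bigquery.Client(project="my-analytics-project")  # hypothetical project

    query = """
        SELECT region, SUM(amount) AS total
        FROM `my-analytics-project.sales.orders`
        WHERE order_date >= @start_date
        GROUP BY region
        ORDER BY total DESC
    """
    job_config = bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ScalarQueryParameter("start_date", "DATE", "2024-01-01"),
        ]
    )

    for row in client.query(query, job_config=job_config).result():
        print(row["region"], row["total"])

Query parameters, rather than string interpolation, are the usual way to keep such pipelines safe and cache-friendly.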

Posted 4 days ago

Apply

20.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Job Description

Job Title: Senior QA Engineer
Location: Bangalore
Position Type: Full-time
Position Level: 3

Who We Are
Xactly is a leader in Sales Performance Management Solutions and a part of Vista Equity Partners portfolio companies since 2017. The Xactly Intelligent Revenue Platform helps businesses improve go-to-market outcomes through increased collaboration, greater efficiencies, and connecting data from all critical functions of the revenue lifecycle on a single platform. Born in the cloud almost 20 years ago, Xactly provides customers with extensive experience in solving the most challenging problems customers of all sizes face, backed by almost 20 years of proprietary data and award-winning AI. Named among the best workplaces in the U.S. by Great Place to Work six times, honored on FORTUNE Magazine’s inaugural list of the 100 Best Workplaces for Millennials, and chosen as the “Market Leader in Incentive Compensation” by CRM magazine. We’re building a culture of success and are looking for motivated professionals to join us!

The Team
Xactly’s QE team is a rapidly growing, well-diversified team with a strong focus on cutting-edge test automation tools and technologies. We are a strong team of 35+ members spread across our engineering centers in San Jose, Denver and Bangalore (India). All engineers in the QE team are encouraged to operate independently and with the highest levels of accountability. Each QE engineer works with a tight-knit team of back-end developers, front-end developers, and Product Managers in scrum teams, with a laser focus on producing high-quality code and products for our customers. All QE engineers are trained well on all aspects (product training, automation tools and infrastructure, CI/CD, etc.), ensuring their success in scrum teams. Xactly QE team members work with cutting-edge tools and technologies such as Selenium WebDriver, Java, TestNG, Maven, REST Assured, Jenkins, Docker, Kubernetes, Harness, Snowflake, Terraform and JMeter, to name a few.

The Opportunity
As a Senior QA Engineer at Xactly Corporation, you will maintain and continuously improve the QE function and facilitate implementation of QE best practices within the organization. Establish partnerships with internal stakeholders to understand customer requirements and ensure quality of delivery. Own, drive, measure and optimize the overall quality of the development and delivery process. Drive quality automation and take the customer perspective for end-to-end quality.

At Xactly, we believe everyone has a unique story to tell, and these small differences between us have a big impact. When bright, diverse minds come together, we’re challenged to think in different ways, generate creative ideas, be more innovative, and take on new perspectives. Our customers come from different cultures and walks of life all around the world, and we believe our teams should reflect that to build strong and lasting relationships.

The Skill Sets
  • 5-8 years of experience with strong automation testing skills.
  • Strong testing skills, with the ability to develop test strategy, design test plans, and write test cases effectively and independently.
  • Strong experience in GUI automation (such as Selenium) and API automation (such as JUnit) using off-the-shelf tools.
  • Experience in testing enterprise J2EE business applications.
  • Strong SQL query knowledge in PostgreSQL or Oracle databases.
  • Experience with the Mabl testing tool is a plus.
  • Strong experience as a QA engineer in Scrum methodology, with automated tests required as part of the definition of done.
  • Programming experience in a language such as Java.
  • Experience in product-based companies.

Nice-to-Have Skills
  • Working on a team in a SAFe Portfolio and ART.
  • Exposure to ETL/analytics modules.
  • Exposure to build and deployment tools such as Jenkins, Harness and Maven.

Within One Month, You’ll
  • Attend New Hire Training.
  • Learn the Dev and QE processes.
  • Participate in the scrum development process.
  • Get to know your team.

Within Three Months, You’ll
  • Learn Xactly’s SaaS technology stack.
  • Gain complete domain and Xactly product knowledge.
  • Take ownership of a module/project.
  • Perform code reviews.

Within Six Months, You’ll
  • Ensure best QE practices are being used.
  • Work on multiple functionalities and take ownership of the respective module’s automation.
  • Perform RCAs on production escapes and ensure corrective actions are implemented.

Within Twelve Months, You’ll
  • Help grow other engineers technically by pairing and developing other learning opportunities.
  • Train new joiners and peers in automation.
  • Continuously work on QE process improvements to maximize team effectiveness and efficiency.

Benefits & Perks
  • Paid Time Off (PTO)
  • Comprehensive Health and Accidental Insurance Coverage
  • Tuition Reimbursement
  • XactlyFit Gym/Fitness Program Reimbursement
  • Free snacks onsite (if you work in the office)
  • Generous Employee Referral Program
  • Free Parking and Subsidized Bus Pass (a go-green initiative!)
  • Wellness program

OUR VISION: Unleashing human potential to maximize company performance. We address a critical business need: to incentivize employees and align their behaviors with company goals.

OUR CORE VALUES: Customer Focus | Accountability | Respect | Excellence (CARE) are the keys to our success, and each day we’re committed to upholding them by delivering the best we can to our customers.

Xactly is proud to be an Equal Opportunity Employer. Xactly provides equal employment opportunities to all employees and applicants without regard to race, color, religion, sex, age, national origin, disability, veteran status, pregnancy, sexual orientation, or any other characteristic protected by law. This means we believe in celebrating diversity and creating an inclusive workplace environment, where everyone feels valued, heard, and has a sense of belonging. By doing this, everyone in the Xactly family has the power to make a difference and unleash their full potential.

We do not accept resumes from agencies, headhunters, or other suppliers who have not signed a formal agreement with us.
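
The stack named above is Java-based (Selenium WebDriver, TestNG, REST Assured). Purely as an illustration of the API-automation idea, and keeping this page's examples in a single language, here is a minimal pytest-style contract check against a hypothetical endpoint; the URL, resource, and response fields are all invented.

    # Minimal API-automation sketch in pytest style (hypothetical endpoint).
    # The posting's real stack is Java/REST Assured; this is a Python analogue.
    # Requires: pip install requests pytest
    import requests

    BASE_URL = "https://api.example.com"  # hypothetical service under test

    def test_get_plan_returns_expected_contract():
        resp = requests.get(f"{BASE_URL}/v1/plans/42", timeout=10)
        assert resp.status_code == 200
        body = resp.json()
        # Contract checks: required fields, types, and allowed values.
        assert body["id"] == 42
        assert isinstance(body["name"], str)
        assert body["status"] in {"ACTIVE", "DRAFT", "RETIRED"}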

Posted 4 days ago

Apply

4.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site


Role Summary
Pfizer’s purpose is to deliver breakthroughs that change patients’ lives. Research and Development is at the heart of fulfilling Pfizer’s purpose as we work to translate advanced science and technologies into the therapies and vaccines that matter most. Whether you are in the discovery sciences, ensuring drug safety and efficacy or supporting clinical trials, you will apply cutting-edge design and process development capabilities to accelerate and bring best-in-class medicines to patients around the world.

Pfizer is seeking a highly skilled and motivated AI Engineer to join our advanced technology team. The successful candidate will be responsible for developing, implementing, and optimizing artificial intelligence models and algorithms to drive innovation and efficiency in our Data Analytics and Supply Chain solutions. This role demands a collaborative mindset, a passion for cutting-edge technology, and a commitment to improving patient outcomes.

Role Responsibilities
  • Lead data modeling and engineering efforts within advanced data platforms teams to achieve digital outcomes. Provide guidance and lead/co-lead moderately complex projects.
  • Oversee the development and execution of test plans, creation of test scripts, and thorough data validation processes.
  • Lead the architecture, design, and implementation of Cloud Data Lake, Data Warehouse, Data Marts, and Data APIs.
  • Lead the development of complex data products that benefit PGS and ensure reusability across the enterprise.
  • Collaborate effectively with contractors to deliver technical enhancements.
  • Oversee the development of automated systems for building, testing, monitoring, and deploying ETL data pipelines within a continuous integration environment.
  • Collaborate with backend engineering teams to analyze data, enhancing its quality and consistency.
  • Conduct root cause analysis and address production data issues.
  • Lead the design, development, and implementation of AI models and algorithms that support sophisticated data analytics and supply chain initiatives.
  • Stay abreast of the latest advancements in AI and machine learning technologies and apply them to Pfizer’s projects.
  • Provide technical expertise and guidance to team members and stakeholders on AI-related initiatives.
  • Document and present findings, methodologies, and project outcomes to various stakeholders.
  • Integrate and collaborate with different technical teams across Digital to drive overall implementation and delivery.
  • Work with large and complex datasets, including data cleaning, preprocessing, and feature selection.

Basic Qualifications
  • A bachelor’s or master’s degree in Computer Science, Artificial Intelligence, Machine Learning, or a related discipline.
  • Over 4 years of experience as a Data Engineer, Data Architect, or in Data Warehousing, Data Modeling, and Data Transformations.
  • Over 2 years of experience in AI, machine learning, and large language model (LLM) development and deployment.
  • A proven track record of successfully implementing AI solutions in a healthcare or pharmaceutical setting is preferred.
  • Strong understanding of data structures, algorithms, and software design principles.
  • Programming languages: proficiency in Python and SQL, and familiarity with Java or Scala.
  • AI and automation: knowledge of AI-driven tools for data pipeline automation, such as Apache Airflow or Prefect.
  • Ability to use GenAI or agents to augment data engineering practices.

Preferred Qualifications
  • Data warehousing: experience with data warehousing solutions such as Amazon Redshift, Google BigQuery, or Snowflake.
  • ETL tools: knowledge of ETL tools like Apache NiFi, Talend, or Informatica.
  • Big data technologies: familiarity with Hadoop, Spark, and Kafka for big data processing.
  • Cloud platforms: hands-on experience with cloud platforms such as AWS, Azure, or Google Cloud Platform (GCP).
  • Containerization: understanding of Docker and Kubernetes for containerization and orchestration.
  • Data integration: skills in integrating data from various sources, including APIs, databases, and external files.
  • Data modeling: understanding of data modeling and database design principles, including graph technologies like Neo4j or Amazon Neptune.
  • Structured data: proficiency in handling structured data from relational databases, data warehouses, and spreadsheets.
  • Unstructured data: experience with unstructured data sources such as text, images, and log files, and tools like Apache Solr or Elasticsearch.
  • Data excellence: familiarity with data excellence concepts, including data governance, data quality management, and data stewardship.

Non-standard Work Schedule, Travel Or Environment Requirements
Occasional travel required.

Work Location Assignment: Hybrid

The annual base salary for this position ranges from $96,300.00 to $160,500.00. In addition, this position is eligible for participation in Pfizer’s Global Performance Plan with a bonus target of 12.5% of the base salary, and eligibility to participate in our share-based long-term incentive program. We offer comprehensive and generous benefits and programs to help our colleagues lead healthy lives and to support each of life’s moments. Benefits offered include a 401(k) plan with Pfizer Matching Contributions and an additional Pfizer Retirement Savings Contribution; paid vacation, holiday and personal days; paid caregiver/parental and medical leave; and health benefits that include medical, prescription drug, dental and vision coverage. Learn more at Pfizer Candidate Site – U.S. Benefits (uscandidates.mypfizerbenefits.com). Pfizer compensation structures and benefit packages are aligned based on the location of hire. The United States salary range provided does not apply to Tampa, FL or any location outside of the United States. Relocation assistance may be available based on business needs and/or eligibility.

Sunshine Act
Pfizer reports payments and other transfers of value to health care providers as required by federal and state transparency laws and implementing regulations. These laws and regulations require Pfizer to provide government agencies with information such as a health care provider’s name, address and the type of payments or other value received, generally for public disclosure. Subject to further legal review and statutory or regulatory clarification, which Pfizer intends to pursue, reimbursement of recruiting expenses for licensed physicians may constitute a reportable transfer of value under the federal transparency law commonly known as the Sunshine Act. Therefore, if you are a licensed physician who incurs recruiting expenses as a result of interviewing with Pfizer that we pay or reimburse, your name, address and the amount of payments made currently will be reported to the government. If you have questions regarding this matter, please do not hesitate to contact your Talent Acquisition representative.
EEO & Employment Eligibility
Pfizer is committed to equal opportunity in the terms and conditions of employment for all employees and job applicants without regard to race, color, religion, sex, sexual orientation, age, gender identity or gender expression, national origin, disability or veteran status. Pfizer also complies with all applicable national, state and local laws governing nondiscrimination in employment as well as work authorization and employment eligibility verification requirements of the Immigration and Nationality Act and IRCA. Pfizer is an E-Verify employer. This position requires permanent work authorization in the United States.

Information & Business Tech
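
The qualifications above name Apache Airflow as one pipeline-automation option. Below is a minimal DAG sketch showing the shape of such a pipeline; the DAG name, schedule, and task bodies are illustrative placeholders, not an actual Pfizer workflow.

    # Minimal Airflow 2.x DAG sketch; names and task bodies are placeholders.
    # Requires: pip install apache-airflow
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        print("pull source data")        # placeholder extract step

    def transform():
        print("clean and reshape data")  # placeholder transform step

    def load():
        print("write to the warehouse")  # placeholder load step

    with DAG(
        dag_id="daily_supply_chain_etl",  # hypothetical name
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)
        load_task = PythonOperator(task_id="load", python_callable=load)
        extract_task >> transform_task >> load_task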

Posted 4 days ago

Apply

6.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

Job Title: Senior Java Backend Developer
Experience: 6+ Years
Location: Bangalore (On-Site)
Employment Type: Full-Time

Job Summary:
We are looking for an experienced Senior Java Backend Developer with strong expertise in backend development using Java, data warehousing with Snowflake, and data visualization using Looker. The ideal candidate should have a deep understanding of microservices architecture, RESTful APIs, and cloud infrastructure, and be adept at working in Agile teams.

Key Responsibilities:
  • Design, develop, and maintain scalable and secure Java-based backend services and RESTful APIs.
  • Implement and optimize data models, ETL pipelines, and queries using Snowflake.
  • Develop interactive dashboards and reports using Looker for business insights.
  • Manage dependencies and builds using Maven.
  • Integrate and work with JIRA APIs to support project tracking and automation.
  • Write robust unit (JUnit) and integration tests to ensure code quality.
  • Utilize CI/CD pipelines and deploy applications in AWS cloud environments.
  • Implement best practices for performance optimization, security, and code maintainability.
  • Collaborate with cross-functional teams to gather requirements and deliver solutions.
  • Design and manage databases and messaging systems such as Postgres, MongoDB, MariaDB, RabbitMQ, or other NoSQL servers.
  • Participate in Agile/Scrum ceremonies including sprint planning, reviews, and retrospectives.
  • Stay current with industry trends and emerging backend technologies.

Required Skills:
  • 6+ years of experience in Java backend development.
  • Proficiency in Snowflake: data modeling, performance tuning, pipelines.
  • Strong hands-on experience with Looker for reporting and visualization.
  • Expertise in Maven, JUnit, and REST API development.
  • Experience working with JIRA APIs.
  • Familiarity with one or more of the following: MongoDB, MariaDB, RabbitMQ, Postgres, or other NoSQL/SQL databases.
  • Hands-on experience with AWS services and CI/CD tools.
  • Understanding of microservices architecture, design patterns, and best practices.

Posted 4 days ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Role Description
  • Work closely with business stakeholders to understand their needs, objectives, and challenges.
  • Elicit, document, and analyze business requirements, processes, and workflows.
  • Translate business requirements into clear and concise functional specifications for technical teams.
  • Collaborate with technology teams to design solutions that meet business needs.
  • Propose innovative and practical solutions to address business challenges.
  • Ensure that proposed solutions align with the organization's strategic goals and technological capabilities.
  • Identify areas for process optimization and efficiency enhancement.
  • Recommend process improvements and assist in their implementation.
  • Must have very good knowledge of the US healthcare domain and SQL.
  • Good to have: AWS and Snowflake technologies.
  • Hands-on experience with complex SQL queries (Snowflake).
  • Knowledge of database management systems, both relational and non-relational.
  • Familiarity with data integration and ETL tools.

Skills: Business Analysis, Business System Analysis, SQL, Snowflake, SDLC, US Healthcare domain, Strong communication & documentation
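
Since the role asks for hands-on complex Snowflake SQL, the sketch below runs a window-function query through the Snowflake Python connector. The account, credentials, and claims table are hypothetical; only the query pattern is the point.

    # Window-function query via the Snowflake connector; all names hypothetical.
    # Requires: pip install snowflake-connector-python
    import snowflake.connector

    conn = snowflake.connector.connect(
        account="myorg-myaccount",  # hypothetical account
        user="analyst",
        password="***",
        warehouse="ANALYTICS_WH",
        database="HEALTHCARE",
        schema="CLAIMS",
    )
    try:
        cur = conn.cursor()
        # Latest claim per member, picked with a window function.
        cur.execute("""
            SELECT member_id, claim_id, claim_amount
            FROM (
                SELECT member_id, claim_id, claim_amount,
                       ROW_NUMBER() OVER (
                           PARTITION BY member_id ORDER BY service_date DESC
                       ) AS rn
                FROM claims
            )
            WHERE rn = 1
        """)
        for member_id, claim_id, amount in cur.fetchall():
            print(member_id, claim_id, amount)
    finally:
        conn.close()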

Posted 4 days ago

Apply

Exploring ETL Jobs in India

The ETL (Extract, Transform, Load) job market in India is thriving with numerous opportunities for job seekers. ETL professionals play a crucial role in managing and analyzing data effectively for organizations across various industries. If you are considering a career in ETL, this article will provide you with valuable insights into the job market in India.

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Pune
  4. Hyderabad
  5. Chennai

These cities are known for their thriving tech industries and often have a high demand for ETL professionals.

Average Salary Range

The average salary range for ETL professionals in India varies based on experience levels. Entry-level positions typically start at around ₹3-5 lakhs per annum, while experienced professionals can earn upwards of ₹10-15 lakhs per annum.

Career Path

In the ETL field, a typical career path may include roles such as:

  • Junior ETL Developer
  • ETL Developer
  • Senior ETL Developer
  • ETL Tech Lead
  • ETL Architect

As you gain experience and expertise, you can progress to higher-level roles within the ETL domain.

Related Skills

Alongside ETL, professionals in this field are often expected to have skills in:

  • SQL
  • Data Warehousing
  • Data Modeling
  • ETL Tools (e.g., Informatica, Talend)
  • Database Management Systems (e.g., Oracle, SQL Server)

Having a strong foundation in these related skills can enhance your capabilities as an ETL professional.
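
To make the extract-transform-load loop itself concrete, here is a self-contained sketch using only Python's standard library: it reads rows from an in-memory CSV, normalizes them, and loads them into SQLite. All table and field names are invented for the example.

    # Self-contained ETL sketch: CSV extract -> transform -> SQLite load.
    # Standard library only; all names are illustrative.
    import csv
    import io
    import sqlite3

    RAW = "order_id,amount,currency\n1,100.00,inr\n2,250.50,inr\n"

    def extract(text):
        # Extract: parse CSV rows into dicts.
        return list(csv.DictReader(io.StringIO(text)))

    def transform(rows):
        # Transform: cast types and normalize currency codes.
        return [
            (int(r["order_id"]), float(r["amount"]), r["currency"].upper())
            for r in rows
        ]

    def load(records, conn):
        # Load: create the target table and insert the records.
        conn.execute(
            "CREATE TABLE IF NOT EXISTS orders "
            "(order_id INTEGER, amount REAL, currency TEXT)"
        )
        conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", records)
        conn.commit()

    conn = sqlite3.connect(":memory:")
    load(transform(extract(RAW)), conn)
    print(conn.execute("SELECT * FROM orders").fetchall())
    # [(1, 100.0, 'INR'), (2, 250.5, 'INR')]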

Interview Questions

Here are 25 interview questions that you may encounter in ETL job interviews:

  • What is ETL and why is it important? (basic)
  • Explain the difference between ETL and ELT processes. (medium)
  • How do you handle incremental loads in ETL processes? (medium; see the first sketch after this list)
  • What is a surrogate key in the context of ETL? (basic)
  • Can you explain the concept of data profiling in ETL? (medium)
  • How do you handle data quality issues in ETL processes? (medium)
  • What are some common ETL tools you have worked with? (basic)
  • Explain the difference between a full load and an incremental load. (basic)
  • How do you optimize ETL processes for performance? (medium)
  • Can you describe a challenging ETL project you worked on and how you overcame obstacles? (advanced)
  • What is the significance of data cleansing in ETL? (basic)
  • How do you ensure data security and compliance in ETL processes? (medium)
  • Have you worked with real-time data integration in ETL? If so, how did you approach it? (advanced)
  • What are the key components of an ETL architecture? (basic)
  • How do you handle data transformation requirements in ETL processes? (medium)
  • What are some best practices for ETL development? (medium)
  • Can you explain the concept of change data capture in ETL? (medium)
  • How do you troubleshoot ETL job failures? (medium)
  • What role does metadata play in ETL processes? (basic)
  • How do you handle complex transformations in ETL processes? (medium)
  • What is the importance of data lineage in ETL? (basic)
  • Have you worked with parallel processing in ETL? If so, explain your experience. (advanced)
  • How do you ensure data consistency across different ETL jobs? (medium)
  • Can you explain the concept of slowly changing dimensions in ETL? (medium; see the second sketch after this list)
  • How do you document ETL processes for knowledge sharing and future reference? (basic)
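
Two of the most commonly probed concepts above lend themselves to short sketches. First, incremental loads are usually answered with the watermark pattern: persist the highest key (or timestamp) already loaded, and pull only rows beyond it on the next run. This self-contained example uses SQLite to stand in for both source and target; the table names are invented.

    # Watermark-based incremental load sketch; SQLite stands in for source/target.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript(
        "CREATE TABLE src (id INTEGER PRIMARY KEY, payload TEXT);"
        "CREATE TABLE tgt (id INTEGER PRIMARY KEY, payload TEXT);"
        "CREATE TABLE watermark (last_id INTEGER);"
        "INSERT INTO watermark VALUES (0);"
        "INSERT INTO src VALUES (1, 'a'), (2, 'b'), (3, 'c');"
    )

    def incremental_load(conn):
        (last_id,) = conn.execute("SELECT last_id FROM watermark").fetchone()
        rows = conn.execute(
            "SELECT id, payload FROM src WHERE id > ? ORDER BY id", (last_id,)
        ).fetchall()
        if rows:
            conn.executemany("INSERT INTO tgt VALUES (?, ?)", rows)
            conn.execute("UPDATE watermark SET last_id = ?", (rows[-1][0],))
            conn.commit()
        return len(rows)

    print(incremental_load(conn))  # 3 -- first run moves everything
    conn.execute("INSERT INTO src VALUES (4, 'd')")
    print(incremental_load(conn))  # 1 -- only the new row moves

Second, the slowly-changing-dimensions question usually targets SCD Type 2: instead of overwriting an attribute, close out the current row and insert a new version so history is preserved. Again, the schema is invented for illustration.

    # SCD Type 2 sketch: expire the current row, insert a new version (SQLite).
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE dim_customer ("
        " sk INTEGER PRIMARY KEY AUTOINCREMENT,"  # surrogate key
        " customer_id INTEGER,"                   # natural/business key
        " city TEXT, valid_from TEXT, valid_to TEXT, is_current INTEGER)"
    )
    conn.execute(
        "INSERT INTO dim_customer"
        " (customer_id, city, valid_from, valid_to, is_current)"
        " VALUES (101, 'Pune', '2024-01-01', '9999-12-31', 1)"
    )

    def apply_scd2(conn, customer_id, new_city, change_date):
        current = conn.execute(
            "SELECT sk, city FROM dim_customer"
            " WHERE customer_id = ? AND is_current = 1",
            (customer_id,),
        ).fetchone()
        if current is None or current[1] != new_city:
            if current is not None:
                # Close out the old version.
                conn.execute(
                    "UPDATE dim_customer SET valid_to = ?, is_current = 0"
                    " WHERE sk = ?",
                    (change_date, current[0]),
                )
            # Open a new current version.
            conn.execute(
                "INSERT INTO dim_customer"
                " (customer_id, city, valid_from, valid_to, is_current)"
                " VALUES (?, ?, ?, '9999-12-31', 1)",
                (customer_id, new_city, change_date),
            )
            conn.commit()

    apply_scd2(conn, 101, "Mumbai", "2024-06-01")
    for row in conn.execute("SELECT * FROM dim_customer ORDER BY sk"):
        print(row)
    # (1, 101, 'Pune', '2024-01-01', '2024-06-01', 0)
    # (2, 101, 'Mumbai', '2024-06-01', '9999-12-31', 1)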

Closing Remarks

As you explore ETL jobs in India, remember to showcase your skills and expertise confidently during interviews. With the right preparation and a solid understanding of ETL concepts, you can embark on a rewarding career in this dynamic field. Good luck with your job search!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.
