
10314 ETL Jobs - Page 14

JobPe aggregates job listings for easy access; you apply directly on the original job portal.

3.0 - 5.0 years

6 - 13 Lacs

Gurgaon

On-site

Source: Glassdoor

Role: Data Engineer
Experience: 3–5 Years
Location: Gurgaon - Onsite
Notice Period: Immediate

Key Skills Required: Python, Apache Spark, Databricks, Machine Learning (basic to intermediate understanding), ETL/Data Pipelines, SQL (nice to have)

Role Overview
We're looking for a Data Engineer with 3–5 years of experience to work on building and optimizing data pipelines using Python and Spark, with hands-on experience in Databricks. The ideal candidate should also have exposure to implementing machine learning models and collaborating across teams to deliver scalable data solutions.

Responsibilities
- Build and maintain efficient, scalable data pipelines using Python and Apache Spark.
- Work closely with analytics and engineering teams to develop data-driven solutions.
- Use Databricks for processing, analyzing, and visualizing large datasets.
- Apply machine learning techniques for data insights and automation.
- Improve performance, reliability, and quality of data infrastructure.
- Monitor data integrity across the entire data lifecycle.

Required Qualifications
- Strong hands-on experience with Python and Apache Spark.
- Proficient in working with Databricks for engineering workflows.
- Good understanding of machine learning concepts and ability to implement models.
- Familiarity with ETL processes, data warehousing, and SQL.
- Strong communication and problem-solving skills.

Educational Background: BE/BTech/BIT/MCA/BCA or a related technical degree.

Job Type: Full-time
Pay: ₹600,000.00 - ₹1,300,000.00 per year
Schedule: Morning shift
Work Location: In person
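For illustration, a minimal sketch of the kind of Python/Spark pipeline work this role describes, assuming a Databricks-style environment; the source path and table name are hypothetical, not details from the posting.

```python
# Minimal PySpark ETL sketch: read raw events, clean them, and write a curated table.
# The path "/mnt/raw/events" and table "analytics.daily_events" are illustrative only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-events-etl").getOrCreate()

raw = spark.read.json("/mnt/raw/events")          # extract: raw JSON landed by an upstream system

clean = (
    raw.filter(F.col("event_id").isNotNull())     # transform: drop malformed rows
       .withColumn("event_date", F.to_date("event_ts"))
       .dropDuplicates(["event_id"])
)

daily = clean.groupBy("event_date", "event_type").agg(F.count("*").alias("events"))

daily.write.mode("overwrite").saveAsTable("analytics.daily_events")  # load: curated table for BI
```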

Posted 1 day ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

We're seeking a talented and passionate Trainer to join our dynamic team in making a remarkable impact on the future of technology. The ideal candidate should have a strong base in technological concepts and a keen interest in delivery and mentoring. The role involves delivering best-in-class training sessions, supporting curriculum development, and providing hands-on guidance to learners.

Responsibilities - What You'll Do

Training Coordination, Support & Delivery
- Assist in scheduling and coordinating training sessions
- Deliver classroom-based and virtual instructor-led training (ILT) sessions on various organizational products, platforms and technologies
- Conduct hands-on training, workshops, and exercises to reinforce learning
- Manage training attendance records and assessments

Learner Engagement
- Help ensure learners have access to relevant resources
- Address learner queries and create a positive learning environment
- Ensure a smooth learning experience throughout the learning cycle
- Track learners' progress through specific assessments and exercises
- Prepare learners for industry-standard certifications

Curriculum Development
- Create structured learning paths for various experience levels
- Develop course materials, decks, and guides for training
- Update training content, available in various formats, based on industry trends and technological advancements, as applicable
- Prepare learners with practical applications of product offerings' concepts

Key Skills & Experience - What We're Looking For

Technical Skills
Knowledge of any of the following technologies and industry advancements:
- Familiarity with the GenAI landscape, Machine Learning (ML), or a related area
- Proficiency in Data Engineering, Apache NiFi, flow files, data integration and flow management, ETL, and data warehousing concepts
- Knowledge of Python, SQL and other relevant programming languages
- Strong expertise in LCNC development (UI/UX principles, Java, JavaScript frameworks)
- Experience with APIs and microservices
- Fundamental understanding of web application development

Training & Mentoring Skills
- Prior experience in conducting product-based or technology-based training sessions
- Ability to simplify complex technical concepts for easy understanding
- Must have delivery experience - both virtual and in-class trainings
- Excellent articulation, collaboration and mentoring skills

Content Creation
- Experience in content creation and editing of training videos

Qualifications & Experience
- Bachelor's/Master's degree in Computer Science, Engineering or a related field
- 5+ years of experience in cloud-based technologies or Artificial Intelligence (AI)
- Experience in training or coaching in a corporate or academic environment preferred
- Must have MS PowerPoint knowledge and Camtasia or other video editing skills

Posted 1 day ago

Apply

3.0 years

0 Lacs

Gurgaon

On-site

Source: Glassdoor

Job Description

Alimentation Couche-Tard Inc. (ACT) is a global Fortune 200 company and a leader in the convenience store and fuel space, with over 17,000 stores in 31 countries serving more than 6 million customers each day.

It is an exciting time to be a part of the growing Data Engineering team at Circle K. We are driving a well-supported cloud-first strategy to unlock the power of data across the company and help teams discover, value and act on insights from data across the globe. With our strong data pipeline, this position will play a key role partnering with our Technical Development stakeholders to enable analytics for long-term success.

About the role
We are looking for a Data Engineer with a collaborative, "can-do" attitude who is committed and strives with determination and motivation to make their team successful - a Data Engineer who has experience implementing technical solutions as part of a greater data transformation strategy. This role is responsible for hands-on sourcing, manipulation, and delivery of data from enterprise business systems to the data lake and data warehouse. This role will help drive Circle K's next phase in the digital journey by transforming data to achieve actionable business outcomes.

Roles and Responsibilities
- Collaborate with business stakeholders and other technical team members to acquire and migrate data sources that are most relevant to business needs and goals
- Demonstrate technical and domain knowledge of relational and non-relational databases, data warehouses, and data lakes, among other structured and unstructured storage options
- Determine solutions that are best suited to develop a pipeline for a particular data source
- Develop data flow pipelines to extract, transform, and load data from various data sources in various forms, including custom ETL pipelines that enable model and product development
- Be efficient in ELT/ETL development using Azure cloud services and Snowflake, including testing and operational support (RCA, monitoring, maintenance)
- Work with modern data platforms including Snowflake to develop, test, and operationalize data pipelines for scalable analytics delivery
- Provide clear documentation for delivered solutions and processes, integrating documentation with the appropriate corporate stakeholders
- Identify and implement internal process improvements for data management (automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability)
- Stay current with and adopt new tools and applications to ensure high-quality and efficient solutions
- Build a cross-platform data strategy to aggregate multiple sources and process development datasets
- Be proactive in stakeholder communication; mentor/guide junior resources through regular KT/reverse KT, and help them identify production bugs/issues and provide resolution recommendations

Job Requirements
- Bachelor's degree in Computer Engineering, Computer Science or a related discipline; Master's degree preferred
- 3+ years of ETL design, development, and performance tuning using ETL tools such as SSIS/ADF in a multi-dimensional data warehousing environment
- 3+ years of experience setting up and operating data pipelines using Python or SQL
- 3+ years of advanced SQL programming: PL/SQL, T-SQL
- 3+ years of experience working with Snowflake, including Snowflake SQL, data modeling, and performance optimization
- Strong hands-on experience with cloud data platforms such as Azure Synapse and Snowflake for building data pipelines and analytics workloads
- 3+ years of strong and extensive hands-on experience in Azure, preferably on data-heavy / analytics applications leveraging relational and NoSQL databases, data warehouses and big data
- 3+ years of experience with Azure Data Factory, Azure Synapse Analytics, Azure Analysis Services, Azure Databricks, Blob Storage, Databricks/Spark, Azure SQL DW/Synapse, and Azure Functions
- 3+ years of experience in defining and enabling data quality standards for auditing and monitoring
- Strong analytical abilities and strong intellectual curiosity
- In-depth knowledge of relational database design, data warehousing and dimensional data modeling concepts
- Understanding of REST and good API design
- Experience working with Apache Iceberg, Delta tables and distributed computing frameworks
- Strong collaboration and teamwork skills; excellent written and verbal communication skills
- Self-starter, motivated, with the ability to work in a fast-paced development environment
- Agile experience highly desirable
- Proficiency in the development environment, including IDE, database server, Git, continuous integration, unit-testing tools, and defect management tools

Preferred Skills
- Strong knowledge of data engineering concepts (data pipeline creation, data warehousing, data marts/cubes, data reconciliation and audit, data management)
- Strong working knowledge of Snowflake, including warehouse management, Snowflake SQL, and data sharing techniques
- Experience building pipelines that source from or deliver data into Snowflake in combination with tools like ADF and Databricks
- Working knowledge of DevOps processes (CI/CD), Git/Jenkins version control, Master Data Management (MDM) and data quality tools
- Strong experience in ETL/ELT development, QA and operations/support processes (RCA of production issues, code/data fix strategy, monitoring and maintenance)
- Hands-on experience with databases (Azure SQL DB, MySQL, Cosmos DB, etc.), file systems (Blob Storage), and Python/Unix shell scripting
- ADF, Databricks and Azure certifications are a plus

Technologies we use: Databricks, Azure SQL DW/Synapse, Azure Tabular, Azure Data Factory, Azure Functions, Azure Containers, Docker, DevOps, Python, PySpark, Scripting (PowerShell, Bash), Git, Terraform, Power BI, Snowflake

#LI-DS1
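For context, a hedged sketch of one ELT load step into Snowflake of the kind this posting mentions, as it might be scripted from an orchestrator such as ADF or Databricks; the account variables, stage, and table names are placeholders, not details from the posting.

```python
# Minimal Snowflake ELT sketch: bulk-load staged files, then merge into a reporting table.
# All connection details, stage and table names below are illustrative placeholders.
import os
import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="STAGING",
)

try:
    cur = conn.cursor()
    # COPY INTO is Snowflake's standard bulk-load from a stage (e.g. files landed in Azure Blob).
    cur.execute(
        "COPY INTO STAGING.SALES_RAW FROM @AZURE_BLOB_STAGE/sales/ "
        "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)"
    )
    # Merge the staged rows into the reporting table (the "T" of ELT happens in-warehouse).
    cur.execute("""
        MERGE INTO REPORTING.SALES t
        USING STAGING.SALES_RAW s ON t.order_id = s.order_id
        WHEN MATCHED THEN UPDATE SET t.amount = s.amount
        WHEN NOT MATCHED THEN INSERT (order_id, amount) VALUES (s.order_id, s.amount)
    """)
finally:
    conn.close()
```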

Posted 1 day ago

Apply

10.0 years

2 - 8 Lacs

Gurgaon

On-site

Source: Glassdoor

Requisition Number: 101362
Architect II
Location: The role will be a hybrid position located in Delhi NCR, Hyderabad, Pune, Trivandrum or Bangalore, India

Insight at a Glance
- 14,000+ engaged teammates globally
- #20 on Fortune's World's Best Workplaces™ list
- $9.2 billion in revenue
- Received 35+ industry and partner awards in the past year
- $1.4M+ total charitable contributions in 2023 by Insight globally

Now is the time to bring your expertise to Insight. We are not just a tech company; we are a people-first company. We believe that by unlocking the power of people and technology, we can accelerate transformation and achieve extraordinary results. As a Fortune 500 Solutions Integrator with deep expertise in cloud, data, AI, cybersecurity, and intelligent edge, we guide organisations through complex digital decisions.

About the role
The Architect II - Data will focus on leading our Business Intelligence (BI) and Data Warehousing (DW) initiatives. This role involves designing and implementing end-to-end data pipelines using cloud services and data frameworks. They will collaborate with stakeholders and ETL/BI developers in an agile environment to create scalable, secure data architectures, ensuring alignment with business requirements, industry best practices, and regulatory compliance.

Responsibilities:
- Architect and implement end-to-end data pipelines, data lakes, and warehouses using modern cloud services and architectural patterns.
- Develop and build analytics tools that deliver actionable insights to the business.
- Integrate and manage large, complex data sets to meet strategic business requirements.
- Optimize data processing workflows using frameworks such as PySpark.
- Establish and enforce best practices for data quality, integrity, security, and performance across the entire data ecosystem.
- Collaborate with cross-functional teams to prioritize deliverables and design solutions.
- Develop compelling business cases and return on investment (ROI) analyses to support strategic initiatives.
- Drive process improvements for enhanced data delivery speed and reliability.
- Provide technical leadership, training, and mentorship to team members, promoting a culture of excellence.

Qualification:
- 10+ years in Business Intelligence (BI) solution design, with 8+ years specializing in ETL processes and data warehouse architecture.
- 8+ years of hands-on experience with Azure Data services including Azure Data Factory, Azure Databricks, Azure Data Lake Gen2, Azure SQL DB, Synapse, Power BI, and MS Fabric (knowledge).
- Strong Python and PySpark software engineering proficiency, coupled with a proven track record of building and optimizing big data pipelines, architectures, and datasets.
- Proficient in transforming, processing, and extracting insights from vast, disparate datasets, and building robust data pipelines for metadata, dependency, and workload management.
- Familiarity with software development lifecycles/methodologies, particularly Agile.
- Experience with SAP/ERP/Datasphere data modeling is a significant plus.
- Excellent presentation and collaboration skills, capable of creating formal documentation and supporting cross-functional teams in a dynamic environment.
- Strong problem-solving, time management, and organizational abilities.
- Keen to learn new languages and technologies continually.
- Graduate degree in Computer Science, Statistics, Informatics, Information Systems, or an equivalent field.

What you can expect
We're legendary for taking care of you and your family and for helping you engage with your local community. We want you to enjoy a full, meaningful life and own your career at Insight. Some of our benefits include the freedom to work from another location - even an international destination - for up to 30 consecutive calendar days per year.

What really sets us apart are our core values of Hunger, Heart, and Harmony, which guide everything we do, from building relationships with teammates, partners, and clients to making a positive impact in our communities.

Join us today, your ambITious journey starts here.

Insight is an equal opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, sexual orientation or any other characteristic protected by law. When you apply, please tell us the pronouns you use and any reasonable adjustments you may need during the interview process. At Insight, we celebrate diversity of skills and experience, so even if you don't feel like your skills are a perfect match, we still want to hear from you!

Insight India Location: Level 16, Tower B, Building No 14, DLF Cyber City IT/ITES SEZ, Sector 24 & 25 A, Gurugram, Gurgaon, HR 122002, India

Posted 1 day ago

Apply

4.0 - 8.0 years

25 - 30 Lacs

Pune

Hybrid

Source: Naukri

So, what's the role all about?
As a Data Engineer, you will be responsible for designing, building, and maintaining large-scale data systems, as well as working with cross-functional teams to ensure efficient data processing and integration. You will leverage your knowledge of Apache Spark to create robust ETL processes, optimize data workflows, and manage high volumes of structured and unstructured data.

How will you make an impact?
- Design, implement, and maintain data pipelines using Apache Spark for processing large datasets.
- Work with data engineering teams to optimize data workflows for performance and scalability.
- Integrate data from various sources, ensuring clean, reliable, and high-quality data for analysis.
- Develop and maintain data models, databases, and data lakes.
- Build and manage scalable ETL solutions to support business intelligence and data science initiatives.
- Monitor and troubleshoot data processing jobs, ensuring they run efficiently and effectively.
- Collaborate with data scientists, analysts, and other stakeholders to understand business needs and deliver data solutions.
- Implement data security best practices to protect sensitive information.
- Maintain a high level of data quality and ensure timely delivery of data to end-users.
- Continuously evaluate new technologies and frameworks to improve data engineering processes.

Have you got what it takes?
- 8-11 years of experience as a Data Engineer, with a strong focus on Apache Spark and big data technologies.
- Expertise in Spark SQL, DataFrames, and RDDs for data processing and analysis.
- Proficiency in programming languages such as Python, Scala, or Java for data engineering tasks.
- Hands-on experience with cloud platforms like AWS, specifically with data processing and storage services (e.g., S3, BigQuery, Redshift, Databricks).
- Experience with ETL frameworks and tools such as Apache Kafka, Airflow, or NiFi.
- Strong knowledge of data warehousing concepts and technologies (e.g., Redshift, Snowflake, BigQuery).
- Familiarity with containerization technologies like Docker and Kubernetes.
- Knowledge of SQL and relational databases, with the ability to design and query databases effectively.
- Solid understanding of distributed computing, data modeling, and data architecture principles.
- Strong problem-solving skills and the ability to work with large and complex datasets.
- Excellent communication and collaboration skills to work effectively with cross-functional teams.

What's in it for you?
Join an ever-growing, market-disrupting, global company where the teams - comprised of the best of the best - work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NiCE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NiCEr!

Enjoy NiCE-FLEX!
At NiCE, we work according to the NiCE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.

Requisition ID: 7235
Reporting into: Tech Manager
Role Type: Individual Contributor
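As an aside, a minimal sketch of the kind of scheduled ETL orchestration this posting alludes to (Airflow is one of the tools it names), assuming Airflow 2.x; the DAG id, schedule, and task callables are hypothetical placeholders.

```python
# Skeleton Airflow 2.x DAG: three-step extract/transform/load job run daily.
# The dag_id "orders_etl" and the empty task bodies are illustrative only.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    ...  # e.g. pull data from a source system into staging storage


def transform(**context):
    ...  # e.g. clean and reshape the staged data


def load(**context):
    ...  # e.g. write the transformed data into the warehouse


with DAG(
    dag_id="orders_etl",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load  # run order: extract, then transform, then load
```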

Posted 1 day ago

Apply

0 years

0 Lacs

Gurgaon

On-site

Source: Glassdoor

Must have: Strong PostgreSQL knowledge - writing procedures and functions, writing dynamic code, performance tuning in PostgreSQL, complex queries, and UNIX.

Good to have: IDMC or any other ETL tool knowledge, Airflow DAGs, Python, MS calls.

About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth - one that seeks to provide you with exciting projects, opportunities and work with state-of-the-art technologies throughout your career with us.

Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence.

Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by applicable law. All employment is decided on the basis of qualifications, merit, and business need.

Posted 1 day ago

Apply

0 years

0 Lacs

Gurgaon

On-site

Source: Glassdoor

Job Description:
We are seeking a skilled PL/SQL Developer with hands-on experience in the Insurance domain, especially with Ingenium (Policy Administration System). The ideal candidate will support and enhance legacy systems, contribute to data migration projects, and collaborate closely with business and technical teams to ensure seamless insurance operations.

Key Responsibilities:
- Develop and maintain complex PL/SQL scripts, procedures, triggers, and packages.
- Work on enhancements, bug fixes, and performance tuning of Oracle-based insurance applications.
- Support and maintain the Ingenium PAS for life insurance policies.
- Participate in data analysis, ETL processing, and migration activities from Ingenium.
- Collaborate with business analysts, QA teams, and end-users to deliver solutions aligned with business needs.
- Document technical specifications and workflows for future reference.

Required Skills:
- Strong hands-on experience in Oracle PL/SQL development.
- Experience working with Ingenium (Life/Annuities Policy Administration System).
- Understanding of insurance products like life, annuities, riders, underwriting, and claims.
- Experience with batch processing, UAT support, and production issue resolution.
- Familiarity with SDLC methodologies; Agile/Scrum is a plus.

Preferred Qualifications:
- Knowledge of mainframe/COBOL systems is a plus (if Ingenium is on mainframe).
- Experience in data migration projects involving Ingenium.
- Bachelor's degree in Computer Science or a related field.

Job Type: Full-time
Pay: ₹100,000.00 - ₹1,100,000.00 per month
Schedule: Day shift
Work Location: In person

Posted 1 day ago

Apply

8.0 years

20 - 28 Lacs

Gurgaon

On-site

Source: Glassdoor

Job Title: Tableau Developer
Location: Gurgaon (Work From Office)
Job Type: Full Time Role
Experience Level: 8-12 Years

Job Summary:
We are seeking a talented Tableau Developer to join our Business Intelligence and Analytics team. The ideal candidate will be responsible for designing, developing, and maintaining visually compelling and insightful dashboards and reports using Tableau. You will work closely with business stakeholders to understand requirements, translate data into actionable insights, and support data-driven decision-making.

Key Responsibilities:
- Design and develop interactive Tableau dashboards, visualizations, and reports based on business needs.
- Collaborate with business analysts, data engineers, and stakeholders to gather requirements and define KPIs.
- Optimize dashboard performance and usability.
- Write complex SQL queries to extract and transform data from various sources (e.g., SQL Server, Oracle, Snowflake).
- Conduct data validation and ensure data quality and accuracy.
- Schedule and publish dashboards to Tableau Server / Tableau Online for end-user access.
- Provide training, documentation, and support to business users.

Required Skills and Qualifications:
- Bachelor's degree in Computer Science, Information Systems, Statistics, or a related field.
- 8-12+ years of hands-on experience with Tableau Desktop and Tableau Server.
- Proficiency in SQL for data manipulation and analysis.
- Strong understanding of data warehousing concepts and relational databases.
- Ability to analyze large datasets and turn them into meaningful visual insights.
- Experience with data blending, LOD (Level of Detail) expressions, filters, parameters, and calculated fields in Tableau.

Preferred Qualifications:
- Experience with cloud data platforms (e.g., Snowflake, Redshift, BigQuery).
- Knowledge of ETL tools (e.g., Alteryx, Talend, Informatica) or scripting languages (Python, R).
- Understanding of data governance and security principles.
- Tableau certification (Desktop Specialist, Certified Associate, etc.) is a plus.
- Exposure to Agile methodologies.

Job Type: Full-time
Pay: ₹2,000,000.00 - ₹2,800,000.00 per year
Work Location: In person

Posted 1 day ago

Apply

6.0 years

0 - 0 Lacs

Gurgaon

On-site

Source: Glassdoor

Job Description:
We are seeking a highly skilled and experienced Senior BI Developer / SQL Developer to join our team. The ideal candidate will have strong proficiency in SQL, hands-on experience with BI tools, and a deep understanding of data modeling, ETL processes, and data warehousing concepts. You will work closely with cross-functional teams to design, develop, and maintain robust reporting and analytics solutions that support key business decisions.

Key Responsibilities:
- Develop, maintain, and optimize complex SQL queries, stored procedures, and scripts across RDBMS such as MySQL or PostgreSQL.
- Design and build interactive dashboards and reports using BI tools such as Dundas BI, Power BI, Tableau, or Cognos.
- Translate business requirements into technical solutions using data modeling and database design best practices.
- Implement and support ETL processes to integrate data from various sources into data warehouses.
- Monitor and tune database performance, ensuring high availability and efficiency.
- Collaborate with business analysts, data engineers, and stakeholders to deliver high-quality, data-driven insights.
- Work in Agile/Scrum teams, actively participating in sprints, stand-ups, and retrospectives.
- Assist in migrating data and reporting solutions to cloud platforms like Azure or AWS.
- Provide documentation, training, and support to end-users on report usage and self-service BI tools.
- Ensure data integrity, security, and governance across reporting systems.

Required Qualifications:
- Bachelor's degree in Computer Science, Information Systems, Engineering, or a related field.
- 6+ years of experience as a Report Writer, BI Developer, or SQL Developer.
- Advanced proficiency in SQL and experience with MySQL, PostgreSQL, or similar RDBMS.
- Proven experience with BI/reporting tools like Dundas BI, Power BI, Tableau, or Cognos.
- Strong understanding of data modeling, relational database design, and data warehousing concepts.
- Familiarity with ETL tools and performance tuning of large datasets.
- Exposure to cloud environments such as Microsoft Azure or AWS is a plus.
- Excellent problem-solving and analytical skills with attention to detail.

FOR IMMEDIATE RESPONSE, SEND YOUR UPDATED CV TO: amrit@qapsoftware.com

Job Type: Full-time
Pay: ₹80,000.00 - ₹91,000.00 per month

Application Question(s):
- How many years of experience do you have in IT?
- How many years of experience do you have in RDBMS?
- How many years of experience do you have in data modeling and data warehousing?
- How many years of experience do you have in BI tools?

Work Location: In person

Posted 1 day ago

Apply

3.0 - 5.0 years

8 - 16 Lacs

Bengaluru

Hybrid

Source: Naukri

OVERVIEW
The Data Engineer will work closely with clients and the eCS Biometrics team to optimize the elluminate platform for end-to-end solutions to aggregate, transform, access and report on clinical data throughout the life cycle of a clinical trial. This includes study design in elluminate, collaboration on specifications, and configuration of the various modules, including Data Central, Clinical Data Analytics and Trial Operational Analytics, Risk-Based Quality Management (RBQM), Statistical Computing Environment (SCE) and Operational Insights. The Data Engineer will be involved in standard ETL activities as well as programming custom listings, visualizations and analytics tools using Mapper and Qlik. The position involves a high level of quality control as well as adherence to standard operating procedures and work instructions, and a constant drive towards automation and process improvement.

KEY TASKS & RESPONSIBILITIES
- Design, develop, test, and deploy highly efficient code for supporting SDTM, custom reports and visualizations using tools like MS SQL, elluminate® Mapper and Qlik
- Configure ETL processes to support the aggregation and standardization of clinical data from various sources, including EDC systems, SAS and central laboratory vendors
- Work with Analytics developers, other team members and clients to review the business requirements and translate them into database objects and visualizations
- Manage multiple timelines and deliverables (for single or multiple clients) and manage client communications as assigned
- Provide diagnostic support and fix defects as needed
- Ensure compliance with eClinical Solutions/industry quality standards, regulations, guidelines, and procedures
- Other duties as assigned

CANDIDATE'S PROFILE

Education & Experience
- 3+ years of professional experience preferred
- Bachelor's degree or equivalent experience preferred
- Experience with database/warehouse architecture, design and development preferred
- Knowledge of various data platforms and warehouses including SQL Server, DB2, Teradata, AWS, Azure, Snowflake, etc.
- Understanding of cloud / hybrid data architecture concepts is a plus
- Knowledge of clinical trial data is a plus - CDISC ODM, SDTM, or ADaM standards
- Experience in the Pharmaceutical/Biotechnology/Life Science industry is a plus

Professional Skills
- Critical thinking, problem solving and strong initiative
- Communication and task management skills while working with technical and non-technical teams (both internal to eCS and clients)
- Must be team oriented with strong collaboration, prioritization, and adaptability skills
- Excellent knowledge of English; verbal and written communication skills with the ability to interact with users and clients and provide solutions
- Excited to learn new tools and product modules and adapt to changing technology and requirements
- Experience in the Life Sciences industry, CRO / clinical trial regulated environment preferred

Technical Skills
- Proficient in SQL, T-SQL, PL/SQL programming
- Experience in Microsoft Office applications, specifically MS Project and MS Excel
- Familiarity with multiple database platforms: Oracle, SQL Server, Teradata, DB2
- Familiarity with data reporting tools: QlikSense, QlikView, Spotfire, Tableau, JReview, Business Objects, Cognos, MicroStrategy, IBM DataStage, Informatica, Spark or related
- Familiarity with other languages and concepts: .NET, C#, Python, R, Java, HTML, SSRS, AWS, Azure, Spark, REST APIs, Big Data, ETL, data pipelines, data modelling, data analytics, BI, data warehouse, data lake or related

Posted 1 day ago

Apply

4.0 years

0 - 0 Lacs

Mohali

On-site

Source: Glassdoor

Job Description:
Should have 4+ years of hands-on experience in algorithms and implementation of analytics solutions in predictive analytics, text analytics and image analytics. Should have hands-on experience in leading a team of data scientists; works closely with the client's technical team to plan, develop and execute on client requirements, providing technical expertise and project leadership. Leads efforts to foster innovative ideas for developing high-impact solutions. Evaluates and leads a broad range of forward-looking analytics initiatives, tracks emerging data science trends, and drives knowledge sharing. Engages key stakeholders to source, mine and validate data and findings and to confirm business logic and assumptions in order to draw conclusions. Helps design and develop advanced analytic solutions across functional areas as per requirements/opportunities.

Technical Role and Responsibilities
- Demonstrated strong capability in statistical/mathematical modelling, Machine Learning or Artificial Intelligence
- Demonstrated skills in programming for implementation and deployment of algorithms, preferably in statistical/ML-oriented languages such as Python
- Sound experience with traditional as well as modern statistical techniques, including Regression, Support Vector Machines, Regularization, Boosting, Random Forests, and other Ensemble Methods
- Visualization tool experience - preferably with Tableau or Power BI
- Sound knowledge of ETL practices, preferably Spark in Databricks, and cloud big data technologies like AWS, Google, Microsoft, or Cloudera
- Communicate complex quantitative analysis as lucid, precise, clear and actionable insights
- Develop new practices and methodologies using statistical methods, machine learning and predictive models under mentorship
- Carry out statistical and mathematical modelling, solving complex business problems and delivering innovative solutions using state-of-the-art tools and cutting-edge technologies for big data and beyond
- Preferred to have a Bachelor's/Master's in Statistics/Machine Learning/Data Science/Analytics
- Should be a Data Science professional with a knack for solving problems using cutting-edge ML/DL techniques and implementing solutions leveraging cloud-based infrastructure
- Should be strong in GCP, TensorFlow, NumPy, Pandas, Python, AutoML, BigQuery, machine learning, artificial intelligence, and deep learning

Exposure to the skills below is preferred:
- Preferred tech skills: Python, Computer Vision, Machine Learning, RNN, Data Visualization, Natural Language Processing, Voice Modulation, Speech to Text, spaCy, LSTM, Object Detection, scikit-learn, NumPy, NLTK, Matplotlib, Cufflinks, seaborn, Image Processing, Neural Networks, YOLO, DarkFlow, Darknet, PyTorch, CNN, TensorFlow, Keras, U-Net, Image Segmentation, ModeNet, OCR, OpenCV, Pandas, Scrapy, BeautifulSoup, LabelImg, Git
- Machine Learning, Deep Learning, Computer Vision, Natural Language Processing, Statistics
- Programming languages: Python
- Libraries & software packages: TensorFlow, Keras, OpenCV, Pillow, Scikit-Learn, Flask, NumPy, Pandas, Matplotlib, Docker
- Cloud services: Compute Engine, GCP AI Platform, Cloud Storage, GCP AI & ML APIs

Job Types: Full-time, Permanent, Fresher
Pay: ₹30,000.00 - ₹80,000.00 per month
Education: Bachelor's (Preferred)
Experience: Machine learning: 4 years (Preferred)
Work Location: In person
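Purely as an illustration of the supervised-learning work mentioned above (Random Forests are one of the ensemble methods the posting lists), a tiny scikit-learn sketch on a built-in toy dataset; the dataset and hyperparameters are arbitrary choices, not anything specified by the role.

```python
# Toy ensemble-model sketch: train a random forest and report held-out accuracy.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)  # illustrative settings
model.fit(X_train, y_train)

print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```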

Posted 1 day ago

Apply

3.0 years

0 Lacs

Delhi

Remote

Source: Glassdoor

Apache Superset Data Engineer
Experience: 3 - 6 years
Location: Bhubaneswar, Delhi - NCR, Remote Working

About the Job
The Apache Superset Data Engineer plays a key role in designing, developing, and maintaining scalable data pipelines and analytics infrastructure, with a primary emphasis on data visualization and dashboarding using Apache Superset. This role sits at the intersection of data engineering and business intelligence, enabling stakeholders to access accurate, actionable insights through intuitive dashboards and reports.

Core Responsibilities
- Create, customize, and maintain interactive dashboards in Apache Superset to support KPIs, experimentation, and business insights
- Work closely with analysts, BI teams, and business users to gather requirements and deliver effective Superset-based visualizations
- Perform data validation, feature engineering, and exploratory data analysis to ensure data accuracy and integrity
- Analyze A/B test results and deliver insights that inform business strategies
- Establish and maintain standards for statistical testing, data validation, and analytical workflows
- Integrate Superset with various database systems (e.g., MySQL, PostgreSQL) and manage associated drivers and connections
- Ensure Superset deployments are secure, scalable, and high-performing
- Clearly communicate findings and recommendations to both technical and non-technical stakeholders

Required Skills
- Proven expertise in building dashboards and visualizations using Apache Superset
- Strong command of SQL and experience working with relational databases like MySQL or PostgreSQL
- Proficiency in Python (or Java) for data manipulation and workflow automation
- Solid understanding of data modelling, ETL/ELT pipelines, and data warehousing principles
- Excellent problem-solving skills and a keen eye for data quality and detail
- Strong communication skills, with the ability to simplify complex technical concepts for non-technical audiences
- Nice to have: familiarity with cloud platforms (AWS, ECS)

Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field
- 3+ years of relevant experience
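For context on the database-integration duty above: Superset registers databases through SQLAlchemy connection URIs. Below is a hedged sketch that smoke-tests candidate URIs with SQLAlchemy before they are added in Superset; the hosts, credentials, and database names are placeholders, and the psycopg2/pymysql drivers are assumed to be installed.

```python
# Validate SQLAlchemy URIs (the format Superset uses for database connections)
# before registering them; all connection details below are illustrative placeholders.
from sqlalchemy import create_engine, text

CANDIDATE_URIS = [
    "postgresql+psycopg2://analytics:secret@pg-host:5432/warehouse",
    "mysql+pymysql://analytics:secret@mysql-host:3306/warehouse",
]

for uri in CANDIDATE_URIS:
    engine = create_engine(uri, pool_pre_ping=True)
    try:
        with engine.connect() as conn:
            conn.execute(text("SELECT 1"))  # simple smoke test of driver and connectivity
        print(f"OK: {uri}")
    except Exception as exc:  # report any driver or connection failure and keep checking
        print(f"FAILED: {uri} -> {exc}")
```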

Posted 1 day ago

Apply

0 years

0 Lacs

Delhi

On-site

Source: Glassdoor

Job requisition ID: 84391
Date: Jun 16, 2025
Location: Delhi
Designation: Consultant
Entity: ETL

Posted 1 day ago

Apply

4.0 - 6.0 years

0 Lacs

Bhubaneshwar

On-site

Source: Glassdoor

Position: Data Migration Engineer (NV46FCT RM 3324)

Required Qualifications:
- 4–6 years of experience in data migration, data integration, and ETL development.
- Hands-on experience with both relational (PostgreSQL, MySQL, Oracle, SQL Server) and NoSQL (MongoDB, Cassandra, DynamoDB) databases.
- Experience in Google BigQuery for data ingestion, transformation, and performance optimization.
- Proficiency in SQL and scripting languages such as Python or Shell for custom ETL logic.
- Familiarity with ETL tools like Talend, Apache NiFi, Informatica, or AWS Glue.
- Experience working in cloud environments such as AWS, GCP, or Azure.
- Solid understanding of data modeling, schema design, and transformation best practices.

Preferred Qualifications:
- Experience in BigQuery optimization, federated queries, and integration with external data sources.
- Exposure to data warehouses and lakes such as Redshift, Snowflake, or BigQuery.
- Experience with streaming data ingestion tools like Kafka, Debezium, or Google Dataflow.
- Familiarity with workflow orchestration tools such as Apache Airflow or dbt.
- Knowledge of data security, masking, encryption, and compliance requirements in migration scenarios.

Soft Skills:
- Strong problem-solving and analytical mindset with high attention to data quality.
- Excellent communication and collaboration skills to work with engineering and client teams.
- Ability to handle complex migrations under tight deadlines with minimal supervision.

Job Category: Digital_Cloud_Web Technologies
Job Type: Full Time
Job Location: Bhubaneshwar, Noida
Experience: 4-6 years
Notice period: 0-30 days
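For illustration of the BigQuery ingestion work this role lists, a hedged sketch using the google-cloud-bigquery client; the bucket, dataset, and table names are placeholders and application-default credentials are assumed.

```python
# Minimal BigQuery migration-load sketch: bulk-load CSV extracts from GCS into a staging table.
# "gs://migration-staging/..." and "analytics_dataset.customers_staging" are placeholders.
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,                       # infer schema from the files for a first pass
    write_disposition="WRITE_TRUNCATE",    # replace the staging table on each run
)

load_job = client.load_table_from_uri(
    "gs://migration-staging/customers/*.csv",
    "analytics_dataset.customers_staging",
    job_config=job_config,
)
load_job.result()  # block until the load job finishes (raises on failure)

table = client.get_table("analytics_dataset.customers_staging")
print(f"Loaded {table.num_rows} rows")
```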

Posted 1 day ago

Apply

5.0 years

0 Lacs

Orissa

Remote

Source: Glassdoor

No. of Positions: 1
Position: Lead Data Engineer
Location: Hybrid or Remote
Total Years of Experience: 5+ years

Key Responsibilities:
- Build ETL (extract, transform, and load) jobs using Fivetran and dbt for our internal projects and for customers that use platforms like Azure, Salesforce and AWS technologies.
- Monitor active ETL jobs in production.
- Build out data lineage artifacts to ensure all current and future systems are properly documented.
- Assist with building out design/mapping documentation to ensure development is clear and testable for QA and UAT purposes.
- Assess current and future data transformation needs to recommend, develop, and train on new data integration tool technologies.
- Discover efficiencies with shared data processes and batch schedules to help ensure no redundancy and smooth operations.
- Assist the Data Quality Analyst to implement checks and balances across all jobs to ensure data quality throughout the entire environment for current and future batch jobs.
- Hands-on experience in developing and implementing large-scale data warehouses, Business Intelligence and MDM solutions, including Data Lakes/Data Vaults.

Required Skills:
- This job has no supervisory responsibilities.
- Bachelor's degree in Computer Science, Math, Software Engineering, Computer Engineering, or a related field AND 6+ years' experience in business analytics, data science, software development, data modeling or data engineering work.
- 5+ years' experience with strong SQL query/development skills.
- Develop ETL routines that manipulate and transfer large volumes of data and perform quality checks.
- Hands-on experience with ETL tools (e.g. Informatica, Talend, dbt, Azure Data Factory).
- Experience working in the healthcare industry with PHI/PII.
- Creative, lateral, and critical thinker.
- Excellent communicator.
- Well-developed interpersonal skills.
- Good at prioritizing tasks and time management.
- Ability to describe, create and implement new solutions.
- Experience with related or complementary open source software platforms and languages (e.g. Java, Linux, Apache, Perl/Python/PHP, Chef).
- Knowledge / hands-on experience with BI tools and reporting software (e.g. Cognos, Power BI, Tableau).

Don't see a role that fits? We are growing rapidly and always on the lookout for passionate and smart engineers! If you are passionate about your career, reach out to us at careers@hashagile.com.
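As a hedged aside on the dbt-based ETL jobs mentioned above: one common pattern is to trigger and monitor a dbt selection from a Python wrapper so a scheduler can alert on failures. The project directory and model selector below are hypothetical, and a configured dbt CLI with profiles.yml is assumed.

```python
# Sketch: run a dbt model selection from Python and fail loudly for monitoring/alerting.
# "./analytics_dbt" and the "staging" selector are illustrative placeholders.
import subprocess
import sys


def run_dbt(project_dir: str, select: str) -> None:
    """Run a dbt selection and raise if it fails, so an orchestrator can alert."""
    result = subprocess.run(
        ["dbt", "run", "--project-dir", project_dir, "--select", select],
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    if result.returncode != 0:
        print(result.stderr, file=sys.stderr)
        raise RuntimeError(f"dbt run failed for selection '{select}'")


if __name__ == "__main__":
    run_dbt(project_dir="./analytics_dbt", select="staging")
```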

Posted 1 day ago

Apply

3.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Source: LinkedIn

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY-Consulting - Data and Analytics - GIG - Data Modeller

EY's Consulting Services is a unique, industry-focused business unit that provides a broad range of integrated services that leverage deep industry experience with strong functional and technical capabilities and product knowledge. EY's financial services practice provides integrated Consulting services to financial institutions and other capital markets participants, including commercial banks, retail banks, investment banks, broker-dealers & asset management firms, and insurance firms from leading Fortune 500 Companies. Within EY's Consulting Practice, the Data and Analytics team solves big, complex issues and capitalizes on opportunities to deliver better working outcomes that help expand and safeguard businesses, now and in the future. This way we help create a compelling business case for embedding the right analytical practice at the heart of clients' decision-making.

The opportunity
We're looking for a candidate with 3-7 years of expertise in data science, data analysis and visualization skills, to act as Technical Lead to a larger team in the EY GDS DnA team and work on various Data and Analytics projects.

Your Key Responsibilities
- Lead and mentor a team throughout design, development and delivery phases and keep the team intact in high-pressure situations.
- Work as a senior team member to contribute to various technical streams of EY DnA implementation projects.
- Be client focused with good presentation, communication and relationship building skills.
- Complete assigned tasks on time and provide regular status reporting to the lead.
- Collaborate with the technology team and support the development of analytical models with the effective use of data and analytic techniques; validate the model results and articulate the insights to the business team.
- Interface and communicate with the onsite teams directly to understand requirements and determine the optimum solutions.
- Create technical solutions as per business needs by translating their requirements and finding innovative solution options.
- Provide product and design level functional and technical expertise along with best practices.
- Get involved in business development activities like creating proof of concepts (POCs) and points of view (POVs), assist in proposal writing and service offering development, and develop creative PowerPoint content for presentations.
- Participate in organization-level initiatives and operational activities.
- Ensure continual knowledge management and contribute to internal L&D teams.
- Build a quality work culture and foster teamwork, leading by example.

Skills and attributes for success
- Use an issue-based approach to deliver growth, market and portfolio strategy engagements for corporates.
- Strong communication, presentation and team building skills and experience in producing high quality reports, papers, and presentations.
- Experience in executing and managing research and analysis of companies and markets, preferably from a commercial due diligence standpoint.

To qualify for the role, you must have
- BE/BTech/MCA/MBA with 3+ years of industry experience with machine learning, visualization, data science and related offerings.
- At least 3+ years of experience in BI and Analytics.
- Ability to deliver end-to-end data solutions covering analysis, mapping, profiling, ETL architecture and data modelling.
- Knowledge and experience of at least one Insurance domain engagement, Life or Property and Casualty.
- Understanding of Business Intelligence, Data Warehousing and Data Modelling.
- Good experience using CA Erwin or another similar modelling tool is an absolute must.
- Experience of working with Guidewire DataHub & InfoCenter.
- Strong knowledge of relational and dimensional data modelling concepts.
- Develop logical and physical data flow models for ETL applications; translate data access, transformation and movement requirements into functional requirements and mapping designs.
- Strong knowledge of data architecture, database structure, data analysis and SQL skills.
- Experience in data management analysis; analyse business objectives and evaluate data solutions to meet customer needs.
- Establish scalable, efficient, automated processes for large-scale data analyses and management.
- Prepare and analyse historical data and identify patterns.
- Collaborate with the technology team and support the development of analytical models with the effective use of data and analytic techniques; validate the model results and articulate the insights to the business team.
- Drive the business requirements gathering for analytics projects.
- Intellectual curiosity - eagerness to learn new things.
- Experience with unstructured data is an added advantage.
- Ability to effectively visualize and communicate analysis results.
- Experience with big data and cloud preferred.
- Experience, interest and adaptability to working in an Agile delivery environment.
- Ability to work in a fast-paced environment where change is a constant, and ability to handle ambiguous requirements.
- Exceptional interpersonal and communication skills (written and verbal).

Ideally, you'll also have
- Good exposure to any ETL tools.
- Good to have knowledge about P&C insurance.
- Understanding of Business Intelligence, Data Warehousing and Data Modelling.
- Must have led a team size of at least 4 members.
- Experience in the Insurance and Banking domains.
- Prior client facing skills; self-motivated and collaborative.

What We Look For
A team of people with commercial acumen, technical experience and enthusiasm to learn new things in this fast-moving environment. An opportunity to be a part of a market-leading, multi-disciplinary team of 1400+ professionals, in the only integrated global transaction business worldwide. Opportunities to work with EY Consulting practices globally with leading businesses across a range of industries.

What Working At EY Offers
At EY, we're dedicated to helping our clients, from start-ups to Fortune 500 companies - and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer:
- Support, coaching and feedback from some of the most engaging colleagues around
- Opportunities to develop new skills and progress your career
- The freedom and flexibility to handle your role in a way that's right for you

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 1 day ago

Apply

0 years

0 Lacs

Noida

On-site

Source: Glassdoor

Posted On: 16 Jun 2025
Location: Noida, UP, India
Company: Iris Software

Why Join Us?
Are you inspired to grow your career at one of India's Top 25 Best Workplaces in the IT industry? Do you want to do the best work of your life at one of the fastest growing IT services companies? Do you aspire to thrive in an award-winning work culture that values your talent and career aspirations? It's happening right here at Iris Software.

About Iris Software
At Iris Software, our vision is to be our client's most trusted technology partner, and the first choice for the industry's top professionals to realize their full potential. With over 4,300 associates across India, U.S.A, and Canada, we help our enterprise clients thrive with technology-enabled transformation across financial services, healthcare, transportation & logistics, and professional services. Our work covers complex, mission-critical applications with the latest technologies, such as high-value complex Application & Product Engineering, Data & Analytics, Cloud, DevOps, Data & MLOps, Quality Engineering, and Business Automation.

Working at Iris
Be valued, be inspired, be your best. At Iris Software, we invest in and create a culture where colleagues feel valued, can explore their potential, and have opportunities to grow. Our employee value proposition (EVP) is about "Being Your Best" - as a professional and person. It is about being challenged by work that inspires us, being empowered to excel and grow in your career, and being part of a culture where talent is valued. We're a place where everyone can discover and be their best version.

Job Description
- ETL, Shell / Python scripting
- Hadoop Cloudera Data Lake
- Golden Source or Markit EDM
- Database expertise
- DevOps CI/CD experience

Mandatory Competencies
- DevOps - Shell Scripting
- Python - Python
- ETL - Azure Data Factory
- Data on Cloud - Azure Data Lake (ADL)
- DevOps - CI/CD
- Database - PL SQL
- Database - Oracle
- Database - SQL
- Beh - Communication and collaboration

Perks and Benefits for Irisians
At Iris Software, we offer world-class benefits designed to support the financial, health and well-being needs of our associates to help achieve harmony between their professional and personal growth. From comprehensive health insurance and competitive salaries to flexible work arrangements and ongoing learning opportunities, we're committed to providing a supportive and rewarding work environment. Join us and experience the difference of working at a company that values its employees' success and happiness.

Posted 1 day ago

Apply

7.0 years

6 - 7 Lacs

Noida

On-site

Source: Glassdoor

At Cotality, we are driven by a single mission—to make the property industry faster, smarter, and more people-centric. Cotality is the trusted source for property intelligence, with unmatched precision, depth, breadth, and insights across the entire ecosystem. Our talented team of 5,000 employees globally uses our network, scale, connectivity and technology to drive the largest asset class in the world. Join us as we work toward our vision of fueling a thriving global property ecosystem and a more resilient society. Cotality is committed to cultivating a diverse and inclusive work culture that inspires innovation and bold thinking; it's a place where you can collaborate, feel valued, develop skills and directly impact the real estate economy. We know our people are our greatest asset. At Cotality, you can be yourself, lift people up and make an impact. By putting clients first and continuously innovating, we're working together to set the pace for unlocking new possibilities that better serve the property industry. Job Description: In India, we operate as Next Gear India Private Limited, a fully-owned subsidiary of Cotality with offices in Kolkata, West Bengal, and Noida, Uttar Pradesh. Next Gear India Private Limited plays a vital role in Cotality's Product Development capabilities, focusing on creating and delivering innovative solutions for the Property & Casualty (P&C) Insurance and Property Restoration industries. While Next Gear India Private Limited operates under its own registered name in India, we are seamlessly integrated into the Cotality family, sharing the same commitment to innovation, quality, and client success. When you join Next Gear India Private Limited, you become part of the global Cotality team. Together, we shape the future of property insights and analytics, contributing to a smarter and more resilient property ecosystem through cutting-edge technology and insights. Company Description At CoreLogic, we are driven by a single mission—to make the property industry faster, smarter, and more people-centric. CoreLogic is the trusted source for property intelligence, with unmatched precision, depth, breadth, and insights across the entire ecosystem. Our talented team of 5,000 employees globally uses our network, scale, connectivity, and technology to drive the largest asset class in the world. Join us as we work toward our vision of fueling a thriving global property ecosystem and a more resilient society. CoreLogic is committed to cultivating a diverse and inclusive work culture that inspires innovation and bold thinking; it's a place where you can collaborate, feel valued, develop skills, and directly impact the insurance marketplace. We know our people are our greatest asset. At CoreLogic, you can be yourself, lift people up and make an impact. By putting clients first and continuously innovating, we're working together to set the pace for unlocking new possibilities that better serve the property insurance and restoration industry. Description We are seeking a highly skilled Lead Data Analyst to join our Analytics team to serve customers across the property insurance and restoration industries. As a Lead Data Analyst you will play a crucial role in developing methods and models to inform data-driven decision processes resulting in improved business performance for both internal and external stakeholder groups. You will be responsible for interpreting complex data sets and providing valuable insights to enhance the value of data assets. 
The successful candidate will have a strong understanding of data mining techniques, methods of statistical analysis, and data visualization tools. This position offers an exciting opportunity to work in a dynamic environment, collaborating with cross-functional teams to support decision processes that will guide the respective industries into the future. Responsibilities Collaborate with cross-functional teams to understand and document requirements for analytics products. Serve as the primary point of contact for new data/analytics requests and support for customers. Lead a team of analysts to deliver client deliverables on a timely manner. Act as the domain expert and voice of the customer to internal stakeholders during the analytics development process. Develop and maintain an inventory of data, reporting, and analytic product deliverables for assigned customers. Work with customer success teams to establish and maintain appropriate customer expectations for analytics deliverables. Create and manage tickets on behalf of customers within internal frameworks. Ensure timely delivery of assets to customers and aid in the development of internal processes for the delivery of analytics deliverables. Work with IT/Infrastructure teams to provide customer access to assets and support internal audit processes to ensure data security. Create and optimize complex SQL queries for data extraction, transformation, and aggregation. Develop and maintain data models, dashboards, and reports to visualize data and track key performance metrics. Conduct validation checks and implement error handling mechanisms to ensure data reliability. Collaborate closely with stakeholders to align project goals with business needs and perform ad-hoc analysis to provide actionable recommendations. Analyze large and complex datasets to identify trends, patterns, and insights, and present findings and recommendations to stakeholders in a clear and concise manner Job Qualifications: 7+ years’ property insurance experience preferred 5+ years’ experience in management of mid-level professional teams or similar leadership position with a focus on data and/or performance management. Extensive experience in applying and/or developing performance management metrics for claims organizations. Bachelor’s degree in computer science, data science, statistics, or a related field is preferred. Mastery level knowledge of data analysis tools such as Excel, Tableau or Power BI. Demonstrated expertise in Power BI creating reports and dashboards, including the ability to connect to various data sources, prepare and model data, and create visualizations. Expert knowledge of DAX for creating calculated columns and measures to meet report-specific requirements. Expert knowledge of Power Query for importing, transforming, and shaping data. Proficiency in SQL with the ability to write complex queries and optimize performance. Strong knowledge of ETL processes, data pipeline and automation a plus. Proficiency in managing tasks with Jira is an advantage. Strong analytical and problem-solving skills. Excellent attention to detail and the ability to work with large datasets. Effective communication skills, both written and verbal. Excellent visual communications and storytelling with data skills. Ability to work independently and collaborate in a team environment. 
Cotality's Diversity Commitment:
Cotality is fully committed to employing a diverse workforce and creating an inclusive work environment that embraces everyone's unique contributions, experiences and values. We offer an empowered work environment that encourages creativity, initiative and professional growth and provides a competitive salary and benefits package. We are better together when we support and recognize our differences.

Equal Opportunity Employer Statement:
Cotality is an Equal Opportunity employer committed to attracting and retaining the best-qualified people available, without regard to race, ancestry, place of origin, colour, ethnic origin, citizenship, creed, sex, sexual orientation, record of offences, age, marital status, family status or disability. Cotality maintains a Drug-Free Workplace. Please apply on our website for consideration.

Privacy Policy: Global Applicant Privacy Policy
By providing your telephone number, you agree to receive automated (SMS) text messages at that number from Cotality regarding all matters related to your application and, if you are hired, your employment and company business. Message & data rates may apply. You can opt out at any time by responding STOP or UNSUBSCRIBE and will automatically be opted out company-wide.

Posted 1 day ago

Apply

3.0 years

0 Lacs

India

Remote

Linkedin logo

Ready to be pushed beyond what you think you’re capable of? At Coinbase, our mission is to increase economic freedom in the world. It’s a massive, ambitious opportunity that demands the best of us, every day, as we build the emerging onchain platform — and with it, the future global financial system. To achieve our mission, we’re seeking a very specific candidate. We want someone who is passionate about our mission and who believes in the power of crypto and blockchain technology to update the financial system. We want someone who is eager to leave their mark on the world, who relishes the pressure and privilege of working with high caliber colleagues, and who actively seeks feedback to keep leveling up. We want someone who will run towards, not away from, solving the company’s hardest problems. Our work culture is intense and isn’t for everyone. But if you want to build the future alongside others who excel in their disciplines and expect the same from you, there’s no better place to be. While many roles at Coinbase are remote-first, we are not remote-only. In-person participation is required throughout the year. Team and company-wide offsites are held multiple times annually to foster collaboration, connection, and alignment. Attendance is expected and fully supported. Team Coinbase is seeking a software engineer to join our India pod to drive the launch and growth of Coinbase in India. You will solve unique, large-scale, highly complex technical problems. You will help build the next generation of systems to make cryptocurrency accessible to everyone across multiple platforms (web, iOS, Android), operating real-time applications with high frequency and low latency updates, keeping the platform safe from fraud, enabling delightful experiences, and managing the most secure, containerized infrastructure running in the cloud. What you’ll be doing (i.e., job duties): Build high-performance services using Golang and gRPC, creating seamless integrations that elevate Coinbase's customer experience. Adopt, learn, and drive best practices in design techniques, coding, testing, documentation, monitoring, and alerting. Demonstrate a keen awareness of Coinbase’s platform, development practices, and various technical domains, and build upon them to efficiently deliver improvements across multiple teams. Add positive energy in every meeting and make your coworkers feel included in every interaction. Communicate across the company to both technical and non-technical leaders with ease. Deliver top-quality services in a tight timeframe by navigating seamlessly through uncertainties. Work with teams and teammates across multiple time zones. What we look for in you (i.e., job requirements): 3+ years of experience as a software engineer and 1+ years building backend services using Golang and gRPC. A self-starter capable of executing complex solutions with minimal guidance while ensuring efficiency and scalability. Proven experience integrating at least two third-party applications using Golang. Hands-on experience with AWS, Kubernetes, Terraform, Buildkite, or similar cloud infrastructure tools. Working knowledge of event-driven architectures (Kafka, MQ, etc.) and hands-on experience with SQL or NoSQL databases. Good understanding of gRPC, GraphQL, ETL pipelines, and modern development practices. Nice to haves: SaaS platform experience (Salesforce, Amazon Connect, Sprinklr). Experience with AWS, Kubernetes, Terraform, GitHub Actions, or similar tools. 
Familiarity with rate limiters, caching, metrics, logging, and debugging. Req ID - GCBE04IN Please be advised that each candidate may submit a maximum of four applications within any 30-day period. We encourage you to carefully evaluate how your skills and interests align with Coinbase's roles before applying. Commitment to Equal Opportunity Coinbase is committed to diversity in its workforce and is proud to be an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, creed, gender, national origin, age, disability, veteran status, sex, gender expression or identity, sexual orientation or any other basis protected by applicable law. Coinbase will also consider for employment qualified applicants with criminal histories in a manner consistent with applicable federal, state and local law. For US applicants, you may view the Know Your Rights notice here . Additionally, Coinbase participates in the E-Verify program in certain locations, as required by law. Coinbase is also committed to providing reasonable accommodations to individuals with disabilities. If you need a reasonable accommodation because of a disability for any part of the employment process, please contact us at accommodations[at]coinbase.com to let us know the nature of your request and your contact information. For quick access to screen reading technology compatible with this site click here to download a free compatible screen reader (free step by step tutorial can be found here) . Global Data Privacy Notice for Job Candidates and Applicants Depending on your location, the General Data Protection Regulation (GDPR) and California Consumer Privacy Act (CCPA) may regulate the way we manage the data of job applicants. Our full notice outlining how data will be processed as part of the application procedure for applicable locations is available here. By submitting your application, you are agreeing to our use and processing of your data as required. For US applicants only, by submitting your application you are agreeing to arbitration of disputes as outlined here. Show more Show less

Posted 1 day ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

Job Summary: We are seeking a highly skilled and experienced Data Scientist with a deep understanding of data analytics powered by artificial intelligence (AI) tools. The ideal candidate will be passionate about turning data into actionable insights using cutting-edge AI platforms, automation techniques, and advanced statistical methods. Key Responsibilities: Develop and deploy scalable AI-powered data analytics solutions for business intelligence, forecasting, and optimization. Leverage AI tools to automate data cleansing, feature engineering, model building, and visualization. Design and conduct advanced statistical analyses and machine learning models (supervised, unsupervised, NLP, etc.). Collaborate cross-functionally with engineering and business teams to drive data-first decision-making. Must-Have Skills & Qualifications: Minimum 4 years of professional experience in data science, analytics, or a related field. Proficiency in Python and/or R with strong hands-on experience in ML libraries (scikit-learn, XGBoost, TensorFlow, etc.). Expert knowledge of SQL and working with relational databases. Proven experience with data wrangling, data pipelines, and ETL processes. Deep Understanding of AI Tools for Data Analytics (Experience with several of the following is required): Data Preparation & Automation: Alteryx, Trifacta, KNIME AI/ML Platforms: DataRobot, H2O.ai, Amazon SageMaker, Azure ML Studio, Google Vertex AI Visualization & BI: Tableau, Power BI, Looker (with AI/ML integrations) AutoML & Predictive Modeling: Google AutoML, IBM Watson Studio, BigML NLP & Text Analytics: OpenAI (ChatGPT, Codex APIs), Hugging Face Transformers, MonkeyLearn Workflow Orchestration: Apache Airflow, Prefect Preferred Qualifications: Degree in Computer Science, Data Science, Statistics, or related field. Experience in cloud-based environments (AWS, GCP, Azure) for ML workloads. To apply, please send your resume to sooraj@superpe.in or shreya@superpe.in SuperPe is an equal opportunity employer and welcomes candidates of all backgrounds to apply. We look forward to hearing from you! Show more Show less

Posted 1 day ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

Summary As a Java Developer for the Data and Analytics team, you will work within a Professional Services team to support our customer’s data migrations from legacy systems to Guidewire Cloud. You will also support the development of new tooling and methodology to streamline our migration process. Job Description You will work with our customers, partners, and other Guidewire team members to deliver successful migration programs utilizing our custom migration tools. You will utilize best practices for design, development and delivery of customer projects. You will share knowledge with the wider Guidewire Data and Analytics teams. One of our principles is to have fun while we deliver, so this role will need to keep the delivery process fun and engaging for the team in collaboration with the broader organization. Given the dynamic nature of the work in the Data and Analytics team, we are looking for decisive, highly-skilled technical problem solvers who can bring their array of experience working in previous Migration roles. You will cooperate closely with teams located around the world. Key Responsibilities You will deliver data migration projects for our customers accurately and on time You will work with the broader Guidewire data team to improve our internal processes and methodology You will participate in the creation of new tooling to support and streamline our data migration projects when called upon or when the opportunity presents itself You are a systematic problem-solver who takes ownership of your projects and does not shy away from the hard problems. You are driven to success and accept nothing less from yourself. You consistently display the ability to work independently in a fast-paced Agile environment. Flexibility to do shift work as needed (aligning to AMER/APAC colleagues/customers). Qualifications Bachelor’s or Master’s Degree in Computer Science, or equivalent level of demonstrable professional competency, and 3-5 years + in delivery type role Development experience using Java (or other Object-Oriented language) preferred Experience developing and deploying production REST APIs Familiarity with data processing and ETL (Extract, Transform, Load) concepts. Experience working with relational and/or NoSQL databases Proficiency in SQL, Data Modeling, ETL/ELT, and cloud computing skills. Experience working with customer teams to understand business objectives and functional requirements. Effective leadership, interpersonal, and communication skills. Ability to work independently and within a team. Nice To Have Insurance industry experience Experience with the Guidewire InsuranceSuite Guidewire ACE Certification Experience in Data Migration About Guidewire Guidewire is the platform P&C insurers trust to engage, innovate, and grow efficiently. We combine digital, core, analytics, and AI to deliver our platform as a cloud service. More than 540+ insurers in 40 countries, from new ventures to the largest and most complex in the world, run on Guidewire. As a partner to our customers, we continually evolve to enable their success. We are proud of our unparalleled implementation track record with 1600+ successful projects, supported by the largest R&D team and partner ecosystem in the industry. Our Marketplace provides hundreds of applications that accelerate integration, localization, and innovation. For more information, please visit www.guidewire.com and follow us on Twitter: @Guidewire_PandC. Guidewire Software, Inc. 
is proud to be an equal opportunity and affirmative action employer. We are committed to an inclusive workplace, and believe that a diversity of perspectives, abilities, and cultures is a key to our success. Qualified applicants will receive consideration without regard to race, color, ancestry, religion, sex, national origin, citizenship, marital status, age, sexual orientation, gender identity, gender expression, veteran status, or disability. All offers are contingent upon passing a criminal history and other background checks where it's applicable to the position. Show more Show less

Posted 1 day ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Linkedin logo

Overview

The primary focus of this role is development work within the Azure Data Lake environment and other related ETL technologies, with responsibility for ensuring on-time and on-budget delivery that satisfies project requirements while adhering to enterprise architecture standards. This role will also have L3 responsibilities for ETL processes.

Responsibilities
- Delivery of key Azure Data Lake projects within time and budget.
- Contribute to solution design and build to ensure scalability, performance and reuse of data and other components.
- Ensure on-time and on-budget delivery which satisfies project requirements, while adhering to enterprise architecture standards.
- Possess strong problem-solving abilities with a focus on managing to business outcomes through collaboration with multiple internal and external parties.
- Enthusiastic, willing and able to learn and continuously develop skills and techniques; enjoys change and seeks continuous improvement.
- A clear communicator, both written and verbal, with good presentational skills; fluent and proficient in the English language.
- Customer focused and a team player.

Qualifications
- Bachelor's degree in Computer Science, MIS, Business Management, or related field.
- 5+ years' experience in Information Technology.
- 4+ years' experience in Azure Data Lake.

Technical Skills:
- Proven experience of development activities in Data, BI or Analytics projects.
- Solutions delivery experience: knowledge of system development lifecycle, integration, and sustainability.
- Strong knowledge of PySpark and SQL.
- Good knowledge of Azure Data Factory or Databricks.
- Knowledge of Presto / Denodo is desirable.
- Knowledge of FMCG business processes is desirable.

Non-Technical Skills:
- Excellent remote collaboration skills.
- Experience working in a matrix organization with diverse priorities.
- Exceptional written and verbal communication skills along with collaboration and listening skills.
- Ability to work with agile delivery methodologies.
- Ability to ideate requirements and design iteratively with business partners without formal requirements documentation.
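For readers less familiar with this stack, the following is a minimal, purely illustrative PySpark sketch of the kind of data lake ETL work the role above describes; the paths, column names, and transformations are assumptions made for demonstration and are not taken from the posting.

```python
# Minimal, illustrative PySpark ETL sketch: hypothetical paths and columns.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sales_etl_example").getOrCreate()

# Extract: read raw CSV files landed in the data lake (path is hypothetical)
raw = spark.read.csv("/mnt/datalake/raw/sales/*.csv", header=True, inferSchema=True)

# Transform: basic cleansing and derived columns
clean = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("quantity") > 0)
       .withColumn("order_date", F.to_date("order_date"))
       .withColumn("net_amount", F.col("quantity") * F.col("unit_price"))
)

# Load: write curated data as partitioned Parquet for downstream consumers
clean.write.mode("overwrite").partitionBy("order_date").parquet("/mnt/datalake/curated/sales/")
```

In an Azure Data Factory or Databricks setup, a job of this shape would typically be scheduled and parameterized rather than run with hard-coded paths.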

Posted 1 day ago

Apply

3.0 - 6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

About The Role Grade Level (for internal use): 09 The Role: As a Software Developer with the Data & Research Development team, you will be responsible for developing & providing backend support across a variety of products within the Market Intelligence platform. Together, you will build scalable and robust solutions using AGILE development methodologies with a focus on high availability to end users. The Team: Do you love to collaborate & provide solutions? This team comes together across eight different locations every single day to craft enterprise grade applications that serve a large customer base with growing demand and usage. You will use a wide range of technologies and cultivate a collaborative environment with other internal teams. The Impact: We focus primarily developing, enhancing and delivering required pieces of information & functionality to internal & external clients in all client-facing applications. You will have a highly visible role where even small changes have very wide impact. What’s in it for you? Opportunities for innovation and learning new state of the art technologies To work in pure agile & scrum methodology Responsibilities Deliver solutions within a multi-functional Agile team Develop expertise in our proprietary enterprise software products Set and maintain a level of excitement in using various technologies to develop, support, and iteratively deploy real enterprise level software Achieve an understanding of customer environments and their use of the products Build solutions architecture, algorithms, and designs for solutions that scale to the customer's enterprise/global requirements Apply software engineering practices and implement automation across all elements of solution delivery Basic Qualifications What we’re looking for: 3-6 years of desktop application development experience with deep understanding of Design Patterns & Object-oriented programming. Hands on development experience using C#, .Net 4.0/4.5, WPF, Asp.net, SQL server. Strong OOP and Service Oriented Architecture (SOA) knowledge. Strong understanding of cloud applications (Containers, Dockers etc.) and exposure to data ETL will be a plus. Ability to resolve serious performance related issues through various techniques, including testing, debugging and profiling. Strong problem solving, analytical and communication skills. Possess a true “roll up the sleeves and get it done” working approach; demonstrated success as a problem solver, operating as a client-focused self-starter. Preferred Qualifications Bachelor's degree in computer science or computer engineering About S&P Global Market Intelligence At S&P Global Market Intelligence, a division of S&P Global we understand the importance of accurate, deep and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction. For more information, visit www.spglobal.com/marketintelligence. What’s In It For You? Our Purpose Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. 
At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our Benefits Include Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring And Opportunity At S&P Global At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. 
US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf 20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning) Job ID: 313152 Posted On: 2025-05-05 Location: Hyderabad, Telangana, India Show more Show less

Posted 1 day ago

Apply

12.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Linkedin logo

Overview

Deputy Director - Data Engineering

PepsiCo operates in an environment undergoing immense and rapid change. Big-data and digital technologies are driving business transformation that is unlocking new capabilities and business innovations in areas like eCommerce, mobile experiences and IoT. The key to winning in these areas is being able to leverage enterprise data foundations built on PepsiCo's global business scale to enable business insights, advanced analytics, and new product development. PepsiCo's Data Management and Operations team is tasked with the responsibility of developing quality data collection processes, maintaining the integrity of our data foundations, and enabling business leaders and data scientists across the company to have rapid access to the data they need for decision-making and innovation.

What PepsiCo Data Management and Operations does:
- Maintain a predictable, transparent, global operating rhythm that ensures always-on access to high-quality data for stakeholders across the company.
- Responsible for day-to-day data collection, transportation, maintenance/curation, and access to the PepsiCo corporate data asset.
- Work cross-functionally across the enterprise to centralize data and standardize it for use by business, data science or other stakeholders.
- Increase awareness about available data and democratize access to it across the company.

As a data engineering lead, you will be the key technical expert overseeing PepsiCo's data product build & operations and drive a strong vision for how data engineering can proactively create a positive impact on the business. You'll be empowered to create & lead a strong team of data engineers who build data pipelines into various source systems, rest data on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. As a member of the data engineering team, you will help lead the development of very large and complex data applications into public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. You will work closely with process owners, product owners and business users. You'll be working in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems.

Responsibilities

Data engineering lead role for D&Ai data modernization (MDIP). Ideally, the candidate must be flexible to work an alternative schedule: either a traditional work week from Monday to Friday, or Tuesday to Saturday, or Sunday to Thursday, depending upon coverage requirements of the job. The candidate can work with the immediate supervisor to change the work schedule on a rotational basis depending on the product and project requirements.

- Manage a team of data engineers and data analysts by delegating project responsibilities and managing their flow of work, as well as empowering them to realize their full potential.
- Design, structure and store data into unified data models and link them together to make the data reusable for downstream products.
- Manage and scale data pipelines from internal and external data sources to support new product launches and drive data quality across data products.
- Create reusable accelerators and solutions to migrate data from legacy data warehouse platforms such as Teradata to Azure Databricks and Azure SQL.
- Enable and accelerate standards-based development, prioritizing reuse of code; adopt test-driven development, unit testing and test automation with end-to-end observability of data.
- Build and own the automation and monitoring frameworks that capture metrics and operational KPIs for data pipeline quality, performance and cost.
- Collaborate with internal clients (product teams, sector leads, data science teams) and external partners (SI partners/data providers) to drive solutioning and clarify solution requirements.
- Evolve the architectural capabilities and maturity of the data platform by engaging with enterprise architects to build and support the right domain architecture for each application following well-architected design standards.
- Define and manage SLAs for data products and processes running in production.
- Create documentation for learnings and knowledge transfer to internal associates.

Qualifications

12+ years of engineering and data management experience, including:
- 12+ years of overall technology experience that includes at least 5+ years of hands-on software development, data engineering, and systems architecture.
- 8+ years of experience with Data Lakehouse, Data Warehousing, and Data Analytics tools.
- 6+ years of experience in SQL optimization and performance tuning on MS SQL Server, Azure SQL or any other popular RDBMS.
- 6+ years of experience in Python/PySpark/Scala programming on big data platforms like Databricks.
- 4+ years of cloud data engineering experience in Azure or AWS. Fluent with Azure cloud services; Azure Data Engineering certification is a plus.
- Experience with integration of multi-cloud services with on-premises technologies.
- Experience with data modelling, data warehousing, and building high-volume ETL/ELT pipelines.
- Experience with data profiling and data quality tools like Great Expectations.
- Experience building/operating highly available, distributed systems of data extraction, ingestion, and processing of large data sets.
- Experience with at least one business intelligence tool such as Power BI or Tableau.
- Experience with running and scaling applications on cloud infrastructure and containerized services like Kubernetes.
- Experience with version control systems like ADO, GitHub and CI/CD tools for DevOps automation and deployments.
- Experience with Azure Data Factory, Azure Databricks and Azure Machine Learning tools.
- Experience with Statistical/ML techniques is a plus.
- Experience with building solutions in the retail or in the supply chain space is a plus.
- Understanding of metadata management, data lineage, and data glossaries is a plus.
- BA/BS in Computer Science, Math, Physics, or other technical fields.
- Candidates must be flexible to work an alternative work schedule: either a traditional work week from Monday to Friday, or Tuesday to Saturday, or Sunday to Thursday, depending upon product and project coverage requirements of the job.
- Candidates are expected to be in the office at the assigned location at least 3 days a week, and the days at work need to be coordinated with the immediate supervisor.

Skills, Abilities, Knowledge:
- Excellent communication skills, both verbal and written, along with the ability to influence and demonstrate confidence in communications with senior-level management.
- Proven track record of leading and mentoring data teams.
- Strong change manager; comfortable with change, especially that which arises through company growth.
- Ability to understand and translate business requirements into data and technical requirements.
- High degree of organization and ability to manage multiple, competing projects and priorities simultaneously.
- Positive and flexible attitude to enable adjusting to different needs in an ever-changing environment.
- Strong leadership, organizational and interpersonal skills; comfortable managing trade-offs.
- Foster a team culture of accountability, communication, and self-management.
- Proactively drives impact and engagement while bringing others along.
- Consistently attain/exceed individual and team goals.
- Ability to lead others without direct authority in a matrixed environment.
- Comfortable working in a hybrid environment with teams consisting of contractors as well as FTEs spread across multiple PepsiCo locations.
- Domain knowledge in the CPG industry with a supply chain/GTM background is preferred.

Posted 1 day ago

Apply

15.0 years

0 Lacs

Kochi, Kerala, India

On-site

Linkedin logo

Introduction

Joining the IBM Technology Expert Labs teams means you'll have a career delivering world-class services for our clients. As the ultimate expert in IBM products, you'll bring together all the necessary technology and services to help customers solve their most challenging problems. Working in IBM Technology Expert Labs means accelerating the time to value confidently and ensuring speed and insight while our clients focus on what they do best—running and growing their business. Excellent onboarding and an industry-leading learning culture will set you up for a positive impact, while advancing your career. Our culture is collaborative and experiential. As part of a team, you will be surrounded by bright minds and keen co-creators—always willing to help and be helped—as you apply passion to work that will positively impact the world around us.

Your Role And Responsibilities

This candidate is responsible for:
- DB2 installation and configuration on the following environments: on-premises, multi-cloud, Red Hat OpenShift cluster, HADR, non-DPF and DPF.
- Migration of other databases to Db2 (e.g., Teradata, Snowflake, SAP or Cloudera to Db2 migration).
- Create high-level designs and detail-level designs, and maintain product roadmaps that include both modernization and leveraging cloud solutions.
- Design scalable, performant, and cost-effective data architectures within the Lakehouse to support diverse workloads, including reporting, analytics, data science, and AI/ML.
- Perform health checks of the databases, make recommendations and deliver tuning at the database and system level.
- Deploy DB2 databases as containers within Red Hat OpenShift clusters.
- Configure containerized database instances, persistent storage, and network settings to optimize performance and reliability.
- Lead the architectural design and implementation of solutions on IBM watsonx.data, ensuring alignment with overall enterprise data strategy and business objectives.
- Define and optimize the watsonx.data ecosystem, including integration with other IBM watsonx components (watsonx.ai, watsonx.governance) and existing data infrastructure (DB2, Netezza, cloud data sources).
- Establish best practices for data modeling, schema evolution, and data organization within the watsonx.data lakehouse.
- Act as a subject matter expert on Lakehouse architecture, providing technical leadership and guidance to data engineering, analytics, and development teams.
- Mentor junior architects and engineers, fostering their growth and knowledge in modern data platforms.
- Participate in the development of architecture governance processes and promote best practices across the organization.
- Communicate complex technical concepts to both technical and non-technical stakeholders.

Required Technical And Professional Expertise
- 15+ years of experience in data architecture, data engineering, or a similar role, with significant hands-on experience in cloud data platforms.
- Strong proficiency in DB2, SQL and Python.
- Strong understanding of: database design and modelling (dimensional, normalized, NoSQL schemas); normalization and indexing; data warehousing and ETL processes; cloud platforms (AWS, Azure, GCP); big data technologies (e.g., Hadoop, Spark).
- Database migration project experience from one database to another database (target database Db2).
- Experience in deploying DB2 databases as containers within Red Hat OpenShift clusters and configuring containerized database instances, persistent storage, and network settings to optimize performance and reliability.
- Excellent communication, collaboration, problem-solving, and leadership skills.

Preferred Technical And Professional Experience
- Experience with machine learning environments and LLMs.
- Certification in IBM watsonx.data or related IBM data and AI technologies.
- Hands-on experience with a Lakehouse platform (e.g., Databricks, Snowflake).
- Exposure to implementing, or an understanding of, DB replication processes.
- Experience with integrating watsonx.data with GenAI or LLM initiatives (e.g., RAG architectures).
- Experience with NoSQL databases (e.g., MongoDB, Cassandra).
- Experience with data modeling tools (e.g., ER/Studio, ERwin).
- Knowledge of data governance and compliance standards (e.g., GDPR, HIPAA).
- Strong soft skills.

Posted 1 day ago

Apply

Exploring ETL Jobs in India

The ETL (Extract, Transform, Load) job market in India is thriving with numerous opportunities for job seekers. ETL professionals play a crucial role in managing and analyzing data effectively for organizations across various industries. If you are considering a career in ETL, this article will provide you with valuable insights into the job market in India.

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Pune
  4. Hyderabad
  5. Chennai

These cities are known for their thriving tech industries and often have a high demand for ETL professionals.

Average Salary Range

The average salary range for ETL professionals in India varies based on experience levels. Entry-level positions typically start at around ₹3-5 lakhs per annum, while experienced professionals can earn upwards of ₹10-15 lakhs per annum.

Career Path

In the ETL field, a typical career path may include roles such as:

  1. Junior ETL Developer
  2. ETL Developer
  3. Senior ETL Developer
  4. ETL Tech Lead
  5. ETL Architect

As you gain experience and expertise, you can progress to higher-level roles within the ETL domain.

Related Skills

Alongside ETL, professionals in this field are often expected to have skills in:

  • SQL
  • Data Warehousing
  • Data Modeling
  • ETL Tools (e.g., Informatica, Talend)
  • Database Management Systems (e.g., Oracle, SQL Server)

Having a strong foundation in these related skills can enhance your capabilities as an ETL professional.
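
To make these skills concrete, here is a minimal, self-contained Python sketch of one of the most common ETL patterns: extracting only new or changed rows from a source table and upserting them into a warehouse-style target. It uses SQLite from the standard library purely so the example runs anywhere; the table names, columns, and watermark value are illustrative assumptions, not tied to any particular tool listed above.

```python
# Illustrative incremental-load (upsert) pattern using only the standard library.
# Table and column names are hypothetical; dedicated ETL tools wrap the same idea.
# Note: the ON CONFLICT upsert syntax needs SQLite 3.24+ (bundled with modern Python).
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Source (operational) table and target (warehouse) table
cur.execute("CREATE TABLE src_orders (order_id INTEGER PRIMARY KEY, amount REAL, updated_at TEXT)")
cur.execute("CREATE TABLE dw_orders  (order_id INTEGER PRIMARY KEY, amount REAL, updated_at TEXT)")
cur.executemany(
    "INSERT INTO src_orders VALUES (?, ?, ?)",
    [(1, 100.0, "2024-01-01"), (2, 250.0, "2024-01-02")],
)

# Extract: pick up only rows newer than the last load (the incremental "watermark")
last_loaded = "2024-01-01"
new_rows = cur.execute(
    "SELECT order_id, amount, updated_at FROM src_orders WHERE updated_at > ?",
    (last_loaded,),
).fetchall()

# Load: insert new keys, update existing ones (an upsert / merge)
cur.executemany(
    """
    INSERT INTO dw_orders (order_id, amount, updated_at)
    VALUES (?, ?, ?)
    ON CONFLICT(order_id) DO UPDATE SET
        amount = excluded.amount,
        updated_at = excluded.updated_at
    """,
    new_rows,
)
conn.commit()
print(cur.execute("SELECT * FROM dw_orders").fetchall())  # only the new row: [(2, 250.0, '2024-01-02')]
```

In interviews, this pattern often comes up under the names incremental load, delta load, or merge/upsert; production pipelines typically persist the watermark between runs instead of hard-coding it.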

Interview Questions

Here are 25 interview questions that you may encounter in ETL job interviews:

  • What is ETL and why is it important? (basic)
  • Explain the difference between ETL and ELT processes. (medium)
  • How do you handle incremental loads in ETL processes? (medium)
  • What is a surrogate key in the context of ETL? (basic)
  • Can you explain the concept of data profiling in ETL? (medium)
  • How do you handle data quality issues in ETL processes? (medium)
  • What are some common ETL tools you have worked with? (basic)
  • Explain the difference between a full load and an incremental load. (basic)
  • How do you optimize ETL processes for performance? (medium)
  • Can you describe a challenging ETL project you worked on and how you overcame obstacles? (advanced)
  • What is the significance of data cleansing in ETL? (basic)
  • How do you ensure data security and compliance in ETL processes? (medium)
  • Have you worked with real-time data integration in ETL? If so, how did you approach it? (advanced)
  • What are the key components of an ETL architecture? (basic)
  • How do you handle data transformation requirements in ETL processes? (medium)
  • What are some best practices for ETL development? (medium)
  • Can you explain the concept of change data capture in ETL? (medium)
  • How do you troubleshoot ETL job failures? (medium)
  • What role does metadata play in ETL processes? (basic)
  • How do you handle complex transformations in ETL processes? (medium)
  • What is the importance of data lineage in ETL? (basic)
  • Have you worked with parallel processing in ETL? If so, explain your experience. (advanced)
  • How do you ensure data consistency across different ETL jobs? (medium)
  • Can you explain the concept of slowly changing dimensions in ETL? (medium; see the sketch after this list)
  • How do you document ETL processes for knowledge sharing and future reference? (basic)
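
The slowly changing dimensions question above comes up frequently, so below is a small, purely illustrative Python sketch of the Type 2 pattern, in which history is preserved by closing the current record and appending a new version. The record layout and dates are hypothetical.

```python
# Illustrative Type 2 slowly changing dimension (SCD2) update in plain Python.
# Old versions are closed out and new versions appended, so full history is kept.
# The dimension layout and dates below are hypothetical.
from datetime import date

# Current state of a customer dimension: a list of versioned records
dim_customer = [
    {"customer_id": 42, "city": "Pune", "valid_from": date(2023, 1, 1),
     "valid_to": None, "is_current": True},
]

def apply_scd2_change(dim, customer_id, new_city, change_date):
    """Close the current record and append a new version if the tracked attribute changed."""
    for row in dim:
        if row["customer_id"] == customer_id and row["is_current"]:
            if row["city"] == new_city:
                return  # no change, nothing to do
            row["valid_to"] = change_date
            row["is_current"] = False
    dim.append({"customer_id": customer_id, "city": new_city,
                "valid_from": change_date, "valid_to": None, "is_current": True})

apply_scd2_change(dim_customer, 42, "Bengaluru", date(2024, 6, 1))
for row in dim_customer:
    print(row)
# Two rows remain: the closed "Pune" version and the current "Bengaluru" version.
```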

Closing Remarks

As you explore ETL jobs in India, remember to showcase your skills and expertise confidently during interviews. With the right preparation and a solid understanding of ETL concepts, you can embark on a rewarding career in this dynamic field. Good luck with your job search!

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies