
1529 Talend Jobs - Page 29

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

4.0 - 9.0 years

14 - 18 Lacs

Bengaluru

Work from Office

About the Role: Platform Product Owner - Data Pipelines

We're looking for a product-driven, data-savvy Platform Product Owner to lead the evolution of Hevo's Data Pipelines Platform. This role blends strategic product thinking with operational excellence and offers full ownership, from defining product outcomes to driving delivery health and platform reliability. You'll work closely with Engineering, Architecture, and cross-functional teams to shape the platform roadmap, define user value, and ensure successful outcomes through measurable impact. If you're passionate about building scalable, high-impact data products and excel at balancing strategy with execution, this role is for you.

Key Responsibilities:

Product Ownership & Strategy: Define and evolve the product vision and roadmap in collaboration with Product Leadership. Translate vision into a value-driven, structured product backlog focused on scalability, reliability, and user outcomes. Craft clear user stories with well-defined acceptance criteria and success metrics. Partner with Engineering and Architecture to design and iterate on platform capabilities aligned with long-term strategy. Analyze competitive products to identify experience gaps, technical differentiators, and new opportunities. Ensure platform capabilities deliver consistent value to internal teams and end users.

Product Operations & Delivery Insights: Define and track key product health metrics (e.g., uptime, throughput, SLA adherence, adoption). Foster a metrics-first culture in product delivery, ensuring every backlog item ties to measurable outcomes. Triage bugs and feature requests, assess impact, and feed insights into prioritization and planning. Define post-release success metrics and establish feedback loops to evaluate feature adoption and performance. Build dashboards and reporting frameworks to increase visibility into product readiness, velocity, and operations. Improve practices around backlog hygiene, estimation accuracy, and story lifecycle management. Ensure excellence in release planning and launch execution to meet quality and scalability benchmarks.

Collaboration & Communication: Champion the product vision and user needs across all stages of development. Collaborate with Support, Customer Success, and Product Marketing to ensure customer insights inform product direction. Develop enablement materials (e.g., internal walkthroughs, release notes) to support go-to-market and support teams. Drive alignment and accountability throughout the product lifecycle, from planning to post-release evaluation.

Qualifications:

Required: Bachelor's degree in Computer Science or a related engineering field. 5+ years of experience as a Product Manager/Product Owner, with time spent on platform/infrastructure products at B2B startups. Hands-on experience with ETL tools or modern data platforms (e.g., Talend, Informatica, AWS Glue, Snowflake, BigQuery, Redshift, Databricks). Strong understanding of the product lifecycle with an operations-focused mindset. Proven ability to collaborate with engineering teams to build scalable, reliable features. Familiarity with data integration, APIs, connectors, and streaming/real-time data pipelines. Analytical mindset with experience tracking KPIs and making data-informed decisions. Excellent communication and cross-functional collaboration skills. Proficiency with agile product development tools (e.g., Jira, Aha!, Linear).

Preferred: Experience in a data-intensive environment. Engineer-turned-Product Manager with a hands-on technical background.
MBA from a Tier-1 institute.

Posted 1 month ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

When you join Verizon You want more out of a career. A place to share your ideas freely — even if they’re daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife. At our core, we are dedicated to enriching lives by bridging the gap between individuals and premium wireless experiences that not only meet but exceed expectations in value and quality. We believe that everyone deserves access to seamless, reliable, and affordable wireless solutions that enhance their day-to-day lives, connecting them to what matters most. By joining our team, you'll play a pivotal role in this mission, working towards delivering innovative, customer-focused solutions that open up a world of possibilities. We're not just in the business of technology; we're in the business of connecting people, empowering them to explore, share, and engage with the world around them in ways they never thought possible. Building on our commitment to connect people with quality experiences that offer the best value in wireless, let's delve deeper into how we strategically position our diverse portfolio to cater to a broad spectrum of needs and preferences. Our portfolio, comprising 11 distinct brands, is meticulously organized into five families, each designed to address specific market segments and distribution channels to maximize reach and impact. Total by Verizon & Verizon Prepaid: At the forefront, we have Total by Verizon and Verizon Prepaid, our flagship brands available at Verizon exclusive and/or national/retail stores. Verizon Prepaid continues to maintain a robust and loyal consumer base, while Total by Verizon is on a rapid ascent, capturing the hearts of more customers with its compelling offerings. Straight Talk, TracFone, and Walmart Family Mobile: Straight Talk, Tracfone, and Walmart Family Mobile stand as giants in our brand portfolio, boasting significant presence in Walmart. Their extensive reach and solidified position in the market underscore our commitment to accessible, high-quality wireless solutions across diverse retail environments. Visible: Visible, as a standalone brand family, caters to the digitally-savvy, single-line customers who prefer streamlined, online-first interactions. This brand is a testament to our adaptability, embracing the digital evolution of customer engagement. Simple Mobile: Carving out a niche of its own, Simple Mobile shines as the premier choice among authorized resellers. Its consistent recognition as the most carried brand in Wave7 Research’s prepaid dealer survey for 36 consecutive quarters speaks volumes about its popularity and reliability. SafeLink: SafeLink remains dedicated to serving customers through government subsidies. With a strategic pivot towards Lifeline in the absence of ACP, SafeLink continues to fulfill its mission of providing essential communication services to those in need. Join the team that connects people with quality experiences that give them the best value in wireless. 
What You’ll Be Doing: Identifying macro trends, explaining drivers behind favorability/unfavorability to targets, and helping build narratives around disconnect/revenue performance. Assisting in how the Value Base Management team tracks and forecasts customer behavior that will result in a prepaid phone disconnect across all brands, as well as revenue growth. Contributing to the production of the Value Disconnects forecast for Best View (monthly), Line-of-Sight (weekly), Outlook (quarterly), and Long Range Plan (annually). Contributing to the production of the Revenue Growth forecast, Step Rations, Plan Mix, and Add-on revenue. Developing and streamlining consolidation of forecast models to produce executive-friendly slides focused on disconnect drivers. Familiarizing yourself with Value’s data infrastructure and the various data reporting sources the team leverages. Tracking and investigating actual vs. forecast variances to determine variance drivers. Iterating on process improvements and automations that help us become more nimble, proficient, and data-driven in our decision-making. Integrating data from multiple sources into timely, accessible, and relevant reports, ensuring data quality and reliability. Assisting with ad-hoc projects and requests from senior leaders. What We’re Looking For... You’ll need to have: Bachelor's degree or four or more years of work experience. Four or more years of relevant work experience. Experience working with complex data structures. Experience utilizing query tools such as SQL. Experience with forecasting data visualization tools such as Tableau, Qlik, or Looker. Experience with streamlining and automation in Google Sheets and Microsoft Excel. Experience with large data sets in Google Sheets and Microsoft Excel. Data analytics experience. Even better if you have one or more of the following: Advanced MS Excel skills with a deep understanding of model architecture, formula efficiency, pivot tables, and macros. Experience with forecasting data visualization tools such as Tableau, Qlik, or Looker. Experience with ETL tools (KNIME, Talend, SSIS, etc.). Data mining experience. Data modeling experience. Data science background. #VALUESNONCDIO Where you’ll be working: In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager. Scheduled Weekly Hours: 40. Equal Employment Opportunity: Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability, or any other legally protected characteristics.

Posted 1 month ago

Apply

3.0 - 6.0 years

7 - 11 Lacs

Bengaluru

Work from Office

We are looking for a skilled Data Engineer with 3 to 6 years of experience building data processing pipelines using Databricks, PySpark, and SQL on cloud platforms like AWS. The ideal candidate should have hands-on experience with Databricks, Spark, SQL, and the AWS Cloud platform, especially S3, EMR, Databricks, Cloudera, etc. Roles and Responsibilities: Design and develop large-scale data pipelines using Databricks, Spark, and SQL. Optimize data operations using Databricks and Python. Develop solutions to meet business needs reflecting a clear understanding of the objectives, practices, and procedures of the corporation, department, and business unit. Evaluate alternative risks and solutions before taking action. Utilize all available resources efficiently. Collaborate with cross-functional teams to achieve business goals. Job Requirements: Experience working in projects involving data engineering and processing. Proficiency in large-scale data operations using Databricks and overall comfort with Python. Familiarity with AWS compute, storage, and IAM concepts. Experience with S3 Data Lake as the storage tier. ETL background with Talend or AWS Glue is a plus. Cloud Warehouse experience with Snowflake is a huge plus. Strong analytical and problem-solving skills. Relevant experience with ETL methods and retrieving data from dimensional data models and data warehouses. Strong experience with relational databases and data access methods, especially SQL. Excellent collaboration and cross-functional leadership skills. Excellent communication skills, both written and verbal. Ability to manage multiple initiatives and priorities in a fast-paced, collaborative environment. Ability to leverage data assets to respond to complex questions that require timely answers. Working knowledge of migrating relational and dimensional databases on the AWS Cloud platform.
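For illustration only (not part of the posting), a minimal sketch of the kind of PySpark batch job this role describes, reading raw files from S3 and writing curated, partitioned output, might look like the following; the bucket paths and column names are hypothetical placeholders.

```python
from pyspark.sql import SparkSession, functions as F

# Hypothetical daily batch load; paths and columns are invented for the sketch.
spark = SparkSession.builder.appName("orders_daily_load").getOrCreate()

raw = (spark.read
       .option("header", True)
       .csv("s3://example-raw-zone/orders/2025-06-01/"))

curated = (raw
           .withColumn("order_ts", F.to_timestamp("order_ts"))
           .withColumn("amount", F.col("amount").cast("double"))
           .filter(F.col("amount") > 0)                    # drop invalid rows
           .withColumn("order_date", F.to_date("order_ts")))

(curated.write
 .mode("overwrite")
 .partitionBy("order_date")
 .parquet("s3://example-curated-zone/orders/"))
```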

Posted 1 month ago

Apply

5.0 - 7.0 years

4 - 7 Lacs

Noida

Work from Office

We are looking for a skilled Informatica MDM professional with 5 to 7 years of experience. The ideal candidate will have expertise in defining data models and architectures, configuring MDM solutions, and designing and developing BES UI. Roles and Responsibilities: Define data models and architecture for MDM solutions. Configure MDM (Base Objects, staging tables, Match & Merge rules, Hierarchies, Relationship objects). Design and develop BES UI. Design and develop C360 applications for data stewards according to client needs. Define data migration processes from legacy systems during M&A activities to MDM systems. Support and maintain MDM applications. Job Requirements: Minimum 5 years of experience in Informatica MDM. Strong knowledge of data modeling and architecture. Experience in configuring MDM solutions and designing BES UI. Ability to define data migration processes. Strong understanding of data stewardship concepts. Excellent problem-solving skills and attention to detail.

Posted 1 month ago

Apply

2.0 - 5.0 years

8 - 12 Lacs

Bengaluru

Work from Office

The Data Engineer will report to the Data Engineering Manager and play a crucial role in designing, building, and maintaining scalable data pipelines within Kaseya. You will be responsible for ensuring data is readily available, accurate, and optimized for analytics and strategic decision-making. Required Qualifications: Bachelor's degree (or equivalent) in Computer Science, Engineering, or a related field. 2+ years of experience in data engineering or a related role. Proficient in SQL and at least one programming language (Python, Scala, or Java). Hands-on experience with data integration/ETL tools (e.g., Matillion, Talend, Airflow). Familiarity with modern cloud data warehouses (Snowflake, Redshift, or BigQuery). Strong problem-solving skills and attention to detail. Excellent communication and team collaboration skills. Ability to work in a fast-paced, high-growth environment. Roles & Responsibilities: Design and Develop ETL Pipelines: Create high-performance data ingestion and transformation processes, leveraging tools like Matillion, Airflow, or similar. Implement Data Lake and Warehouse Solutions: Develop and optimize data warehouses/lakes (Snowflake, Redshift, BigQuery, or Databricks), ensuring best-in-class performance. Optimize Query Performance: Continuously refine queries and storage strategies to support large volumes of data and multiple use cases. Ensure Data Governance & Security: Collaborate with the Data Governance team to ensure compliance with privacy regulations and corporate data policies. Troubleshoot Complex Data Issues: Investigate and resolve bottlenecks, data quality problems, and system performance challenges. Document Processes & Standards: Maintain clear documentation on data pipelines, schemas, and operational processes to facilitate knowledge sharing. Collaborate with Analytics Teams: Work with BI, Data Science, and Business Analyst teams to deliver timely, reliable, and enriched datasets for reporting and advanced analytics. Evaluate Emerging Technologies: Stay informed about the latest tools, frameworks, and methodologies, recommending improvements where applicable.
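As a rough illustration of the orchestration work mentioned above, a minimal Airflow DAG wiring an extract task ahead of a load task could look like this; the DAG id and the two callables are made up for the sketch (Airflow 2.4+ accepts the schedule argument, older releases use schedule_interval).

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Hypothetical extract/load callables; real tasks would wrap warehouse or API clients.
def extract_orders(**context):
    print("pulling orders from the source system")

def load_to_warehouse(**context):
    print("loading transformed orders into the warehouse")

with DAG(
    dag_id="orders_daily",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
    load = PythonOperator(task_id="load_to_warehouse", python_callable=load_to_warehouse)

    extract >> load  # run extract before load
```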

Posted 1 month ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

We are seeking a skilled Data Engineer with over 5+ years of experience to design, build, and maintain scalable data pipelines and perform advanced data analysis to support business intelligence and data-driven decision-making. The ideal candidate will have a strong foundation in computer science principles, extensive experience with SQL and big data tools, and proficiency in cloud platforms and data visualization tools. Key Responsibilities: Design, develop, and maintain robust, scalable ETL pipelines using Apache Airflow, DBT, Composer (GCP), Control-M, Cron, Luigi, and similar tools. Build and optimize data architectures including data lakes and data warehouses. Integrate data from multiple sources ensuring data quality and consistency. Collaborate with data scientists, analysts, and stakeholders to translate business requirements into technical solutions. Analyze complex datasets to identify trends, generate actionable insights, and support decision-making. Develop and maintain dashboards and reports using Tableau, Power BI, and Jupyter Notebooks for visualization and pipeline validation. Manage and optimize relational and NoSQL databases such as MySQL, PostgreSQL, Oracle, MongoDB, and DynamoDB. Work with big data tools and frameworks including Hadoop, Spark, Hive, Kafka, Informatica, Talend, SSIS, and Dataflow. Utilize cloud data services and warehouses like AWS Glue, GCP Dataflow, Azure Data Factory, Snowflake, Redshift, and BigQuery. Support CI/CD pipelines and DevOps workflows using Git, Docker, Terraform, and related tools. Ensure data governance, security, and compliance standards are met. Participate in Agile and DevOps processes to enhance data engineering workflows. Required Qualifications: 5+ years of professional experience in data engineering and data analysis roles. Strong proficiency in SQL and experience with database management systems such as MySQL, PostgreSQL, Oracle, and MongoDB. Hands-on experience with big data tools like Hadoop and Apache Spark. Proficient in Python programming. Experience with data visualization tools such as Tableau, Power BI, and Jupyter Notebooks. Proven ability to design, build, and maintain scalable ETL pipelines using tools like Apache Airflow, DBT, Composer (GCP), Control-M, Cron, and Luigi. Familiarity with data engineering tools including Hive, Kafka, Informatica, Talend, SSIS, and Dataflow. Experience working with cloud data warehouses and services (Snowflake, Redshift, BigQuery, AWS Glue, GCP Dataflow, Azure Data Factory). Understanding of data modeling concepts and data lake/data warehouse architectures. Experience supporting CI/CD practices with Git, Docker, Terraform, and DevOps workflows. Knowledge of both relational and NoSQL databases, including PostgreSQL, BigQuery, MongoDB, and DynamoDB. Exposure to Agile and DevOps methodologies. Experience with Amazon Web Services (S3, Glue, Redshift, Lambda, Athena) Preferred Skills: Strong problem-solving and communication skills. Ability to work independently and collaboratively in a team environment. Experience with service development, REST APIs, and automation testing is a plus. Familiarity with version control systems and workflow automation.
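To illustrate the kind of data-quality gate this posting mentions, here is a small, assumption-laden pandas sketch; the file path, columns, and thresholds are invented for the example.

```python
import pandas as pd

# Illustrative quality gate run after an extract lands; path and rules are hypothetical.
df = pd.read_csv("exports/customers_extract.csv")

issues = []
if df["customer_id"].duplicated().any():
    issues.append("duplicate customer_id values found")
if df["email"].isna().mean() > 0.02:  # more than 2% missing emails
    issues.append("email null rate above threshold")
if pd.to_datetime(df["signup_date"], errors="coerce").isna().any():
    issues.append("unparseable signup_date values found")

if issues:
    raise ValueError("data quality check failed: " + "; ".join(issues))
print(f"quality checks passed for {len(df)} rows")
```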

Posted 1 month ago

Apply

0 years

5 - 8 Lacs

Mumbai

On-site

Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose – the relentless pursuit of a world that works better for people – we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI. Inviting applications for the role of Lead Consultant, Talend Developer! Responsibilities: Strong written and oral communication skills are essential. Strong analytical skills and ability to resolve problems are desired. Ability to work independently and multi-task to meet critical deadlines in a rapidly changing environment. Be a senior technical player on complex ETL development projects with multiple team members. Lead the creation of all design review artifacts during the project design phase, facilitate design reviews, capture review feedback, and schedule additional detailed design sessions as necessary. Create and enhance administrative, operational, and technical policies and procedures, adopting best practice guidelines, standards, and procedures. Qualifications we seek in you! Minimum Qualifications: Relevant experience required. Senior-level Talend ETL development (hard-core Talend experience) using Talend Data Studio on Cloud. Experience with any database such as Snowflake or SQL Server, and exposure to Azure cloud. Experience using different sources such as XLS, XML, and streams. Strong experience in advanced SQL programming. Strong experience in Data Quality / Data Profiling, Source Systems Analysis, Business Rules Validation, Source-Target Mapping Design, Performance Tuning, and High Volume Data Loads. Expertise in theoretical and practical knowledge of data warehousing and data modeling. Solid and professional communication skills, both verbal and written. Strong knowledge of the Software Development Lifecycle (SDLC). Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training. Job: Lead Consultant Primary Location: India-Mumbai Schedule: Full-time Education Level: Bachelor's / Graduation / Equivalent Job Posting: Jun 29, 2025, 11:04:27 PM Unposting Date: Dec 27, 2025, 3:04:27 AM Master Skills List: Digital Job Category: Full Time

Posted 1 month ago

Apply

9.0 - 12.0 years

9 Lacs

Chennai

On-site

9 - 12 Years 1 Opening Bangalore, Chennai, Kochi, Trivandrum Role description Role Proficiency: JD – Data Lead We are looking for an experienced Data Lead to oversee and manage data migration projects, ensuring smooth and accurate transfer of data between systems. Responsibilities: Lead end-to-end data migration activities including planning, extraction, transformation, loading (ETL), and validation. Collaborate with cross-functional teams to understand data requirements and migration scope. Design and implement data migration strategies, processes, and best practices. Ensure data integrity, accuracy, and quality throughout migration. Manage timelines, risks, and resource allocation for migration projects. Troubleshoot and resolve data migration issues. Provide technical guidance and mentorship to team members. Prepare detailed documentation and reports related to migration processes. Key Skills and Qualifications: Proven experience in data migration projects, preferably in [industry/domain]. Strong knowledge of ETL tools and processes (e.g., Informatica, Talend, SSIS). Expertise in SQL and scripting languages (Python, Shell, etc.) for data manipulation. Familiarity with databases such as Oracle, SQL Server, MySQL, or others. Understanding of data quality, data governance, and data validation techniques. Excellent problem-solving and analytical skills. Strong project management and communication skills. Ability to lead a team and work collaboratively across departments. Experience with cloud platforms (AWS, Azure, GCP) is a plus. Bachelor’s degree in Computer Science, Information Technology, or related field. Skills ETL,SQL,Python,Data Migration About UST UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world’s best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients’ organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.
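The migration-validation duties described above often boil down to source-vs-target reconciliation. A minimal sketch, assuming SQLAlchemy with placeholder connection strings and table names, might be:

```python
import sqlalchemy as sa

# Post-migration reconciliation sketch; URLs and table names are placeholders,
# not details from the posting.
SOURCE_URL = "oracle+oracledb://user:pwd@legacy-host:1521/?service_name=LEGACY"
TARGET_URL = "postgresql+psycopg2://user:pwd@new-host:5432/warehouse"

TABLES = ["customers", "orders", "order_items"]

def count_rows(engine, table):
    with engine.connect() as conn:
        return conn.execute(sa.text(f"SELECT COUNT(*) FROM {table}")).scalar()

source = sa.create_engine(SOURCE_URL)
target = sa.create_engine(TARGET_URL)

for table in TABLES:
    src, tgt = count_rows(source, table), count_rows(target, table)
    status = "OK" if src == tgt else "MISMATCH"
    print(f"{table}: source={src} target={tgt} -> {status}")
```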

Posted 1 month ago

Apply

6.0 years

0 Lacs

India

On-site

Company Description DevoTrend IT is a global technology solutions provider leading the digitalization of private and public sectors. We deliver end-to-end digital transformation solutions and services, from ideation to deployment. Our offerings include IT & Software Consultancy Services, Resources Outsourcing Services, and Digital Transformation Consultancy, all aimed at driving innovative and productive experiences for our customers. With expertise in cloud, analytics, mobility, and various CRM/ERP platforms, we provide impactful and maintainable software solutions. Role Description This is a full-time hybrid role for a Snowflake Data Engineer and the locations are Pune, Mumbai, Chennai and Bangalore. The Snowflake Data Engineer will be responsible for designing, implementing, and managing data warehousing solutions on the Snowflake platform. Day-to-day tasks will include data modeling, building and managing ETL processes, and performing data analytics. The role requires close collaboration with cross-functional teams to ensure data integrity and optimal performance of the data infrastructure. Qualifications Build ETL (extract, transform, and loading) jobs using Fivetran and dbt for our internal projects and for customers that use various platforms like Azure, Salesforce, and AWS technologies • Monitoring active ETL jobs in production. • Build out data lineage artifacts to ensure all current and future systems are properly documented • Assist with the build out design/mapping documentation to ensure development is clear and testable for QA and UAT purposes • Assess current and future data transformation needs to recommend, develop, and train new data integration tool technologies • Discover efficiencies with shared data processes and batch schedules to help ensure no redundancy and smooth operations • Assist the Data Quality Analyst to implement checks and balances across all jobs to ensure data quality throughout the entire environment for current and future batch jobs. • Hands-on experience in developing and implementing large-scale data warehouses, Business Intelligence and MDM solutions, including Data Lakes/Data Vaults. SUPERVISORY RESPONSIBILITIES: • This job has no supervisory responsibilities. QUALIFICATIONS: • Bachelor's Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 6+ years experience in business analytics, data science, software development, data modeling or data engineering work • 3-5 year’s experience with a strong proficiency with SQL query/development skills • Develop ETL routines that manipulate and transfer large volumes of data and perform quality checks • Hands-on experience with ETL tools (e.g Informatica, Talend, dbt, Azure Data Factory) • Experience working in the healthcare industry with PHI/PII • Creative, lateral, and critical thinker • Excellent communicator • Well-developed interpersonal skills • Good at prioritizing tasks and time management • Ability to describe, create and implement new solutions • Experience with related or complementary open source software platforms and languages (e.g. Java, Linux, Apache, Perl/Python/PHP, Chef) • Knowledge / Hands-on experience with BI tools and reporting software (e.g. Cognos, Power BI, Tableau) • Big Data stack (e.g.Snowflake(Snowpark), SPARK, MapReduce, Hadoop, Sqoop, Pig, HBase, Hive, Flume)
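As an illustration of the post-load checks this role describes, a small sketch using the official Snowflake Python connector could look like the following; the account, credentials, and object names are placeholders only.

```python
import snowflake.connector

# Hypothetical post-load check against a staging table; all connection details
# are placeholders.
conn = snowflake.connector.connect(
    account="xy12345.us-east-1",
    user="etl_service",
    password="********",
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="STAGING",
)

try:
    cur = conn.cursor()
    cur.execute("SELECT COUNT(*) FROM STG_ORDERS WHERE LOAD_DATE = CURRENT_DATE")
    (loaded_today,) = cur.fetchone()
    if loaded_today == 0:
        raise RuntimeError("no rows landed in STG_ORDERS for today's batch")
    print(f"{loaded_today} rows loaded into STG_ORDERS today")
finally:
    conn.close()
```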

Posted 1 month ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Area(s) of responsibility About Us: Birlasoft, a global leader at the forefront of Cloud, AI, and Digital technologies, seamlessly blends domain expertise with enterprise solutions. The company’s consultative and design-thinking approach empowers societies worldwide, enhancing the efficiency and productivity of businesses. As part of the multibillion-dollar diversified CKA Birla Group, Birlasoft, with its 12,000+ professionals, is committed to continuing the Group’s 170-year heritage of building sustainable communities. Enterprise Architect: Provide guidance and support to teams and analysts. Expert on Databricks and AWS, with a minimum of 10+ years of experience. Databricks certification is a must. Collaborate on data strategy with business and IT partners. Identify and address data issues. Lead architecture, design, and implementation of scalable and efficient data pipelines and analytics solutions on Databricks. Work closely with data engineering, platform, and analytics teams to integrate Databricks with AWS S3 Data Lake, Redshift, Talend, SAP BW/HANA, Oracle, and other tools. Architect solutions supporting batch and real-time data processing using Delta Lake, Spark, and MLflow. Collaborate with business stakeholders to understand data requirements, assess current state, and provide strategic direction for future-state architecture. Ensure best practices for data quality, governance, and security are implemented. Own design, development, and delivery of pipelines and models as per the project plan. Ensure best practices are followed in performance, security, and governance. Provide project documentation and conduct KT sessions. Ensure the following acceptance criteria are met by working along with current development and pool partners: Successful ingestion and transformation of in-scope objects. Validated and reconciled transformed data within defined thresholds. Fully operational Snowflake warehouse with business-validated datasets. Documented operational playbooks and KT completed. No critical issues post go-live during the hyper-care window.

Posted 1 month ago

Apply

4.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Position: Database Developer Exp: 4 to 8 yrs Mandatory Skills: SQL, Informatica, ETL Candidates serving their notice period (NP) who can join in July can apply for this role. Job Summary: We are seeking a highly skilled Database Developer with strong expertise in SQL and ETL processes to join our data engineering team. The ideal candidate will be responsible for designing, developing, and maintaining robust data pipelines and ensuring efficient data storage and access across various systems. Key Responsibilities: Develop, test, and maintain SQL queries, stored procedures, and database objects. Design and implement ETL workflows to extract, transform, and load data from multiple sources. Optimize existing database queries for performance and scalability. Collaborate with data analysts, software developers, and business stakeholders to understand data requirements. Ensure data integrity, accuracy, and consistency across systems. Monitor and troubleshoot ETL jobs and perform root cause analysis of failures. Participate in data modeling and schema design activities. Maintain technical documentation and adhere to best practices in database development. Required Skills & Qualifications: Proven experience (4+ years) as a Database Developer or in a similar role. Strong proficiency in writing complex SQL queries, procedures, and performance tuning. Hands-on experience with ETL tools (e.g., Informatica, Talend, SSIS, Apache NiFi, etc.). Solid understanding of relational database design, normalization, and data warehousing concepts. Experience with RDBMS platforms such as SQL Server, PostgreSQL, Oracle, or MySQL. Ability to analyze and interpret complex datasets and business requirements. Familiarity with data governance, data quality, and data security best practices.

Posted 1 month ago

Apply

3.0 - 8.0 years

9 - 15 Lacs

Hyderabad

Work from Office

Job Title: Database Developer Location: Madhapur Industry: IT Services & Consulting Department: Engineering - Software & QA Employment Type: Full-Time Role Category: DBA / Data Warehousing Job Description: We are on the lookout for skilled Database Developers to join our team. In this role, you will work closely with our client to enhance their product and provide essential post-go-live support for users across the US, Bangkok, Philippines, Shanghai, and Penang. If you are passionate about database development and eager to tackle complex challenges, we invite you to apply! Key Responsibilities: Develop and implement product enhancements. Provide post-go-live production support, troubleshooting issues as they arise. Write and optimize complex SQL queries using advanced SQL functions. Perform query performance tuning, optimization, and debugging. Design and maintain database triggers, indexes, and views. Manage and understand complex data organization within RDBMS environments. Required Candidate Profile: Database Experience: Proficiency in Oracle, MySQL, or MSSQL SERVER. Stored Procedures Expertise: Strong background in Stored Procedures, including writing and debugging complex queries. Query Optimization: Proven expertise in query performance tuning and optimization. Database Design: Competency in writing triggers, and creating indexes and views. Industry Experience: Experience in the manufacturing domain is a significant advantage. Educational Requirements: Undergraduate Degree: Any Graduate Postgraduate Degree: Other Post Graduate - Other Specialization Doctorate: Other Doctorate - Other Specialization Key Skills: Query Optimization MySQL SQL Queries PL/SQL Data Warehousing Performance Tuning Oracle Role: Database Developer / Engineer If you are a proactive, detail-oriented database professional with a knack for problem-solving and performance tuning, we would love to hear from you. Apply now to join our dynamic team and make a meaningful impact!

Posted 1 month ago

Apply

7.0 years

0 Lacs

Gandhinagar, Gujarat, India

On-site

Key Responsibilities • Lead and mentor a high-performing data pod composed of data engineers, data analysts, and BI developers. • Design, implement, and maintain ETL pipelines and data workflows to support real-time and batch processing. • Architect and optimize data warehouses for scale, performance, and security. • Perform advanced data analysis and modeling to extract insights and support business decisions. • Lead data science initiatives including predictive modeling, NLP, and statistical analysis. • Manage and tune relational and non-relational databases (SQL, NoSQL) for availability and performance. • Develop Power BI dashboards and reports for stakeholders across departments. • Ensure data quality, integrity, and compliance with data governance and security standards. • Work with cross-functional teams (product, marketing, ops) to turn data into strategy. Qualifications Required: • PhD in Data Science, Computer Science, Engineering, Mathematics, or related field. • 7+ years of hands-on experience across data engineering, data science, analysis, and database administration. • Strong experience with ETL tools (e.g., Airflow, Talend, SSIS) and data warehouses (e.g., Snowflake, Redshift, BigQuery). • Proficient in SQL, Python, and Power BI. • Familiarity with modern cloud data platforms (AWS/GCP/Azure). • Strong understanding of data modeling, data governance, and MLOps practices. • Exceptional ability to translate business needs into scalable data solutions.

Posted 1 month ago

Apply

0 years

0 Lacs

India

On-site

We are looking for a skilled Data Engineer to join our team and help build, manage, and optimize our data systems and pipelines. The ideal candidate is passionate about data, has a strong technical background, and thrives in a collaborative environment. You will work closely with data scientists, analysts, and business teams to ensure data flows smoothly and securely throughout the organization. Design, develop, and maintain scalable and efficient data pipelines and ETL processes. Collaborate with stakeholders to understand data needs and provide solutions. Build and maintain databases and data warehouses to support analytics and reporting. Ensure data quality, integrity, and security across all systems. Implement and optimize data storage solutions for performance and scalability. Monitor, troubleshoot, and resolve issues in data pipelines and systems. Document data workflows, processes, and best practices. Stay updated on emerging trends, tools, and technologies in data engineering. Proven experience as a Data Engineer or in a similar role. Proficiency in programming languages like Python, Java, or Scala. Strong experience with SQL and database technologies (e.g., MySQL, PostgreSQL, MongoDB). Hands-on experience with big data tools (e.g., Hadoop, Spark) and cloud platforms (e.g., AWS, Azure, Google Cloud). Familiarity with ETL tools and workflows (e.g., Apache Airflow, Talend, Informatica). Knowledge of data modeling, data warehousing, and schema design. Experience with version control systems like Git. Excellent problem-solving and analytical skills.

Posted 1 month ago

Apply

4.0 - 7.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Category: Testing/Quality Assurance Main location: India, Karnataka, Bangalore Position ID: J0525-1991 Employment Type: Full Time Position Description: Job Title: ETL Testing Position: Test Engineer Experience: 4- 7 Years Category: Software Development/ Engineering Shift: 1PM to 1PM Main location: India, Karnataka, Bangalore Position ID: J0525-1991 Employment Type: Full Time We are looking for a skilled ETL Tester with hands-on experience in SQL and Python to join our Quality Engineering team. The ideal candidate will be responsible for validating data pipelines, ensuring data quality, and supporting the end-to-end ETL testing lifecycle in a fast-paced environment. Your future duties and responsibilities: Design, develop, and execute test cases for ETL workflows and data pipelines. Perform data validation and reconciliation using advanced SQL queries. Use Python for automation of test scripts, data comparison, and validation tasks. Work closely with Data Engineers and Business Analysts to understand data transformations and business logic. Perform root cause analysis of data discrepancies and report defects in a timely manner. Validate data across source systems, staging, and target data stores (e.g., Data Lakes, Data Warehouses). Participate in Agile ceremonies, including sprint planning and daily stand-ups. Maintain test documentation including test plans, test cases, and test results. Required qualifications to be successful in this role: 5+ years of experience in ETL/Data Warehouse testing. Strong proficiency in SQL (joins, aggregations, window functions, etc.). Experience in Python scripting for test automation and data validation. Hands-on experience with tools like Informatica, Talend, Apache NiFi, or similar ETL tools. Understanding of data models, data marts, and star/snowflake schemas. Familiarity with test management and bug tracking tools (e.g., JIRA, HP ALM). Strong analytical, debugging, and problem-solving skills. Good to Have: Exposure to Big Data technologies (e.g., Hadoop, Hive, Spark). Experience with Cloud platforms (e.g., AWS, Azure, GCP) and related data services. Knowledge of CI/CD tools and automated data testing frameworks. Experience working in Agile/Scrum teams. Skills: Jira SQLite Banking ETL Python What you can expect from us: Together, as owners, let’s turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you’ll reach your full potential because… You are invited to be an owner from day 1 as we work together to bring our Dream to life. That’s why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company’s strategy and direction. Your work creates value. You’ll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You’ll shape your career by joining a company built to grow and last. You’ll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team—one of the largest IT and business consulting services firms in the world.
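The SQL-plus-Python test automation described above is often expressed as reconciliation tests. A hedged sketch using pytest (with SQLite standing in for the real source and warehouse connections) might look like:

```python
import sqlite3

import pytest

# Illustrative source-vs-target reconciliation test; in a real project the
# connections would point at the actual staging and warehouse databases.
@pytest.fixture()
def connections():
    src = sqlite3.connect("source_extract.db")
    tgt = sqlite3.connect("warehouse.db")
    yield src, tgt
    src.close()
    tgt.close()

def test_order_totals_match(connections):
    src, tgt = connections
    src_total = src.execute("SELECT ROUND(SUM(amount), 2) FROM orders").fetchone()[0]
    tgt_total = tgt.execute(
        "SELECT ROUND(SUM(order_amount), 2) FROM fact_orders"
    ).fetchone()[0]
    assert src_total == tgt_total, "order amount totals diverge between source and target"
```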

Posted 1 month ago

Apply

4.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Do you enjoy hands-on technical work? Do you enjoy being part of a team that ensures the highest quality? Join our Digital Technology-Data & Analytics team Baker Hughes' Digital Technology team provide and create tech solutions to cater to the needs of our customers. As a global team we collaborative to provide cutting-edge solutions to solve our customer's problems. We support them by providing materials management, planning, inventory and warehouse solutions. Take ownership for innovative Data Analytics projects The Data Engineering team helps solve our customers' toughest challenges; making flights safer, power cheaper, and oil & gas production safer for people and the environment by leveraging data and analytics. Data Architect will work on the projects in a technical domain as a technical leader and architect in Data & Information. Will be responsible for data handling, data warehouse building and architecture. As a Senior Data Engineer, you will be responsible for: Designing and implementing scalable and robust data pipelines to collect, process, and store data from various sources. Developing and maintaining data warehouse and ETL (Extract, Transform, Load) processes for data integration and transformation. Optimizing and tuning the performance of data systems to ensure efficient data processing and analysis. Collaborating with product managers and analysts to understand data requirements and implement solutions for data modeling and analysis. Identifying and resolving data quality issues, ensuring data accuracy, consistency, and completeness. Implement and maintain data governance and security measures to protect sensitive data. Monitoring and troubleshooting data infrastructure, perform root cause analysis, and implement necessary fixes. Ensuring the use of state-of-the-art methodologies to carry out job in the most productive and effective way. This may include research activities to identify and introduce new technologies in the field of data acquisition and data analysis Fuel your passion To be successful in this role you will: Have a Bachelors in Computer Science or “STEM” Majors (Science, Technology, Engineering and Math) with a minimum 4 years of experience Have Proven experience as a Data Engineer or similar role, working with large-scale data processing and storage systems. Have Proficiency in SQL and database management systems (e.g., MySQL, PostgreSQL, or Oracle). Have Experience with building complex jobs for building SCD type mappings using ETL tools like Talend, Informatica, etc Have Experience with data visualization and reporting tools (e.g., Tableau, Power BI). Have strong problem-solving and analytical skills, with the ability to handle complex data challenges. Excellent communication and collaboration skills to work effectively in a team environment. Have Experience in data modeling, data warehousing and ETL principles. Have Familiarity with cloud platforms like AWS, Azure, or GCP, and their data services (e.g., S3, Redshift, BigQuery). Have Advanced knowledge of distributed computing and parallel processing. Experience with real-time data processing and streaming technologies (e.g., Apache Kafka, Apache Flink). (Good to have) Have Knowledge of containerization and orchestration technologies (e.g., Docker, Kubernetes). Have certification in relevant technologies or data engineering disciplines. 
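The requirements above mention building SCD-type mappings. As a rough illustration only, expressed in PySpark rather than a graphical ETL tool and using a made-up customer dimension, an SCD Type 2 load might be sketched as follows.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("scd2_sketch").getOrCreate()

# Hypothetical current dimension and incoming staging batch
dim = spark.createDataFrame(
    [(1, "Alice", "NYC", "2024-01-01", None, True)],
    "customer_id INT, name STRING, city STRING, valid_from STRING, valid_to STRING, is_current BOOLEAN",
)
stg = spark.createDataFrame(
    [(1, "Alice", "Boston"), (2, "Bob", "Austin")],
    ["customer_id", "name", "city"],
)

today = F.current_date().cast("string")

# Keys whose attributes changed in the incoming batch
changed_keys = (
    dim.filter("is_current").alias("d")
    .join(stg.alias("s"), "customer_id")
    .filter((F.col("d.name") != F.col("s.name")) | (F.col("d.city") != F.col("s.city")))
    .select("customer_id")
)

# 1) Expire the current version of changed keys
expired = (
    dim.join(changed_keys, "customer_id", "left_semi")
    .withColumn("valid_to", today)
    .withColumn("is_current", F.lit(False))
)

# 2) Keep every other existing row untouched
untouched = dim.join(changed_keys, "customer_id", "left_anti")

# 3) Insert new versions for changed keys and brand-new keys
existing_keys = dim.select("customer_id").distinct()
new_versions = (
    stg.join(changed_keys, "customer_id", "left_semi")
    .unionByName(stg.join(existing_keys, "customer_id", "left_anti"))
    .withColumn("valid_from", today)
    .withColumn("valid_to", F.lit(None).cast("string"))
    .withColumn("is_current", F.lit(True))
)

dim_next = untouched.unionByName(expired).unionByName(new_versions)
dim_next.show()
```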
Work in a way that works for you We recognize that everyone is different and that the way in which people want to work and deliver at their best is different for everyone too. In this role, we can offer the following flexible working patterns: Working flexible hours - flexing the times when you work in the day to help you fit everything in and work when you are the most productive Working with us Our people are at the heart of what we do at Baker Hughes. We know we are better when all of our people are developed, engaged and able to bring their whole authentic selves to work. We invest in the health and well-being of our workforce, train and reward talent and develop leaders at all levels to bring out the best in each other. Working for you Our inventions have revolutionized energy for over a century. But to keep going forward tomorrow, we know we have to push the boundaries today. We prioritize rewarding those who embrace change with a package that reflects how much we value their input. Join us, and you can expect: Contemporary work-life balance policies and wellbeing activities Comprehensive private medical care options Safety net of life insurance and disability programs Tailored financial programs Additional elected or voluntary benefits About Us: We are an energy technology company that provides solutions to energy and industrial customers worldwide. Built on a century of experience and conducting business in over 120 countries, our innovative technologies and services are taking energy forward – making it safer, cleaner and more efficient for people and the planet. Join Us: Are you seeking an opportunity to make a real difference in a company that values innovation and progress? Join us and become part of a team of people who will challenge and inspire you! Let’s come together and take energy forward. Baker Hughes Company is an Equal Opportunity Employer. Employment decisions are made without regard to race, color, religion, national or ethnic origin, sex, sexual orientation, gender identity or expression, age, disability, protected veteran status or other characteristics protected by law. R149189

Posted 1 month ago

Apply

8.0 years

4 - 6 Lacs

Mumbai

On-site

Sr. Developer with special emphasis and 8 to 10 years of experience on PySpark, Python, and SQL, along with ETL tools (Talend / Ab Initio / Informatica / similar). Should also have good exposure to ETL tools to understand existing flows, rewrite them in Python and PySpark, and execute the test plans. 5+ years of sound knowledge of PySpark to implement ETL logic. Strong understanding of frontend technologies such as HTML, CSS, React & JavaScript. Proficiency in data modeling and design, including PL/SQL development. Creating test plans to understand the current ETL flow and rewriting it in PySpark. Providing ongoing support and maintenance for ETL applications, including troubleshooting and resolving issues. Expertise in practices like Agile, peer reviews, and Continuous Integration. About Virtusa Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
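As a hedged example of rewriting a typical graphical ETL mapping (source, lookup, filter, aggregate, target) into PySpark, the sketch below uses invented table and column names and assumes the referenced tables already exist in the metastore.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl_rewrite_sketch").getOrCreate()

transactions = spark.table("staging.transactions")      # former source component
currency_lkp = spark.table("reference.currency_rates")  # former lookup component

report = (
    transactions
    .join(F.broadcast(currency_lkp), "currency_code", "left")  # lookup
    .filter(F.col("status") == "SETTLED")                      # filter
    .withColumn("amount_usd", F.col("amount") * F.col("usd_rate"))
    .groupBy("trade_date", "desk")                             # aggregate
    .agg(F.sum("amount_usd").alias("total_usd"),
         F.count(F.lit(1)).alias("txn_count"))
)

report.write.mode("overwrite").saveAsTable("reporting.daily_desk_totals")  # target
```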

Posted 1 month ago

Apply

10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

We are seeking a Spark, Big Data - ETL Tech Lead for Commercial Card’s Global Data Repository development team. The successful candidate will interact with the Development Project Manager, the development, testing, and production support teams, as well as other departments within Citigroup (such as the System Administrators, Database Administrators, Data Centre Operations, and Change Control groups) for TTS platforms. He/she requires exceptional communication skills across both technology and the business and will have a high degree of visibility. The candidate will be a rigorous technical lead with a strong understanding of how to build scalable, enterprise level global applications. The ideal candidate will be dependable and resourceful software professional who can comfortably work in a large development team in a globally distributed, dynamic work environment that fosters diversity, teamwork and collaboration. The ability to work in high pressured environment is essential. Responsibilities: Lead the design and implementation of large-scale data processing pipelines using Apache Spark on BigData Hadoop Platform. Develop and optimize Spark applications for performance and scalability. Responsible for providing technical leadership of multiple large scale/complex global software solutions. Integrate data from various sources, including Couchbase, Snowflake, and HBase, ensuring data quality and consistency. Experience of developing teams of permanent employees and vendors from 5 – 15 developers in size Build and sustain strong relationships with the senior business leaders associated with the platform Design, code, test, document and implement application release projects as part of development team. Work with onsite development partners to ensure design and coding best practices. Work closely with Program Management and Quality Control teams to deliver quality software to agreed project schedules. Proactively notify Development Project Manager of risks, bottlenecks, problems, issues, and concerns. Compliance with Citi's System Development Lifecycle and Information Security requirements. Oversee development scope, budgets, time line documents Monitor, update and communicate project timelines and milestones; obtain senior management feedback; understand potential speed bumps and client’s true concerns/needs. Stay updated with the latest trends and technologies in big data and cloud computing. Mentor and guide junior developers, providing technical leadership and expertise. Key Challenges: Managing time and changing priorities in a dynamic environment Ability to provide quick turnaround to software issues and management requests Ability to assimilate key issues and concepts and come up to speed quickly Qualifications: Bachelor’s or master’s degree in computer science, Information Technology, or equivalent Minimum 10 years of Proven experience in developing and managing big data solutions using Apache Spark. Having strong hold on Spark-core, Spark-SQL & Spark Streaming Minimum 6 years of experience in leading globally distributed teams successfully. Strong programming skills in Scala, Java, or Python. Hands on experience on Technologies like Apache Hive, Apache Kafka, HBase, Couchbase, Sqoop, Flume etc. Proficiency in SQL and experience with relational (Oracle/PL-SQL) and NoSQL databases like mongoDB. Demonstrated people and technical management skills. Demonstrated excellent software development skills. 
Strong experiences in implementation of complex file transformations like positional, xmls. Experience in building enterprise system with focus on recovery, stability, reliability, scalability and performance. Experience in working on Kafka, JMS / MQ applications. Experience in working multiple OS (Unix, Linux, Win) Familiarity with data warehousing concepts and ETL processes. Experience in performance tuning of large technical solutions with significant volumes Knowledge of data modeling, data architecture, and data integration techniques. Knowledge on best practices for data security, privacy, and compliance. Key Competencies: Excellent organization skills, attention to detail, and ability to multi-task Demonstrated sense of responsibility and capability to deliver quickly Excellent communication skills. Clearly articulating and documenting technical and functional specifications is a key requirement. Proactive problem-solver Relationship builder and team player Negotiation, difficult conversation management and prioritization skills Flexibility to handle multiple complex projects and changing priorities Excellent verbal, written and interpersonal communication skills Good analytical and business skills Promotes teamwork and builds strong relationships within and across global teams Promotes continuous process improvement especially in code quality, testability & reliability Desirable Skills: Experience in Java, Spring, ETL Tools like Talend, Ab Initio is a plus. Experience of migrating functionality from ETL tools to Spark. Experience/knowledge on Cloud technologies AWS, GCP. Experience in Financial industry ETL Certification, Project Management Certification Experience with Commercial Cards applications and processes would be advantageous Experience with Agile methodology This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required. ------------------------------------------------------ Job Family Group: Technology ------------------------------------------------------ Job Family: Applications Development ------------------------------------------------------ Time Type: Full time ------------------------------------------------------ Most Relevant Skills Please see the requirements listed above. ------------------------------------------------------ Other Relevant Skills For complementary skills, please see above and/or contact the recruiter. ------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
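Given the Spark Streaming and Kafka skills called out in this posting, a minimal Structured Streaming sketch is shown below for illustration; the broker addresses, topic, and message schema are placeholders, and running it would additionally require the spark-sql-kafka connector package.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

spark = SparkSession.builder.appName("card_txn_stream").getOrCreate()

# Hypothetical message schema for the illustrative topic
schema = StructType([
    StructField("card_id", StringType()),
    StructField("merchant", StringType()),
    StructField("amount", DoubleType()),
])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092")
       .option("subscribe", "card-transactions")
       .option("startingOffsets", "latest")
       .load())

parsed = (raw
          .select(F.from_json(F.col("value").cast("string"), schema).alias("txn"))
          .select("txn.*"))

query = (parsed.writeStream
         .format("console")        # swap for a Delta/HDFS sink in practice
         .outputMode("append")
         .option("truncate", False)
         .start())

query.awaitTermination()
```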

Posted 1 month ago

Apply

1.0 years

0 Lacs

Nagercoil, Tamil Nadu, India

On-site

Job Summary We are seeking a skilled Data Migration Specialist to support critical data transition initiatives, particularly involving Salesforce and Microsoft SQL Server . This role will be responsible for the end-to-end migration of data between systems, including data extraction, transformation, cleansing, loading, and validation. The ideal candidate will have a strong foundation in relational databases, a deep understanding of the Salesforce data model, and proven experience handling large-volume data loads. Required Skills And Qualifications 1+ years of experience in data migration, ETL, or database development roles. Strong hands-on experience with Microsoft SQL Server and T-SQL (complex queries, joins, indexing, and profiling). Proven experience using Salesforce Data Loader for bulk data operations. Solid understanding of Salesforce CRM architecture, including object relationships and schema design. Strong background in data transformation and cleansing techniques. Nice To Have Experience with large-scale data migration projects involving CRM or ERP systems. Exposure to ETL tools such as Talend, Informatica, Mulesoft, or custom scripts. Salesforce certifications (e.g., Administrator, Data Architecture & Management Designer) are a plus. Knowledge of Apex, Salesforce Flows, or other declarative tools is a bonus.
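As a small illustration of the cleansing step that usually precedes a Salesforce Data Loader run, the pandas sketch below uses a hypothetical extract file and column names; it simply standardizes, de-duplicates, and writes a CSV for Data Loader to consume.

```python
import pandas as pd

# Hypothetical legacy extract; file name, columns, and rules are placeholders.
contacts = pd.read_csv("legacy_contacts_extract.csv")

# Basic standardisation
contacts["Email"] = contacts["Email"].str.strip().str.lower()
contacts["Phone"] = contacts["Phone"].str.replace(r"[^\d+]", "", regex=True)
contacts["LastName"] = contacts["LastName"].fillna("Unknown")

# Drop rows that would fail validation, then de-duplicate on email
contacts = contacts.dropna(subset=["Email"])
contacts = contacts.drop_duplicates(subset=["Email"], keep="first")

# Data Loader consumes a CSV whose headers map to Salesforce field names
contacts.to_csv("contacts_for_data_loader.csv", index=False)
print(f"{len(contacts)} cleansed contact rows written for Data Loader")
```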

Posted 1 month ago

Apply

3.0 - 6.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Wissen Technology is Hiring for Power BI Developer About Wissen Technology: Wissen Technology is a globally recognized organization known for building solid technology teams, working with major financial institutions, and delivering high-quality solutions in IT services. With a strong presence in the financial industry, we provide cutting-edge solutions to address complex business challenges. Role Overview: We are seeking a skilled Power BI Developer to design and develop business intelligence solutions that turn data into actionable insights. You will collaborate with cross-functional teams to understand data requirements and build interactive dashboards, reports, and data models that support strategic decision-making. Experience: 3-6 Years Location: Bengaluru Key Responsibilities: Design, develop, and deploy Power BI reports and dashboards Connect Power BI to various data sources including SQL databases, Excel, APIs, and cloud platforms Create data models, DAX formulas, and measures for performance-optimized reports Understand business requirements and translate them into technical specs Automate report refreshes, implement row-level security, and maintain data accuracy Collaborate with stakeholders for UAT, feedback, and enhancements Troubleshoot and resolve reporting/data issues in a timely manner Required Skills: 3–6 years of hands-on experience in Power BI development Strong knowledge of DAX, Power Query (M Language), and data modeling Proficiency in writing complex SQL queries and working with RDBMS (MS SQL Server, Oracle, etc.) Experience working with Excel, CSV, and cloud-based data sources (Azure, AWS, etc.) Familiarity with data visualization best practices Strong communication and stakeholder management skills Preferred Skills: Knowledge of Power Platform (PowerApps, Power Automate) Exposure to ETL tools (SSIS, Informatica, Talend) Experience with Agile/Scrum methodology Basic understanding of Python/R for data analysis is a plus Working knowledge of Azure Data Lake, Synapse Analytics, or Data Factory The Wissen Group was founded in the year 2000. Wissen Technology, a part of Wissen Group, was established in the year 2015. Wissen Technology is a specialized technology company that delivers high-end consulting for organizations in the Banking & Finance, Telecom, and Healthcare domains. We help clients build world class products. We offer an array of services including Core Business Application Development, Artificial Intelligence & Machine Learning, Big Data & Analytics, Visualization & Business Intelligence, Robotic Process Automation, Cloud Adoption, Mobility, Digital Adoption, Agile & DevOps, Quality Assurance & Test Automation. Over the years, Wissen Group has successfully delivered $1 billion worth of projects for more than 20 of the Fortune 500 companies. Wissen Technology provides exceptional value in mission critical projects for its clients, through thought leadership, ownership, and assured on-time deliveries that are always ‘first time right’. The technology and thought leadership that the company commands in the industry is the direct result of the kind of people Wissen has been able to attract. Wissen is committed to providing them with the best possible opportunities and careers, which extends to providing the best possible experience and value to our clients. We have been certified as a Great Place to Work® company for two consecutive years (2020-2022) and voted as the Top 20 AI/ML vendor by CIO Insider.
Great Place to Work® Certification is recognized world over by employees and employers alike and is considered the ‘Gold Standard ’. Wissen Technology has created a Great Place to Work by excelling in all dimensions - High-Trust, High-Performance Culture, Credibility, Respect, Fairness, Pride and Camaraderie. Website : www.wissen.com LinkedIn : https ://www.linkedin.com/company/wissen-technology Wissen Leadership : https://www.wissen.com/company/leadership-team/ Wissen Live: https://www.linkedin.com/company/wissen-technology/posts/feedView=All Wissen Thought Leadership : https://www.wissen.com/articles/ Employee Speak:

Posted 1 month ago

Apply

5.0 years

0 Lacs

Delhi

On-site

The Role Context: This is an exciting opportunity to join a dynamic and growing organization working at the forefront of technology trends and developments in the social impact sector. Wadhwani Center for Government Digital Transformation (WGDT) works with government ministries and state departments in India with a mission of “Enabling digital transformation to enhance the impact of government policy, initiatives and programs”. We are seeking a highly motivated and detail-oriented individual to join our team as a Data Engineer with experience in designing, constructing, and maintaining the architecture and infrastructure necessary for data generation, storage, and processing, and to contribute to the successful implementation of digital government policies and programs. You will play a key role in developing robust, scalable, and efficient systems to manage large volumes of data, making it accessible for analysis and decision-making, and driving innovation and optimizing operations across various government ministries and state departments in India.

Key Responsibilities:
a. Data Architecture Design: Design, develop, and maintain scalable data pipelines and infrastructure for ingesting, processing, storing, and analyzing large volumes of data efficiently. This involves understanding business requirements and translating them into technical solutions.
b. Data Integration: Integrate data from various sources such as databases, APIs, streaming platforms, and third-party systems. Ensure the data is collected reliably and efficiently, maintaining data quality and integrity throughout the process as per the ministries'/government data standards.
c. Data Modeling: Design and implement data models to organize and structure data for efficient storage and retrieval, using techniques such as dimensional modeling, normalization, and denormalization depending on the specific requirements of the project.
d. Data Pipeline Development/ETL (Extract, Transform, Load): Develop data pipeline/ETL processes to extract data from source systems, transform it into the desired format, and load it into the target data systems. This involves writing scripts, using ETL tools, or building data pipelines to automate the process and ensure data accuracy and consistency.
e. Data Quality and Governance: Implement data quality checks and data governance policies to ensure data accuracy, consistency, and compliance with regulations. Design and track data lineage, data stewardship, metadata management, business glossaries, etc.
f. Data Lakes and Warehousing: Design and maintain data lakes and data warehouses to store and manage structured data from relational databases, semi-structured data like JSON or XML, and unstructured data such as text documents, images, and videos at any scale. Integrate with big data processing frameworks such as Apache Hadoop, Apache Spark, and Apache Flink, as well as with machine learning and data visualization tools.
g. Data Security: Implement security practices, technologies, and policies designed to protect data from unauthorized access, alteration, or destruction throughout its lifecycle. This includes data access, encryption, data masking and anonymization, data loss prevention, and compliance with regulatory requirements such as DPDP, GDPR, etc.
h. Database Management: Administer and optimize databases, both relational and NoSQL, to manage large volumes of data effectively.
i. Data Migration: Plan and execute data migration projects to transfer data between systems while ensuring data consistency and minimal downtime.
j. Performance Optimization: Optimize data pipelines and queries for performance and scalability. Identify and resolve bottlenecks, tune database configurations, and implement caching and indexing strategies to improve data processing speed and efficiency.
k. Collaboration: Collaborate with data scientists, analysts, and other stakeholders to understand their data requirements and provide them with access to the necessary data resources. Work closely with IT operations teams to deploy and maintain data infrastructure in production environments.
l. Documentation and Reporting: Document data models, data pipelines/ETL processes, and system configurations. Create documentation and provide training to other team members to ensure the sustainability and maintainability of data systems.
m. Continuous Learning: Stay updated with the latest technologies and trends in data engineering and related fields. Participate in training programs, attend conferences, and engage with the data engineering community to enhance skills and knowledge.

Desired Skills/Competencies:
Education: A Bachelor's or Master's degree in Computer Science, Software Engineering, Data Science, or equivalent, with at least 5 years of experience.
Database Management: Strong expertise in working with SQL databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB, Cassandra).
Big Data Technologies: Familiarity with big data technologies, such as Apache Hadoop, Spark, and related ecosystem components, for processing and analyzing large-scale datasets.
ETL Tools: Experience with ETL tools (e.g., Apache NiFi, Talend, Talend Open Studio, Apache Airflow, Pentaho, InfoSphere) for designing and orchestrating data workflows.
Data Modeling and Warehousing: Knowledge of data modeling techniques and experience with data warehousing solutions (e.g., Amazon Redshift, Google BigQuery, Snowflake).
Data Governance and Security: Understanding of data governance principles and best practices for ensuring data quality and security.
Cloud Computing: Experience with cloud platforms (e.g., AWS, Azure, Google Cloud) and their data services for scalable and cost-effective data storage and processing.
Streaming Data Processing: Familiarity with real-time data processing frameworks (e.g., Apache Kafka, Apache Flink) for handling streaming data.

KPIs:
Data Pipeline Efficiency: Measure the efficiency of data pipelines in terms of data processing time, throughput, and resource utilization. KPIs could include average time to process data, data ingestion rates, and pipeline latency.
Data Quality Metrics: Track data quality metrics such as completeness, accuracy, consistency, and timeliness of data. KPIs could include data error rates, missing values, data duplication rates, and data validation failures.
System Uptime and Availability: Monitor the uptime and availability of data infrastructure, including databases, data warehouses, and data processing systems. KPIs could include system uptime percentage, mean time between failures (MTBF), and mean time to repair (MTTR).
Data Storage Efficiency: Measure the efficiency of data storage systems in terms of storage utilization, data compression rates, and data retention policies. KPIs could include storage utilization rates, data compression ratios, and data storage costs per unit.
Data Security and Compliance: Track adherence to data security policies and regulatory compliance requirements such as DPDP, GDPR, HIPAA, or PCI DSS. KPIs could include security incident rates, data access permissions, and compliance audit findings.
Data Processing Performance: Monitor the performance of data processing tasks such as ETL (Extract, Transform, Load) processes, data transformations, and data aggregations. KPIs could include data processing time, CPU usage, and memory consumption.
Scalability and Performance Tuning: Measure the scalability and performance of data systems under varying workloads and data volumes. KPIs could include scalability benchmarks, system response times under load, and performance improvements achieved through tuning.
Resource Utilization and Cost Optimization: Track resource utilization and costs associated with data infrastructure, including compute resources, storage, and network bandwidth. KPIs could include cost per data unit processed, cost per query, and cost savings achieved through optimization.
Incident Response and Resolution: Monitor the response time and resolution time for data-related incidents and issues. KPIs could include incident response time, time to diagnose and resolve issues, and customer satisfaction ratings for support services.
Documentation and Knowledge Sharing: Measure the quality and completeness of documentation for data infrastructure, data pipelines, and data processes. KPIs could include documentation coverage, documentation update frequency, and knowledge-sharing activities such as internal training sessions or knowledge base contributions.

Years of experience of the current role holder: New position
Ideal years of experience: 3 – 5 years
Career progression for this role: CTO WGDT (Head of Incubation Centre)

*******************************************************************************

Wadhwani Corporate Profile:

Our Culture: WF is a global not-for-profit and works like a start-up, at a fast-moving, dynamic pace where change is the only constant and flexibility is the key to success. Three mantras that we practice across job roles, levels, functions, programs, and initiatives are Quality, Speed, Scale, in that order. We are an ambitious and inclusive organization, where everyone is encouraged to contribute and ideate. We are intensely and insanely focused on driving excellence in everything we do. We want individuals with the drive for excellence and the passion to do whatever it takes to deliver world-class outcomes to our beneficiaries. We set our own standards, often more rigorous than what our beneficiaries demand, and we want individuals who love it this way. We have a creative and highly energetic environment – one in which we look to each other to innovate new solutions not only for our beneficiaries but for ourselves too. Individuals who are open to collaborating with a borderless mentality, often going beyond the hierarchy and siloed definitions of functional KRAs, will thrive in our environment. This is a workplace where expertise is shared with colleagues around the globe. Individuals uncomfortable with change, constant innovation, and short learning cycles, and those looking for stability and orderly working days, may not find WF to be the right place for them. Finally, we want individuals who want to do greater good for society, leveraging their area of expertise, skills, and experience.
The foundation is an equal opportunity firm with no bias towards gender, race, colour, ethnicity, country, language, age, or any other dimension that comes in the way of progress. Join us and be a part of us!

Bachelors in Technology / Masters in Technology
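As a loose illustration of the data-quality KPIs enumerated in the listing above (completeness, duplication rate, validation failures), the following pandas sketch shows one way such metrics might be computed. The schema, column names, and the date-validation rule are assumed for the example and are not part of the posting.

```python
# Minimal sketch (assumed schema and rules) of a few data-quality KPIs:
# completeness, duplication rate, and a simple validation-failure count.
import pandas as pd


def data_quality_kpis(df: pd.DataFrame, key_cols: list, required_cols: list) -> dict:
    total = len(df)
    # Completeness: average share of non-null values across the required columns
    completeness = float(df[required_cols].notna().mean().mean()) if total else 0.0
    # Duplication rate: share of rows repeating an already-seen key combination
    duplication_rate = float(df.duplicated(subset=key_cols).mean()) if total else 0.0
    # Example validation rule: record dates must not lie in the future
    record_dates = pd.to_datetime(df["record_date"], errors="coerce")
    validation_failures = int((record_dates > pd.Timestamp.now()).sum())
    return {
        "row_count": total,
        "completeness": round(completeness, 4),
        "duplication_rate": round(duplication_rate, 4),
        "validation_failures": validation_failures,
    }


if __name__ == "__main__":
    sample = pd.DataFrame({
        "citizen_id": ["A1", "A2", "A2", "A4"],          # hypothetical key column
        "record_date": ["2024-01-10", "2024-02-05", "2024-02-05", "2030-01-01"],
        "scheme_code": ["S01", None, "S02", "S03"],
    })
    print(data_quality_kpis(sample, key_cols=["citizen_id"],
                            required_cols=["citizen_id", "scheme_code"]))
```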

Posted 1 month ago

Apply

6.0 years

0 Lacs

Bengaluru

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Manager

Job Description & Summary: At PwC, our people in data management focus on organising and maintaining data to enable accuracy and accessibility for effective decision-making. These individuals handle data governance, quality control, and data integration to support business operations. In data governance at PwC, you will focus on establishing and maintaining policies and procedures to optimise the quality, integrity, and security of data. You will be responsible for optimising data management processes and mitigating risks associated with data usage.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes, and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences, and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

A career within Data and Analytics services will provide you with the opportunity to help organizations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organizational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organizations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Responsibilities:
Develop and implement data governance solutions using Informatica CDGC.
Configure and manage metadata ingestion, lineage, and data cataloging functionalities.
Collaborate with data stewards to define and enforce data governance policies and standards.
Design and implement data quality rules and metrics to monitor and improve data accuracy.
Integrate CDGC with other enterprise systems and data sources for seamless metadata management.
Work with business users to capture and maintain business glossaries and data dictionaries.
Conduct data profiling and analysis to support data governance initiatives.
Provide training and support to users on leveraging CDGC for data governance and cataloging.
Participate in solution design reviews, troubleshooting, and performance tuning.
Stay updated with the latest trends and best practices in data governance and cataloging.
Mandatory skill sets:
6+ years of experience in data governance and cataloging, with at least 1 year on the Informatica CDGC platform.
Proficiency in configuring and managing Informatica CDGC components.
Ability to integrate CDGC with various data sources and enterprise systems.
Experience in debugging issues and applying fixes in Informatica CDGC.
In-depth understanding of the data management landscape, including the technology landscape, standards, and best practices prevalent in data governance, metadata management, cataloging, data lineage, data quality, and data privacy.
Familiarity with data management principles and practices in DMBOK.
Experience in creating frameworks, policies, and processes.
Strong experimental mindset to drive innovation amidst uncertainty and solve problems.
Strong experience in process improvements, hands-on operational management, and change management.

Preferred skill sets:
Certifications in Data Governance or related fields (e.g., DAMA-CDMP, DCAM, CDMC) or any data governance tool certification.
Experience in other data governance tools such as Collibra, Talend, Microsoft Purview, Atlan, Solidatus, etc.
Experience in working on RFPs, internal/external POVs, Accelerators, and others.

Years of experience required: 7-10
Education qualification: B.Tech / M.Tech / MBA / MCA
Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Bachelor of Technology, Master of Business Administration
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Informatica Cloud Data Governance & Catalog (CDGC)
Optional Skills: Accepting Feedback, Active Listening, Analytical Thinking, Business Process Management (BPM), Coaching and Feedback, Communication, Corporate Governance, Creativity, Data Access Control, Database Administration, Data Governance Training, Data Processing, Data Processor, Data Quality, Data Quality Assessment, Data Quality Improvement Plans (DQIP), Data Stewardship, Data Stewardship Best Practices, Data Stewardship Frameworks, Data Warehouse Governance, Data Warehousing Optimization, Embracing Change, Emotional Regulation, Empathy {+ 22 more}
Desired Languages (if blank, desired languages not specified)
Travel Requirements
Available for Work Visa Sponsorship?
Government Clearance Required?
Job Posting End Date
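For readers unfamiliar with the data-profiling work the listing above describes, the sketch below shows column-level profiling in plain pandas. It is not the Informatica CDGC API; it is only an assumed illustration of the kind of column metadata (type, null percentage, distinct count) that typically feeds a catalog entry.

```python
# Illustrative data-profiling sketch in plain pandas (dataset and column names are invented):
# builds a simple column-level profile of the kind a data catalog entry usually carries.
import pandas as pd


def profile_columns(df: pd.DataFrame, dataset_name: str) -> pd.DataFrame:
    rows = []
    for col in df.columns:
        series = df[col]
        rows.append({
            "dataset": dataset_name,
            "column": col,
            "dtype": str(series.dtype),
            "null_pct": round(float(series.isna().mean()) * 100, 2),
            "distinct_values": int(series.nunique(dropna=True)),
            "sample_value": series.dropna().iloc[0] if series.notna().any() else None,
        })
    return pd.DataFrame(rows)


if __name__ == "__main__":
    customers = pd.DataFrame({
        "customer_id": [101, 102, 103],
        "email": ["a@example.com", None, "c@example.com"],
        "segment": ["retail", "retail", "sme"],
    })
    print(profile_columns(customers, "crm.customers"))
```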

Posted 1 month ago

Apply

4.0 years

4 - 7 Lacs

Bengaluru

On-site

Minimum Required Experience: 4 years
Full Time
Skills: ETL, Python

Description: ETL Developer
We are seeking an experienced ETL Developer to join our dynamic team. The ideal candidate will be responsible for designing and implementing ETL processes to extract, transform, and load data from various sources, including databases, APIs, and flat files.

Duties and Responsibilities:
Design and implement ETL processes to extract, transform, and load data from various sources.
Monitor and optimize ETL processes for performance and efficiency.
Document ETL processes and maintain technical specifications.

Qualifications:
4-8 years of experience in ETL development.
Proficiency in ETL tools and frameworks such as Apache NiFi, Talend, or Informatica.
Strong programming skills in Python.
Experience with data warehousing concepts and methodologies.
Preferred certifications in relevant ETL tools or data engineering.
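As a rough companion to the responsibilities above, here is a minimal Python extract-transform-load sketch. The file name, column names, and SQLite target are assumptions made for illustration, not anything specified by the employer.

```python
# Minimal extract-transform-load sketch: CSV flat file -> cleaned rows -> database table.
# All names (orders.csv, order_id, fact_orders) are hypothetical.
import sqlite3

import pandas as pd


def run_etl(source_csv: str = "orders.csv", target_db: str = "warehouse.db") -> int:
    # Extract: read the raw flat file
    raw = pd.read_csv(source_csv)

    # Transform: normalise column names, drop duplicate orders, derive a total column
    raw.columns = [c.strip().lower() for c in raw.columns]
    cleaned = raw.drop_duplicates(subset=["order_id"]).copy()
    cleaned["order_total"] = cleaned["quantity"] * cleaned["unit_price"]

    # Load: append the cleaned rows into the target table
    with sqlite3.connect(target_db) as conn:
        cleaned.to_sql("fact_orders", conn, if_exists="append", index=False)
    return len(cleaned)


if __name__ == "__main__":
    print(f"Loaded {run_etl()} rows")
```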

Posted 1 month ago

Apply

8.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Location: Mumbai

About Us: StayVista is India's largest villa hospitality brand and has redefined group getaways. Our handpicked luxury villas are present in every famous holiday destination across the country. We curate unique experiences paired with top-notch hospitality, creating unforgettable stays. Here, you will be a part of our passionate team, dedicated to crafting exceptional getaways and curating one-of-a-kind homes. We are a close-knit tribe, united by a shared love for travel and on a mission to become the most loved hospitality brand in India.

Why Work With Us? At StayVista, you're part of a community where your ideas and growth matter. We're a fast-growing team that values continuous improvement. With our skill upgrade programs, you'll keep learning and evolving, just like we do. And hey, when you're ready for a break, our villa discounts make it easy to enjoy the luxury you help create.

Your Role: As an Associate General Manager – Business Intelligence, you will lead data-driven decision-making by transforming complex datasets into strategic insights. You will optimize data pipelines, automate workflows, and integrate AI-powered solutions to enhance efficiency. Your expertise in database management, statistical analysis, and visualization will support business growth, while collaboration with leadership and cross-functional teams will drive impactful analytics strategies.

About You:
8+ years of experience in Business Intelligence, Revenue Management, or Data Analytics, with a strong ability to turn data into actionable insights.
Bachelor's or Master's degree in Business Analytics, Data Science, Computer Science, or a related field.
Skilled in designing, developing, and implementing end-to-end BI solutions to improve decision-making.
Proficient in ETL processes using SQL, Python, and R, ensuring accurate and efficient data handling.
Experienced in Google Looker Studio, Apache Superset, Power BI, and Tableau to create clear, real-time dashboards and reports.
Develop, document, and support ETL mappings, database structures, and BI reports.
Develop ETL using tools such as Pentaho/Talend or as per project requirements.
Participate in the UAT process and ensure quick resolution of any UAT or data issue.
Manage different environments and be responsible for proper deployment of reports/ETLs in all client environments.
Interact with Business and Product teams to understand and finalize functional requirements.
Responsible for timely deliverables and quality.
Skilled at analyzing industry trends and competitor data to develop effective pricing and revenue strategies.
Demonstrated understanding of data warehouse concepts, ETL concepts, ETL loading strategy, data archiving, data reconciliation, ETL error handling, error logging mechanisms, standards, and best practices.

Cross-functional Collaboration: Partner with Product, Marketing, Finance, and Operations to translate business requirements into analytical solutions.

Key Metrics: what you will drive and achieve
Data-Driven Decision Making & Business Impact
Revenue Growth & Cost Optimization
Cross-Functional Collaboration & Leadership Impact
BI & Analytics Efficiency and AI Automation Integration

Our Core Values: Are you a CURATER?
Curious: Here, your curiosity fuels innovation.
User-Centric: You'll anticipate the needs of all our stakeholders and exceed expectations.
Resourceful: You'll creatively optimise our resources with solutions that elevate experiences in unexpected ways.
Aspire: Keep learning, keep growing, because we're all about continuous improvement.
Trust: Trust is our foundation. You'll work in a transparent, reliable, and fair environment.
Enjoy: We believe in having fun while building something extraordinary.

Business Acumen: You know our services, business drivers, and industry trends inside out. You anticipate challenges in your area, weigh the impact of decisions, and track competitors to stay ahead, viewing risk as a chance to excel.
Change Management: You embrace change and actively look for opportunities to improve efficiency. You navigate ambiguity well, promote innovation within the team, and take ownership of implementing fresh ideas.
Leadership: You provide direction, delegate effectively, and empower your team to take ownership. You foster passion and pride in achieving goals, holding yourself accountable for the team's successes and failures.
Customer Centricity: You know your customers' business and proactively find solutions to resolve their challenges. By building rapport and anticipating issues, you ensure smooth, win-win interactions while keeping stakeholders in the loop.
Teamwork: You actively seek input from others, work across departments, and leverage team diversity to drive success. By fostering an open environment, you encourage constructive criticism and share knowledge to achieve team goals.
Result Orientation: You set clear goals for yourself and your team, overcoming obstacles with a positive, solution-focused mindset. You take ownership of outcomes and make informed decisions based on cost-benefit analysis.
Planning and Organizing: You analyze information systematically, prioritize tasks, and delegate effectively. You optimize processes to drive efficiency and ensure compliance with organizational standards.
Communication: You communicate with confidence and professionalism, balancing talking and listening to foster open discussions. You identify key players and use the right channels to ensure clarity and gain support.

StayVista is proud to be an equal opportunity employer. We do not discriminate in hiring or any employment decisions based on race, colour, religion, caste, creed, nationality, age, sex (including pregnancy, childbirth, or related medical conditions), marital status, ancestry, physical or mental disability, genetic information, veteran status, gender identity or expression, sexual orientation, or any other characteristic protected under applicable laws.
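The listing above mentions ETL error handling and error-logging mechanisms among the expected competencies. The sketch below, with invented step names and a trivial orchestrator, shows one common Python pattern: wrap each pipeline step so failures are logged and re-raised rather than silently dropping data.

```python
# Rough sketch (hypothetical step names) of an ETL error-handling and logging pattern:
# every step runs through a wrapper that records success or failure before re-raising.
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("etl")


def run_step(name, fn, *args, **kwargs):
    log.info("starting step: %s", name)
    try:
        result = fn(*args, **kwargs)
        log.info("finished step: %s", name)
        return result
    except Exception:
        log.exception("step failed: %s", name)
        raise  # re-raise so the orchestrator can retry or alert


if __name__ == "__main__":
    bookings = run_step("extract_bookings", lambda: [{"villa": "V1", "nights": 3}])
    run_step("load_bookings", lambda rows: print(f"loaded {len(rows)} rows"), bookings)
```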

Posted 1 month ago

Apply

5.0 - 10.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

BUSINESS ANALYST / PROGRAM MANAGER
No. of Positions: 01
Experience Required: 5 to 10 years
Position Type: C2C
Duration of Contract: 6 to 9 Months
Working Location: Mumbai (onsite)
Budget: Open

Position Overview: We are seeking a versatile and detail-oriented Business Analyst / Program Manager to support our NBFC client in accelerating the documentation of existing Qlik Sense dashboards and regulatory reports. The ideal candidate will play a pivotal role in coordinating and executing documentation efforts, ensuring technical and functional clarity, and facilitating timely sign-offs. The role also includes engaging with stakeholders and presenting regular progress updates, risks, and milestones.

Key Responsibilities:
Develop and maintain comprehensive technical and functional documentation for existing Qlik Sense dashboards and regulatory reports/dumps.
Ensure traceability of data sources, transformation logic, and business rules across platforms including Snowflake, Talend, Oracle, and Qlik.
Capture source field definitions and data lineage, acknowledging that reports may pull data from multiple systems.
Stakeholder Engagement: Conduct walkthrough sessions with Business Analysts and end-users to validate documentation. Gather feedback and obtain formal sign-off to ensure alignment with business requirements.
Weekly Presentations: Prepare and present updates on progress; highlight progress, risks, and upcoming milestones in weekly team meetings.

Experience:
5 to 10 years of experience in Business Analysis and/or Program Management.
Proven experience in documenting business analysis and intelligence reports and data platforms.
Strong knowledge of Qlik Sense, Snowflake or Talend, and Oracle.
Excellent problem-solving, analytical, and communication skills.
Strong interpersonal and collaboration abilities across cross-functional teams.

Educational Qualifications: Bachelor's degree in Computer Science, Information Technology, Engineering, Business Administration, or a related field is required.

Good-to-Have Skills (Indian NBFC Context):

1. Domain & Functional Understanding
NBFC Lending Lifecycle Knowledge: Loan origination, underwriting, disbursement, servicing, collections.
Understanding of RBI regulations, NBFC classification (deposit-taking, non-deposit-taking), Fair Practices Code, and KYC/AML norms.
Exposure to Loan Products: Personal loans, gold loans, SME loans, vehicle finance, digital lending.
Credit Bureau Data Handling: Familiarity with CIBIL/CRIF reports and score interpretation.
Retail & SME Lending Processes: Familiarity with unsecured and secured lending, underwriting, and credit scoring models.
Collections & Recovery Practices: Knowledge of early-stage and late-stage collection workflows.
Digital Lending Models: Insight into co-lending, BNPL (Buy Now Pay Later), DSA/DST models, and fintech partnerships.

2. Data & Analytics Skills
Advanced Excel: Data cleaning, formulas, pivot tables, macros for loan and risk reports.
SQL (Intermediate to Advanced): Writing efficient queries to pull customer, loan, payment, and delinquency data.
Data Visualization Tools: Power BI, Tableau, Qlik, useful for dashboards on collections, portfolio quality, etc.
Data Profiling & Quality Checks: Detecting missing, duplicate, or inconsistent loan/customer records.

3. Tools & Technologies
Experience with NBFC Systems: LOS (Loan Origination System), LMS (Loan Management System), and core NBFC platforms like FinnOne, MyFin, BRNet, Vymo, Oracle Fusion, OGL, Kiya, Fincorp, Hotfoot (sanction), core banking systems, or in-house NBFC systems.
ETL Knowledge (Good to Have): Talend, Informatica, SSIS for understanding backend data flows.
Python (Basic Scripting): For EDA (exploratory data analysis) or automating reports using pandas and NumPy.
CRM/Collection Tools Insight: Salesforce, LeadSquared, or collection platforms like Credgenics.
API/Data Integration: Understanding of how NBFCs integrate with credit bureaus (CIBIL, CRIF), Aadhaar, CKYC, bank statement analysers, etc.

4. Business Metrics & Reporting
Understanding of NBFC KPIs: NPA %, Portfolio at Risk (PAR), Days Past Due (DPD) buckets, Collection Efficiency, Bounce Rate.
Regulatory Reporting Awareness: RBI-mandated MIS reports or returns (even if not the owner, knowing the data helps).

5. Compliance, Data Privacy & Risk
Data Privacy Sensitivity: Understanding DPDP Act compliance for customer data handling.
Risk Scoring Models (Good to Have): Working knowledge of inputs used in internal credit models.

6. Project & Communication Skills
Agile Tools: JIRA, Confluence for sprint planning and requirement documentation formats.
Strong Data Storytelling: Presenting insights and trends clearly to product, risk, or operations teams.
Collaboration with Data Engineering Teams: Translate business needs into data requirements, schemas, and validations.
Stakeholder Communication: Ability to work with risk, compliance, IT, operations, and business heads.
Change Management Readiness: Supporting adoption of new systems/processes.
Presentation & Reporting: Converting findings into clear, impactful reports or dashboards.

Bonus Skills (Niche but Valuable):
Working with UPI/NACH/Account Aggregator datasets.
Knowledge of data lakes or cloud-based analytics stacks (e.g., Snowflake, AWS Redshift).
Hands-on with A/B testing or loan decisioning analytics.
Familiarity with AI/ML usage in loan decisioning.
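To make the NBFC KPI terminology above concrete, here is a small pandas example computing DPD buckets and collection efficiency. The column names, bucket cut-offs, and sample figures are invented for illustration and do not reflect any client's actual definitions.

```python
# Hedged illustration of DPD bucketing and collection-efficiency KPIs
# (assumed column names and bucket boundaries).
import pandas as pd


def dpd_bucket(days_past_due: int) -> str:
    # Commonly used buckets; actual cut-offs vary by lender and policy.
    if days_past_due == 0:
        return "Current"
    if days_past_due <= 30:
        return "1-30"
    if days_past_due <= 60:
        return "31-60"
    if days_past_due <= 90:
        return "61-90"
    return "90+"


loans = pd.DataFrame({
    "loan_id": ["L1", "L2", "L3", "L4"],
    "dpd": [0, 12, 45, 120],
    "emi_due": [10000, 8000, 12000, 9000],
    "emi_collected": [10000, 8000, 6000, 0],
})

loans["dpd_bucket"] = loans["dpd"].apply(dpd_bucket)
collection_efficiency = loans["emi_collected"].sum() / loans["emi_due"].sum() * 100

print(loans[["loan_id", "dpd_bucket"]])
print(f"Collection efficiency: {collection_efficiency:.1f}%")
```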

Posted 1 month ago

Apply