5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Ciena is committed to our people-first philosophy. Our teams enjoy a culture focused on prioritizing a personalized and flexible work environment that empowers an individual’s passions, growth, wellbeing and belonging. We’re a technology company that leads with our humanity—driving our business priorities alongside meaningful social, community, and societal impact.

How You Will Contribute
As the CISO & Executive Metrics and Reporting Analyst, you will report directly to the Chief Information Security Officer (CISO) and play a pivotal role in shaping and communicating the security posture of the organization. You will be responsible for developing and managing a comprehensive security metrics and reporting framework that supports executive decision-making and regulatory compliance.

Key Responsibilities
Define, track, and analyze key performance and risk indicators (KPIs/KRIs) aligned with security goals and frameworks (e.g., NIST, ISO 27001). Deliver regular and ad-hoc executive-level reports and dashboards that translate complex security data into actionable insights. Collect and analyze data from SIEM systems, security tools, and incident reports to support risk management and strategic planning. Collaborate with IT, compliance, and business units to align on metrics and reporting requirements. Continuously improve reporting processes and stay current with cybersecurity trends and best practices.

The Must Haves
Education: Bachelor’s degree in Computer Science, Information Systems, Cybersecurity, or a related field. A Master’s degree is a plus.
Experience: Minimum 5 years in cybersecurity metrics and reporting, preferably in an executive-facing role. Experience with data visualization tools (e.g., Power BI, Tableau, Excel). Familiarity with SIEM systems (e.g., Splunk) and cybersecurity frameworks (e.g., NIST, ISO 27001). Proficiency in SQL and experience with Snowflake for data warehousing. Strong analytical skills with the ability to interpret complex data sets. Experience with ETL processes and Python scripting is a plus. Excellent written and verbal communication skills, with the ability to present to non-technical stakeholders.

Assets
Relevant certifications such as CISSP, CISM, or CRISC. Experience working in cross-functional teams and influencing stakeholders. Strategic thinking and adaptability to evolving security threats and technologies. Strong attention to detail and a proactive approach to problem-solving. Passion for continuous improvement and innovation in cybersecurity reporting.

Not ready to apply? Join our Talent Community to get relevant job alerts straight to your inbox.

At Ciena, we are committed to building and fostering an environment in which our employees feel respected, valued, and heard. Ciena values the diversity of its workforce and respects its employees as individuals. We do not tolerate any form of discrimination. Ciena is an Equal Opportunity Employer, including disability and protected veteran status. If contacted in relation to a job opportunity, please advise Ciena of any accommodation measures you may require.
Posted 1 day ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Role: Senior Databricks Engineer / Databricks Technical Lead / Data Architect
Location: Bangalore, Chennai, Delhi, Pune, Kolkata

Primary Roles And Responsibilities
Develop Modern Data Warehouse solutions using Databricks and the AWS/Azure stack. Provide forward-thinking solutions in the data engineering and analytics space. Collaborate with DW/BI leads to understand new ETL pipeline development requirements. Triage issues to find gaps in existing pipelines and fix them. Work with the business to understand reporting-layer needs and develop data models to fulfill them. Help junior team members resolve issues and technical challenges. Drive technical discussions with the client architect and team members. Orchestrate data pipelines via the Airflow scheduler.

Skills And Qualifications
Bachelor's and/or master’s degree in computer science or equivalent experience. Must have 6+ years of total IT experience and 3+ years' experience in data warehouse/ETL projects. Deep understanding of Star and Snowflake dimensional modelling. Strong knowledge of data management principles. Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture. Hands-on experience in SQL, Python and Spark (PySpark). Must have experience in the AWS/Azure stack. ETL with batch and streaming (Kinesis) is desirable. Experience in building ETL/data warehouse transformation processes. Experience with Apache Kafka for streaming/event-based data. Experience with other open-source big data products, including Hadoop (incl. Hive, Pig, Impala). Experience with open-source non-relational/NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j). Experience working with structured and unstructured data, including imaging and geospatial data. Experience working in a DevOps environment with tools such as Terraform, CircleCI, and Git. Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning and troubleshooting. Databricks Certified Data Engineer Associate/Professional certification (desirable). Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects. Experience working in Agile methodology. Strong verbal and written communication skills. Strong analytical and problem-solving skills with high attention to detail.

Mandatory Skills: Python / PySpark / Spark with Azure/AWS Databricks
Skills: neo4j, pig, mongodb, pl/sql, architect, terraform, hadoop, pyspark, impala, apache kafka, adfs, etl, data warehouse, spark, azure, databricks, rdbms, cassandra, aws, unix shell scripting, circleci, python, azure synapse, hive, git, kinesis, sql
Posted 1 day ago
3.0 years
0 Lacs
Greater Kolkata Area
On-site
Role: Senior Databricks Engineer / Databricks Technical Lead / Data Architect
Location: Bangalore, Chennai, Delhi, Pune, Kolkata

Primary Roles And Responsibilities
Develop Modern Data Warehouse solutions using Databricks and the AWS/Azure stack. Provide forward-thinking solutions in the data engineering and analytics space. Collaborate with DW/BI leads to understand new ETL pipeline development requirements. Triage issues to find gaps in existing pipelines and fix them. Work with the business to understand reporting-layer needs and develop data models to fulfill them. Help junior team members resolve issues and technical challenges. Drive technical discussions with the client architect and team members. Orchestrate data pipelines via the Airflow scheduler.

Skills And Qualifications
Bachelor's and/or master’s degree in computer science or equivalent experience. Must have 6+ years of total IT experience and 3+ years' experience in data warehouse/ETL projects. Deep understanding of Star and Snowflake dimensional modelling. Strong knowledge of data management principles. Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture. Hands-on experience in SQL, Python and Spark (PySpark). Must have experience in the AWS/Azure stack. ETL with batch and streaming (Kinesis) is desirable. Experience in building ETL/data warehouse transformation processes. Experience with Apache Kafka for streaming/event-based data. Experience with other open-source big data products, including Hadoop (incl. Hive, Pig, Impala). Experience with open-source non-relational/NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j). Experience working with structured and unstructured data, including imaging and geospatial data. Experience working in a DevOps environment with tools such as Terraform, CircleCI, and Git. Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning and troubleshooting. Databricks Certified Data Engineer Associate/Professional certification (desirable). Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects. Experience working in Agile methodology. Strong verbal and written communication skills. Strong analytical and problem-solving skills with high attention to detail.

Mandatory Skills: Python / PySpark / Spark with Azure/AWS Databricks
Skills: neo4j, pig, mongodb, pl/sql, architect, terraform, hadoop, pyspark, impala, apache kafka, adfs, etl, data warehouse, spark, azure, databricks, rdbms, cassandra, aws, unix shell scripting, circleci, python, azure synapse, hive, git, kinesis, sql
Posted 1 day ago
7.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Role: GCP Database Migration Lead
Required Technical Skill Set: GCP Database Migration Lead (Non-Oracle)
Desired Experience Range: 8-10 yrs
Location of Requirement: Kolkata/Delhi
Notice period: Immediate

Job Description:
7+ years of experience in database engineering or administration. 3+ years of experience leading cloud-based database migrations, preferably to GCP. Deep knowledge of traditional RDBMS (MS SQL Server, MariaDB, Oracle, MySQL). Strong hands-on experience with GCP database offerings (Cloud SQL, Spanner, BigQuery, Firestore, etc.). Experience with schema conversion tools and strategies (e.g., DMS, SCT, custom ETL). Solid SQL expertise and experience with data profiling, transformation, and validation. Familiarity with IaC tools like Terraform and integration with CI/CD pipelines. Strong problem-solving and communication skills.
Posted 1 day ago
15.0 years
0 Lacs
Greater Hyderabad Area
On-site
HCL Job Level: DGM - Data Management (Centre of Excellence)
Domain: Multi Tower
Role: Center of Excellence (Data Management)
Role Location: Hyderabad (Noida or Chennai as secondary location)
Positions: 1
Experience: 15+ years

Job Profile
Support the Global Shared Services Strategy for Multi Tower Finance (P2P, O2C, R2R and FP&A) and Procurement tracks. Understand all processes in detail, including their inter-dependence, the current technology landscape and the organization structure. Ensure end-to-end data lifecycle management including ingestion, transformation, storage, and consumption, while maintaining data reliability, accuracy, and availability across enterprise systems, with a strong focus on the Enterprise Data Platform (EDP) as the central data repository. Collaborate with cross-functional teams to understand data requirements, identify gaps, and implement scalable solutions. Define and enforce data quality standards, validation rules, and monitoring mechanisms, while leading the architecture and deployment of scalable, fault-tolerant, and high-performance data pipelines to ensure consistent and trustworthy data delivery. Partner with IT and business teams to define and implement data access controls, ensuring compliance with data privacy and security regulations (e.g., GDPR, HIPAA). Understand governance and interaction models with client SMEs and drive discussions on project deliverables. Collaborate with business stakeholders to define data SLAs (Service Level Agreements) and ensure adherence through proactive monitoring and alerting. Act as a bridge between business and IT, translating business needs into technical solutions and ensuring alignment with strategic goals. Establish and maintain metadata management practices, including data lineage, cataloging, and business glossary development. Propose feasible solutions, both interim and long term, to resolve problem statements and address key priorities; solutioning must be at a strategic level and at the L2/L3 level. Drive alignment of processes, people, technology and best practices, thereby enabling optimization, breaking silos, eliminating redundant methods and standardizing processes and controls across the entire engagement on data management. Identify process variations across regions and businesses and evaluate standardization opportunities by defining golden processes for data collection and data management.

Required Profile/Experience
Deep understanding of all Finance towers and Procurement. Strong understanding of data management principles, data architecture, and data governance. Understanding of and hands-on experience with data integration tools, ETL/ELT processes, and cloud-based data platforms. Proven track record in managing tool integrations and ensuring accurate, high-performance data flow, with strong expertise in data quality frameworks, monitoring tools, performance optimization techniques, and a solid foundation in data modeling, metadata management, and master data management (MDM) concepts. Leadership capability: relevant leadership experience in running large delivery operations and driving multiple enterprise-level initiatives and programs with high business impact. BPO experience: candidates should have relevant experience in BPO services, especially in the Americas.
Transformation: should have led and delivered at least 2-3 data transformation projects involving application integrations and master data management. Tools and industry benchmarks: should have knowledge of industry-wide trends on F&A tools, platforms and benchmarks (Azure Data Lake, AWS, GCP). Customer-facing skills: should be proficient in leading meetings and presentations with customers using strong product-level material.

Education Requirement
B.E./B.Tech/MCA or equivalent in Computer Science, Information Systems, or a related field. Certifications in data management tools or platforms (e.g., Informatica, Talend, Azure Data Engineer, etc.) are preferred.
Posted 1 day ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Us
Location - Hyderabad, India
Department - Product R&D
Level - Professional
Working Pattern - Work from office
Benefits - Benefits At Ideagen
DEI - DEI strategy
Salary - this will be discussed at the next stage of the process; if you have any questions, please feel free to reach out!

We are seeking an experienced Data Engineer with strong problem-solving and analytical skills, high attention to detail, a passion for analytics, real-time data and monitoring, and strong critical thinking and collaboration skills. The candidate should be a self-starter and a quick learner, ready to learn new technologies and tools that the job demands.

Responsibilities
Building automated pipelines and solutions for data migration/data import or other operations requiring data ETL. Performing analysis on core products to support migration planning and development. Working closely with the Team Lead and collaborating with other stakeholders to gather requirements and build well-architected data solutions. Producing supporting documentation, such as specifications, data models, relationships between data and others, required for the effective development, usage and communication of the data operations solutions with different stakeholders.

Competencies, Characteristics And Traits
Mandatory Skills - Minimum 3 years of experience with SnapLogic pipeline development and a minimum of 2 years building ETL/ELT pipelines. Experience working with databases in on-premises and/or cloud-based environments such as MSSQL, MySQL, PostgreSQL, Azure SQL, Aurora MySQL & PostgreSQL, AWS RDS, etc. Experience working with API sources and destinations.

Essential Skills and Experience
Strong experience working with databases in on-premises and/or cloud-based environments such as MSSQL, MySQL, PostgreSQL, Azure SQL, Aurora MySQL & PostgreSQL, AWS RDS, etc. Strong knowledge of databases, data modeling and the data life cycle. Proficient in understanding data and writing complex SQL. Experience working with REST APIs in data pipelines. Strong problem solving and high attention to detail. Passion for analytics, real-time data, and monitoring. Critical thinking, good communication and collaboration skills. Focus on high performance and quality delivery. Highly self-motivated and a continuous learner.

Desirable
Experience working with NoSQL databases like MongoDB. Experience with SnapLogic administration is preferable. Experience working with Microsoft Power Platform (Power Automate and Power Apps) or any similar automation/RPA tool. Experience with cloud data platforms like Snowflake, Databricks, AWS, Azure, etc. Awareness of emerging ETL and cloud concepts such as Amazon AWS or Microsoft Azure. Experience working with scripting languages such as Python, R, JavaScript, etc.

About Ideagen
Ideagen is the invisible force behind many things we rely on every day - from keeping airplanes soaring in the sky, to ensuring the food on our tables is safe, to helping doctors and nurses care for the sick. So, when you think of Ideagen, think of it as the silent teammate that's always working behind the scenes to help those people who make our lives safer and better. Every day, millions of people are kept safe using Ideagen software. We have offices all over the world including America, Australia, Malaysia and India, with people doing lots of different and exciting jobs.

What is next?
If your application meets the requirements for this role, our Talent Acquisition team will be in touch to guide you through the next steps. To ensure a flexible and inclusive process, please let us know if you require any reasonable adjustments by contacting us at recruitment@ideagen.com. All matters will be treated with strict confidence. At Ideagen, we value the importance of work-life balance and welcome candidates seeking flexible or part-time working arrangements. If this is something you are interested in, please let us know during the application process. Enhance your career and make the world a safer place!
Posted 1 day ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Technical Project Manager
Location: Hyderabad
Employment Type: Full-time
Experience: 10+ years
Domain: Banking and Insurance

We are seeking a Technical Project Manager to lead and coordinate the delivery of data-centric projects. This role bridges the gap between engineering teams and business stakeholders, ensuring the successful execution of technical initiatives, particularly in data infrastructure, pipelines, analytics, and platform integration.

Responsibilities:
Lead end-to-end project management for data-driven initiatives, including planning, execution, delivery, and stakeholder communication. Work closely with data engineers, analysts, and software developers to ensure technical accuracy and timely delivery of projects. Translate business requirements into technical specifications and work plans. Manage project timelines, risks, resources, and dependencies using Agile, Scrum, or Kanban methodologies. Drive the development and maintenance of scalable ETL pipelines, data models, and data integration workflows. Oversee code reviews and ensure adherence to data engineering best practices. Provide hands-on support in Python-based development or debugging when necessary. Collaborate with cross-functional teams including Product, Data Science, DevOps, and QA. Track project metrics and prepare progress reports for stakeholders.

Requirements
Required Qualifications: Bachelor’s or master’s degree in Computer Science, Information Systems, Engineering, or a related field. 10+ years of experience in project management or technical leadership roles. Strong understanding of modern data architectures (e.g., data lakes, warehousing, streaming). Experience working with cloud platforms like AWS, GCP, or Azure. Familiarity with tools such as JIRA, Confluence, Git, and CI/CD pipelines. Strong communication and stakeholder management skills.

Benefits
Company standard benefits.
Posted 1 day ago
6.0 years
0 Lacs
India
Remote
Job Title: Azure Data Engineer (6 Years Experience)
Location: Remote
Employment Type: Full-time
Experience Required: 6 years

Job Summary:
We are seeking an experienced Azure Data Engineer to join our data team. The ideal candidate will have strong expertise in designing and implementing scalable data solutions on Microsoft Azure, with a solid foundation in data integration, data warehousing, and ETL processes.

Key Responsibilities:
Design, build, and manage scalable data pipelines and data integration solutions in Azure. Develop and optimize data lake and data warehouse solutions using Azure Data Lake, Azure Synapse, and Azure SQL. Create ETL/ELT processes using Azure Data Factory. Implement data modeling and data architecture best practices. Collaborate with data analysts, data scientists, and business stakeholders to deliver reliable and efficient data solutions. Monitor and troubleshoot data pipelines for performance and reliability. Ensure data security, privacy, and compliance with organizational standards. Automate data workflows and optimize performance using cloud-native tools.

Required Skills:
6 years of experience in data engineering roles. Strong experience with Azure Data Factory, Azure Synapse Analytics, Azure Data Lake, and Azure SQL. Proficiency in SQL, T-SQL, and scripting languages (Python or PowerShell). Hands-on experience with data ingestion, transformation, and orchestration. Good understanding of data modeling (dimensional, star/snowflake schema). Experience with version control (Git) and CI/CD in data projects. Familiarity with Azure DevOps and monitoring tools.

Preferred Skills:
Experience with Databricks or Spark on Azure. Knowledge of data governance and metadata management tools. Understanding of big data technologies and architecture. Microsoft Certified: Azure Data Engineer Associate (preferred).
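For illustration only (not part of the posting): a minimal PySpark sketch of the kind of ETL/ELT step this role describes, reading raw files, applying a simple transformation, and writing a partitioned curated table. The paths, column names, and session setup are assumptions for the example; in Azure such a step would typically run as a Databricks or Synapse Spark activity inside an Azure Data Factory pipeline, with paths pointing at ADLS Gen2.

```python
from pyspark.sql import SparkSession, functions as F

# Assumed local session; on Azure Databricks / Synapse Spark a session is provided.
spark = SparkSession.builder.appName("orders_etl_example").getOrCreate()

# Hypothetical raw-zone path, read with schema-on-read.
raw = spark.read.option("header", True).csv("/lake/raw/orders/*.csv")

# Simple cleansing/transformation: typed columns, de-duplication, derived load date.
cleaned = (
    raw.withColumn("order_amount", F.col("order_amount").cast("double"))
       .withColumn("order_date", F.to_date("order_date"))
       .dropDuplicates(["order_id"])
       .withColumn("load_date", F.current_date())
)

# Write to a curated zone, partitioned by order_date (Parquet used for portability).
cleaned.write.mode("overwrite").partitionBy("order_date").parquet("/lake/curated/orders")
```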
Posted 1 day ago
7.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
Company Description
Renesas is one of the top global semiconductor companies in the world. We strive to develop a safer, healthier, greener, and smarter world, and our goal is to make every endpoint intelligent by offering product solutions in the automotive, industrial, infrastructure and IoT markets. Our robust product portfolio includes world-leading MCUs, SoCs, Analog and power products, plus Winning Combination solutions that curate these complementary products. We are a key supplier to the world’s leading manufacturers of the electronics you rely on every day; you may not see our products, but they are all around you. Renesas employs roughly 21,000 people in more than 30 countries worldwide. As a global team, our employees actively embody the Renesas Culture, our guiding principles based on five key elements: Transparent, Agile, Global, Innovative, and Entrepreneurial. Renesas believes in, and has a commitment to, diversity and inclusion, with initiatives and a leadership team dedicated to its resources and values. At Renesas, we want to build a sustainable future where technology helps make our lives easier. Join us and build your future by being part of what’s next in electronics and the world.

Job Description (Database Analyst) - Sr Sales Operations Analyst

Job Summary
We are looking for database analysts with 7 years’ experience to grow the sales analytical and reporting team. The Analyst will work in the Global Sales - Centralized Analytics and Reporting Ops team to provide ongoing support of our business intelligence tools and applications. The database specialist will focus on backend database development in Databricks, Oracle, and SQL Server. The candidate must be able to develop and modify notebooks, procedures, packages, and functions in the database environment, and should be able to create jobs in Databricks. Knowledge of Python is desired. Very strong skills in SQL, analytical queries, and procedural processing are required, along with strong ETL skills and experience transferring data between multiple systems. Must be able to independently handle ad hoc user data requests and handle production issues in the data warehouse and reporting environment. Good knowledge of Excel preferred. Knowledge of Power BI and the DAX language preferred. The candidate will focus on designing effective reporting solutions that are scalable, repeatable, and meet the needs of the business users; developing pipelines for data integration and aggregation; maintaining documentation; and accommodating ad-hoc user requests. This role will align with cross-functional groups such as IT, the Regional Distribution Team, Regional Sales Ops, Business Units, and Finance.

Qualifications
Responsibilities: Proficient in relational databases (Databricks, SQL Server, Oracle). Proficient in SQL and able to modify procedures and notebooks in Databricks, Oracle, and SQL Server. Proficient in advanced Excel features. Ability to debug and modify existing Power BI dashboards. Performing ad-hoc reporting to support the business and help in data-driven decision making. Excellent problem-solving abilities and communication skills. Must be willing to work independently and be an excellent team player. Must be willing to support systems after regular work hours.

Additional Information
Renesas is an embedded semiconductor solution provider driven by its Purpose ‘To Make Our Lives Easier.’ As the industry’s leading expert in embedded processing with unmatched quality and system-level know-how, we have evolved to provide scalable and comprehensive semiconductor solutions for automotive, industrial, infrastructure, and IoT industries based on the broadest product portfolio, including High Performance Computing, Embedded Processing, Analog & Connectivity, and Power. With a diverse team of over 21,000 professionals in more than 30 countries, we continue to expand our boundaries to offer enhanced user experiences through digitalization and usher in a new era of innovation. We design and develop sustainable, power-efficient solutions today that help people and communities thrive tomorrow, ‘To Make Our Lives Easier.’

At Renesas, You Can
Launch and advance your career in technical and business roles across four Product Groups and various corporate functions. You will have the opportunities to explore our hardware and software capabilities and try new things. Make a real impact by developing innovative products and solutions to meet our global customers' evolving needs and help make people’s lives easier, safe and secure. Maximize your performance and wellbeing in our flexible and inclusive work environment. Our people-first culture and global support system, including the remote work option and Employee Resource Groups, will help you excel from the first day. Are you ready to own your success and make your mark? Join Renesas. Let’s Shape the Future together.

Renesas Electronics is an equal opportunity and affirmative action employer, committed to supporting diversity and fostering a work environment free of discrimination on the basis of sex, race, religion, national origin, gender, gender identity, gender expression, age, sexual orientation, military status, veteran status, or any other basis protected by law. For more information, please read our Diversity & Inclusion Statement.
Posted 1 day ago
5.0 years
0 Lacs
Ahmedabad, Gujarat
On-site
Job Title: Dot Net Developer
Location: Gujarat
Experience Required: Minimum 5 years post-qualification
Employment Type: Full-Time
Department: IT / Software Development

Key Responsibilities:
Design, develop, and maintain scalable and secure client-server and distributed web applications using Microsoft .NET technologies. Collaborate with cross-functional teams (analysts, testers, developers) to implement project requirements. Ensure adherence to architectural and coding standards and apply best practices in .NET stack development. Integrate applications with third-party libraries and RESTful APIs for seamless data sharing. Develop and manage robust SQL queries, stored procedures, views, and functions using MS SQL Server. Implement SQL Server features such as replication techniques, Always On, and database replication. Develop and manage ETL workflows, SSIS packages, and SSRS reports. (Preferred) Develop OLAP solutions for advanced data analytics. Participate in debugging and troubleshooting complex issues to deliver stable software solutions. Support IT application deployment and ensure smooth post-implementation functioning. Take ownership of assigned tasks and respond to changing project needs and timelines. Quickly adapt to and learn new tools, frameworks, and technologies as required.

Technical Skills Required:
.NET Framework (4.0/3.5/2.0), C#, ASP.NET, MVC. Bootstrap, jQuery, HTML/CSS. Multi-layered architecture design. Experience with RESTful APIs and third-party integrations. MS SQL Server – advanced SQL, replication, SSIS, SSRS. Exposure to ETL and OLAP (added advantage).

Soft Skills:
Excellent problem-solving and debugging abilities. Strong team collaboration and communication skills. Ability to work under pressure and meet deadlines. Proactive learner with a willingness to adopt new technologies.

Job Types: Full-time, Permanent
Pay: ₹60,000.00 - ₹90,000.00 per month
Benefits: Flexible schedule, Provident Fund
Location Type: In-person
Schedule: Fixed shift
Experience: .NET: 5 years (Required)
Location: Ahmedabad, Gujarat (Required)
Work Location: In person
Speak with the employer: +91 7888499500
Posted 1 day ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Role: Senior Databricks Engineer / Databricks Technical Lead / Data Architect
Location: Bangalore, Chennai, Delhi, Pune, Kolkata

Primary Roles And Responsibilities
Develop Modern Data Warehouse solutions using Databricks and the AWS/Azure stack. Provide forward-thinking solutions in the data engineering and analytics space. Collaborate with DW/BI leads to understand new ETL pipeline development requirements. Triage issues to find gaps in existing pipelines and fix them. Work with the business to understand reporting-layer needs and develop data models to fulfill them. Help junior team members resolve issues and technical challenges. Drive technical discussions with the client architect and team members. Orchestrate data pipelines via the Airflow scheduler.

Skills And Qualifications
Bachelor's and/or master’s degree in computer science or equivalent experience. Must have 6+ years of total IT experience and 3+ years' experience in data warehouse/ETL projects. Deep understanding of Star and Snowflake dimensional modelling. Strong knowledge of data management principles. Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture. Hands-on experience in SQL, Python and Spark (PySpark). Must have experience in the AWS/Azure stack. ETL with batch and streaming (Kinesis) is desirable. Experience in building ETL/data warehouse transformation processes. Experience with Apache Kafka for streaming/event-based data. Experience with other open-source big data products, including Hadoop (incl. Hive, Pig, Impala). Experience with open-source non-relational/NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j). Experience working with structured and unstructured data, including imaging and geospatial data. Experience working in a DevOps environment with tools such as Terraform, CircleCI, and Git. Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning and troubleshooting. Databricks Certified Data Engineer Associate/Professional certification (desirable). Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects. Experience working in Agile methodology. Strong verbal and written communication skills. Strong analytical and problem-solving skills with high attention to detail.

Mandatory Skills: Python / PySpark / Spark with Azure/AWS Databricks
Skills: neo4j, pig, mongodb, pl/sql, architect, terraform, hadoop, pyspark, impala, apache kafka, adfs, etl, data warehouse, spark, azure, databricks, rdbms, cassandra, aws, unix shell scripting, circleci, python, azure synapse, hive, git, kinesis, sql
Posted 1 day ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
We are looking for an SSIS Developer with experience in maintaining ETL solutions using Microsoft SQL Server Integration Services (SSIS). The candidate should have extensive hands-on experience in data migration, data transformation, and integration workflows between multiple systems, with exposure to Oracle Cloud Infrastructure (OCI) preferred.
No. of Resources Required: 2 (one resource with 5+ years' experience and one with 3+ years' experience). Please find below the JD for the data migration role requirement.

Job Description:
We are looking for a highly skilled and experienced Senior SSIS Developer to design, develop, deploy, and maintain ETL solutions using Microsoft SQL Server Integration Services (SSIS). The candidate should have extensive hands-on experience in data migration, data transformation, and integration workflows between multiple systems, including preferred exposure to Oracle Cloud Infrastructure (OCI).

Job Location: Corporate Office, Gurgaon

Key Responsibilities:
Design, develop, and maintain complex SSIS packages for ETL processes across different environments. Perform end-to-end data migration from legacy systems to modern platforms, ensuring data quality, integrity, and performance. Work closely with business analysts and data architects to understand data integration requirements. Optimize ETL workflows for performance and reliability, including incremental loads, batch processing, and error handling. Schedule and automate SSIS packages using SQL Server Agent or other tools. Conduct root cause analysis and provide solutions for data-related issues in production systems. Develop and maintain technical documentation, including data mapping, transformation logic, and process flow diagrams. Support integration of data between on-premises systems and Oracle Cloud (OCI) using SSIS and/or other middleware tools. Participate in code reviews, unit testing, and deployment support.

Education: Bachelor’s degree in Computer Science, Information Technology, or a related field (or equivalent practical experience).

Required Skills:
3-7 years of hands-on experience in developing SSIS packages for complex ETL workflows. Strong SQL/T-SQL skills for querying, data manipulation, and performance tuning. Solid understanding of data migration principles, including historical data load, data validation, and reconciliation techniques. Experience working with various source/target systems such as flat files, Excel, Oracle, DB2, SQL Server, etc. Good knowledge of job scheduling and automation techniques.

Preferred Skills:
Exposure to or working experience with Oracle Cloud Infrastructure (OCI), especially in data transfer, integration, and schema migration. Familiarity with on-premises-to-cloud and cloud-to-cloud data integration patterns. Knowledge of Azure Data Factory, Informatica, or other ETL tools is a plus. Experience in .NET or C# for custom script components in SSIS is advantageous. Understanding of data warehousing and data lake concepts.

If interested, kindly reply with your resume along with the details below to amit.ranjan@binarysemantics.com
Total Experience:
Years of experience in SSIS development:
Years of experience in maintaining ETL solutions using SSIS:
Years of experience in data migration/data transformation and integration workflows between multiple systems:
Years of experience in Oracle Cloud Infrastructure (OCI):
Current location:
Home town:
Reason for change:
Minimum joining time:

Regards,
Amit Ranjan
Posted 1 day ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Role: Senior Databricks Engineer / Databricks Technical Lead / Data Architect
Location: Bangalore, Chennai, Delhi, Pune, Kolkata

Primary Roles And Responsibilities
Develop Modern Data Warehouse solutions using Databricks and the AWS/Azure stack. Provide forward-thinking solutions in the data engineering and analytics space. Collaborate with DW/BI leads to understand new ETL pipeline development requirements. Triage issues to find gaps in existing pipelines and fix them. Work with the business to understand reporting-layer needs and develop data models to fulfill them. Help junior team members resolve issues and technical challenges. Drive technical discussions with the client architect and team members. Orchestrate data pipelines via the Airflow scheduler.

Skills And Qualifications
Bachelor's and/or master’s degree in computer science or equivalent experience. Must have 6+ years of total IT experience and 3+ years' experience in data warehouse/ETL projects. Deep understanding of Star and Snowflake dimensional modelling. Strong knowledge of data management principles. Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture. Hands-on experience in SQL, Python and Spark (PySpark). Must have experience in the AWS/Azure stack. ETL with batch and streaming (Kinesis) is desirable. Experience in building ETL/data warehouse transformation processes. Experience with Apache Kafka for streaming/event-based data. Experience with other open-source big data products, including Hadoop (incl. Hive, Pig, Impala). Experience with open-source non-relational/NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j). Experience working with structured and unstructured data, including imaging and geospatial data. Experience working in a DevOps environment with tools such as Terraform, CircleCI, and Git. Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning and troubleshooting. Databricks Certified Data Engineer Associate/Professional certification (desirable). Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects. Experience working in Agile methodology. Strong verbal and written communication skills. Strong analytical and problem-solving skills with high attention to detail.

Mandatory Skills: Python / PySpark / Spark with Azure/AWS Databricks
Skills: neo4j, pig, mongodb, pl/sql, architect, terraform, hadoop, pyspark, impala, apache kafka, adfs, etl, data warehouse, spark, azure, databricks, rdbms, cassandra, aws, unix shell scripting, circleci, python, azure synapse, hive, git, kinesis, sql
Posted 1 day ago
2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Role Overview
We are seeking a highly skilled and motivated Data Engineer (Gurgaon) with 2+ years of experience. If you're passionate about coding, problem-solving, and innovation, we'd love to hear from you!

About Us
CodeVyasa is a mid-sized product engineering company that works with top-tier product/solutions companies such as McKinsey, Walmart, RazorPay, Swiggy, and others. We are about 550+ people strong and we cater to Product & Data Engineering use cases around Agentic AI, RPA, full-stack and various other GenAI areas.

Key Responsibilities:
Design, build, and manage scalable data pipelines using Azure Data Factory and PySpark. Lead data warehousing and lakehouse architecture initiatives to support advanced analytics and BI use cases. Collaborate with stakeholders to understand data requirements and translate them into technical solutions. Build and maintain insightful dashboards and reports using Power BI. Mentor junior team members and provide technical leadership across data projects. Ensure best practices in data governance, quality, and security.

Must-Have Skills:
2–7 years of experience in data engineering and analytics. Strong hands-on experience with Azure Data Factory, PySpark, and Power BI. Deep understanding of data warehousing concepts and data lakehouse architecture. Proficient in data modeling, ETL/ELT processes, and performance tuning. Strong problem-solving and communication skills.

Why Join CodeVyasa?
Work on innovative, high-impact projects with a team of top-tier professionals. Continuous learning opportunities and professional growth. Flexible work environment with a supportive company culture. Competitive salary and comprehensive benefits package. Free healthcare coverage. Here's a glimpse of what life at CodeVyasa looks like: Life at CodeVyasa.
Posted 1 day ago
2.0 - 5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Be a part of India’s largest and most admired news network! Network18 is India's most diversified media company in the fast-growing media market. The Company has a strong heritage and a strong presence in the magazine, television and internet domains. Our brands like CNBC, Forbes and Moneycontrol are market leaders in their respective segments. The Company has over 7,000 employees across all major cities in India and has consistently managed to stay ahead of the growth curve of the industry. Network18 brings together employees from varied backgrounds under one roof, united by the hunger to create immersive content and ideas. We take pride in our people, who we believe are the key to realizing the organization’s potential. We continually strive to enable our employees to realize their own goals by providing opportunities to learn, share and grow.

Role Overview:
We are seeking a passionate and skilled Data Scientist with over a year of experience to join our dynamic team. You will be instrumental in developing and deploying machine learning models, building robust data pipelines, and translating complex data into actionable insights. This role offers the opportunity to work on cutting-edge projects involving NLP, Generative AI, data automation, and cloud technologies to drive business value.

Key Responsibilities:
Design, develop, and deploy machine learning models, with a strong focus on NLP (including advanced techniques and Generative AI) and other AI applications. Build, maintain, and optimize ETL pipelines for automated data ingestion, transformation, and standardization from various sources. Work extensively with SQL for data extraction, manipulation, and analysis in environments like BigQuery. Develop solutions using Python and relevant data science/ML libraries (Pandas, NumPy, Hugging Face Transformers, etc.). Utilize Google Cloud Platform (GCP) services for data storage, processing, and model deployment. Create and maintain interactive dashboards and reporting tools (e.g., Power BI) to present insights to stakeholders. Apply basic Docker concepts for containerization and deployment of applications. Collaborate with cross-functional teams to understand business requirements and deliver data-driven solutions. Stay abreast of the latest advancements in AI/ML and NLP best practices.

Required Qualifications & Skills:
2 to 5 years of hands-on experience as a Data Scientist or in a similar role. Solid understanding of machine learning fundamentals, algorithms, and best practices. Proficiency in Python and relevant data science libraries. Good SQL skills for complex querying and data manipulation. Demonstrable experience with Natural Language Processing (NLP) techniques, including advanced models (e.g., transformers) and familiarity with Generative AI concepts and applications. Excellent problem-solving and analytical skills. Strong communication and collaboration skills.

Preferred Qualifications & Skills:
Familiarity and hands-on experience with Google Cloud Platform (GCP) services, especially BigQuery, Cloud Functions, and Vertex AI. Basic understanding of Docker and containerization for deploying applications. Experience with dashboarding tools like Power BI and building web applications with Streamlit. Experience with web scraping tools and techniques (e.g., BeautifulSoup, Scrapy, Selenium). Knowledge of data warehousing concepts and schema design. Experience in designing and building ETL pipelines.

Disclaimer: Please note that Network18 and related group companies do not use the services of vendors or agents for recruitment. Please beware of such agents or vendors claiming to provide assistance. Network18 will not be responsible for any losses incurred. “We correspond only from our official email address”
Posted 1 day ago
2.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Greetings from Infosys BPM Ltd.!
Exclusive Women's Walk-in Drive
We are hiring for Content and Technical Writer, ETL DB Testing, ETL Testing Automation, .NET, and Python Developer skills. Please walk in for the interview on 20th June 2025 at the Chennai location.
Note: Please carry a copy of this email to the venue and make sure you register your application before attending the walk-in. Please use the link below to apply and register your application. Please mention the Candidate ID on top of the resume. ***
https://career.infosys.com/jobdesc?jobReferenceCode=PROGEN-HRODIRECT-215140

Interview details
Interview Date: 20th June 2025
Interview Time: 10 AM till 1 PM
Interview Venue: TP 1/1, Central Avenue Techno Park, SEZ, Mahindra World City, Paranur, Tamil Nadu

Please find the job descriptions below for your reference:
Work from Office***
Rotational Shifts
A minimum of 2 years of project experience is mandatory***

Job Description: Content and Technical Writer
Develop high-quality technical documents, including user manuals, guides, and release notes. Collaborate with cross-functional teams to gather requirements and create accurate documentation. Conduct functional testing and manual testing to ensure compliance with FDA regulations. Ensure adherence to ISO standards and maintain a clean, organized document management system. Strong understanding of the Infra domain. A technical writer who can convert complex technical concepts into easy-to-consume documents for the targeted audience and, in addition, mentor the team on technical writing.

Job Description: ETL DB Testing
Strong experience in ETL testing, data warehousing, and business intelligence. Strong proficiency in SQL. Experience with ETL tools (e.g., Informatica, Talend, AWS Glue, Azure Data Factory). Solid understanding of data warehousing concepts, database systems and quality assurance. Experience with test planning, test case development, and test execution. Experience writing complex SQL queries and using SQL tools is a must, with exposure to various data analytical functions. Familiarity with defect tracking tools (e.g., Jira). Experience with cloud platforms like AWS, Azure, or GCP is a plus. Experience with Python or other scripting languages for test automation is a plus. Experience with data quality tools is a plus. Experience in testing of large datasets. Experience in agile development is a must. Understanding of Oracle Database and UNIX/VMC systems is a must.

Job Description: ETL Testing Automation
Strong experience in ETL testing and automation. Strong proficiency in SQL and experience with relational databases (e.g., Oracle, MySQL, PostgreSQL, SQL Server). Experience with ETL tools and technologies (e.g., Informatica, Talend, DataStage, Apache Spark). Hands-on experience in developing and maintaining test automation frameworks. Proficiency in at least one programming language (e.g., Python, Java). Experience with test automation tools (e.g., Selenium, PyTest, JUnit). Strong understanding of data warehousing concepts and methodologies. Experience with CI/CD pipelines and version control systems (e.g., Git). Experience with cloud-based data warehouses like Snowflake, Redshift, or BigQuery is a plus. Experience with data quality tools is a plus.
Job Description: .Net Should have worked on .Net development/implementation/Support project Must have experience in .NET, ASP.NET MVC, C#, WPF, WCF, SQL Server, Azure Must have experience in Web services, Web API, REST services, HTML, CSS3 Understand Architecture Requirements and ensure effective Design, Development, Validation and Support activities. REGISTRATION PROCESS: The Candidate ID & SHL Test(AMCAT ID) is mandatory to attend the interview. Please follow the below instructions to successfully complete the registration. (Talents without registration & assessment will not be allowed for the Interview). Candidate ID Registration process: STEP 1: Visit: https://career.infosys.com/joblist STEP 2: Click on "Register" and provide the required details and submit. STEP 3: Once submitted, Your Candidate ID(100XXXXXXXX) will be generated. STEP 4: The candidate ID will be shared to the registered Email ID. SHL Test(AMCAT ID) Registration process: This assessment is proctored, and talent gets evaluated on Basic analytics, English Comprehension and writex (email writing). STEP 1: Visit: https://apc01.safelinks.protection.outlook.com/?url=https%3A%2F%2Fautologin-talentcentral.shl.com%2F%3Flink%3Dhttps%3A%2F%2Famcatglobal.aspiringminds.com%2F%3Fdata%3DJTdCJTIybG9naW4lMjIlM0ElN0IlMjJsYW5ndWFnZSUyMiUzQSUyMmVuLVVTJTIyJTJDJTIyaXNBdXRvbG9naW4lMjIlM0ExJTJDJTIycGFydG5lcklkJTIyJTNBJTIyNDE4MjQlMjIlMkMlMjJhdXRoa2V5JTIyJTNBJTIyWm1abFpUazFPV1JsTnpJeU1HVTFObU5qWWpRNU5HWTFOVEU1Wm1JeE16TSUzRCUyMiUyQyUyMnVzZXJuYW1lJTIyJTNBJTIydXNlcm5hbWVfc3E5QmgxSWI5NEVmQkkzN2UlMjIlMkMlMjJwYXNzd29yZCUyMiUzQSUyMnBhc3N3b3JkJTIyJTJDJTIycmV0dXJuVXJsJTIyJTNBJTIyJTIyJTdEJTJDJTIycmVnaW9uJTIyJTNBJTIyVVMlMjIlN0Q%3D%26apn%3Dcom.shl.talentcentral%26ibi%3Dcom.shl.talentcentral%26isi%3D1551117793%26efr%3D1&data=05%7C02%7Comar.muqtar%40infosys.com%7Ca7ffe71a4fe4404f3dac08dca01c0bb3%7C63ce7d592f3e42cda8ccbe764cff5eb6%7C0%7C0%7C638561289526257677%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C0%7C%7C%7C&sdata=s28G3ArC9nR5S7J4j%2FV1ZujEnmYCbysbYke41r5svPw%3D&reserved=0 STEP 2: Click on "Start new test" and follow the instructions to complete the assessment. STEP 3: Once completed, please make a note of the AMCAT ID( Access you Amcat id by clicking 3 dots on top right corner of screen). NOTE: During registration, you'll be asked to provide the following information: Personal Details: Name, Email Address, Mobile Number, PAN number. Availability: Acknowledgement of work schedule preferences (Shifts, Work from Office, Rotational Weekends, 24/7 availability, Transport Boundary) and reason for career change. Employment Details: Current notice period and total annual compensation (CTC) in the format 390000 - 4 LPA (example). Candidate Information: 10-digit candidate ID starting with 100XXXXXXX, Gender, Source (e.g., Vendor name, Naukri/LinkedIn/Found it, or Direct), and Location Interview Mode: Walk-in Attempt all questions in the SHL Assessment app. The assessment is proctored, so choose a quiet environment. Use a headset or Bluetooth headphones for clear communication. A passing score is required for further interview rounds. 5 or above toggles, multi face detected, face not detected, or any malpractice will be considered rejected Once you've finished, submit the assessment and make a note of the AMCAT ID (15 Digit) used for the assessment. Documents to Carry: Please have a note of Candidate ID & AMCAT ID along with registered Email ID. 
Please do not carry laptops/cameras to the venue as these will not be allowed due to security restrictions. Please carry two sets of your updated resume/CV (hard copy). Please carry original ID proof for security clearance; an original government ID card is a must. Please carry an individual headphone/Bluetooth headset for the interview.
Regards,
Infosys BPM Recruitment team.
Posted 1 day ago
0 years
0 Lacs
Mumbai Metropolitan Region
Remote
Role: Database Engineer
Location: Remote
Notice Period: 30 Days

Skills And Experience
Bachelor's degree in Computer Science, Information Systems, or a related field is desirable but not essential. Experience with data warehousing concepts and tools (e.g., Snowflake, Redshift) to support advanced analytics and reporting, aligning with the team’s data presentation goals. Skills in working with APIs for data ingestion or connecting third-party systems, which could streamline data acquisition processes. Proficiency with tools like Prometheus, Grafana, or the ELK Stack for real-time database monitoring and health checks beyond basic troubleshooting. Familiarity with continuous integration/continuous deployment (CI/CD) tools (e.g., Jenkins, GitHub Actions). Deeper expertise in cloud platforms (e.g., AWS Lambda, GCP Dataflow) for serverless data processing or orchestration. Knowledge of database development and administration concepts, especially with relational databases like PostgreSQL and MySQL. Knowledge of Python programming, including data manipulation, automation, and object-oriented programming (OOP), with experience in modules such as Pandas, SQLAlchemy, gspread, PyDrive, and PySpark. Knowledge of SQL and understanding of database design principles, normalization, and indexing. Knowledge of data migration, ETL (Extract, Transform, Load) processes, or integrating data from various sources. Knowledge of cloud-based databases, such as AWS RDS and Google BigQuery. Eagerness to develop import workflows and scripts to automate data import processes. Knowledge of data security best practices, including access controls, encryption, and compliance standards. Strong problem-solving and analytical skills with attention to detail. Creative and critical thinking. Strong willingness to learn and expand knowledge in data engineering. Familiarity with Agile development methodologies is a plus. Experience with version control systems, such as Git, for collaborative development. Ability to thrive in a fast-paced environment with rapidly changing priorities. Ability to work collaboratively in a team environment. Good and effective communication skills. Comfortable with autonomy and ability to work independently.
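As a purely illustrative aside (not part of the listing): a minimal Python sketch of the kind of import workflow described above, loading a CSV with Pandas and writing it to PostgreSQL via SQLAlchemy. The connection string, source file, and staging table name are placeholders chosen for the example.

```python
import pandas as pd
from sqlalchemy import create_engine

# Placeholder connection string and source file for the example.
engine = create_engine("postgresql+psycopg2://user:password@localhost:5432/analytics")
frame = pd.read_csv("customers.csv")

# Light cleanup before loading: normalize column names and drop exact duplicates.
frame.columns = [c.strip().lower().replace(" ", "_") for c in frame.columns]
frame = frame.drop_duplicates()

# Append into an assumed staging table; upsert/replace strategies depend on the target schema.
frame.to_sql("stg_customers", engine, if_exists="append", index=False)
```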
Posted 1 day ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Role: AWS Data Engineer
Job Location: Chennai, Indore, Pune
Experience Requirement: 5+ years

Required Technical Skills: Strong knowledge of AWS Glue, AWS Redshift, SQL and ETL. Good knowledge of and experience in PySpark for forming complex transformation logic. Primary: AWS data engineering, SQL, ETL, DWH; Secondary: AWS Glue, Airflow.

Must-Have
Good knowledge of SQL and ETL. A minimum of 3+ years' experience and understanding of Python core concepts, and implementing data pipeline frameworks using PySpark and AWS. Works well independently as well as within a team. Good knowledge of working with various AWS services including S3, Glue, DMS, and Redshift. Proactive and organized, with excellent analytical and problem-solving skills. Flexible and willing to learn; a can-do attitude is key. Strong verbal and written communication skills.

Good-to-Have
Good knowledge of SQL and ETL, understanding of Python core concepts, and experience implementing data pipeline frameworks using PySpark and AWS. Good knowledge of working with various AWS services including S3, Glue, DMS, and Redshift.

Responsibility: AWS Data Engineer
PySpark / Python / SQL / ETL. A minimum of 3+ years' experience and understanding of Python core concepts, and implementing data pipeline frameworks using PySpark and AWS. Good knowledge of SQL and ETL, and of working with various AWS services including S3, Glue, DMS, and Redshift.
Posted 1 day ago
5.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Company Name: Nectarbits Pvt Ltd
Location: Gota, Ahmedabad
Experience: 5+ years

Job Description
Responsible for the successful delivery and closure of multiple projects. Facilitates team activities, including daily stand-up meetings, grooming, sprint planning, demonstrations, release planning, and team retrospectives. Ensure the team is aware of tasks and delivery dates. Develop project scopes and objectives, involving all relevant stakeholders and ensuring technical feasibility. Identify scope creep to ensure projects are on track, as well as judge commercial viability and actionable steps. Lead sprint planning sessions and periodic conference calls with clients and senior team members to agree on the prioritization of projects and tasks. Be a central point of contact, responsible for the projects handled, and provide transparency and collaboration with different teams. Represent the team's needs and requirements to the client to ensure timelines and quality delivery are practically achievable. Build a trusting and safe environment where problems can be raised and resolved. Understand the client's business and processes to provide effective solutions as a technology consultant. Report and escalate to management as needed. Be a quick learner and an implementor of the learning path for the team.

Must Have:
Must have hands-on development experience in QA automation and managing large-scale projects. Must have experience in managing new development projects with at least an 8-to-10-person team and a duration of 6+ months (excluding ongoing support and maintenance projects/tasks), developing the project and release plan and adhering to the standard processes of the organization. Excellent verbal and written communication skills with both technical and non-technical customers. Strong understanding of architecture, design, and implementation of technical solutions. Extremely fluent in REST/SOAP APIs with JSON/XML. Experience in ETL is a plus. A good understanding of N-tier and microservice architecture. Well-versed in Agile development methodology and all its ceremonies. Excellent problem-solving/troubleshooting skills, particularly in anticipating and solving problems, issues, risks, or concerns before they become critical. Prepare a clear and effective communications plan, and ensure proactive communication of all relevant information to the customer and to all stakeholders. Experience in creating wireframes and/or presentations to effectively convey technology solutions to clients.

Good To Have:
Assess and work with the sales team to create and review proposals and contracts to determine a proper project plan.
Posted 1 day ago
7.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Bangalore & Gurugram
YOE: 7+ years
We are seeking a talented Data Engineer with strong expertise in Databricks, specifically in Unity Catalog, PySpark, and SQL, to join our data team. You'll play a key role in building secure, scalable data pipelines and implementing robust data governance strategies using Unity Catalog.
Key Responsibilities:
Design and implement ETL/ELT pipelines using Databricks and PySpark.
Work with Unity Catalog to manage data governance, access controls, lineage, and auditing across data assets.
Develop high-performance SQL queries and optimize Spark jobs.
Collaborate with data scientists, analysts, and business stakeholders to understand data needs.
Ensure data quality and compliance across all stages of the data lifecycle.
Implement best practices for data security and lineage within the Databricks ecosystem.
Participate in CI/CD, version control, and testing practices for data pipelines.
Required Skills:
Proven experience with Databricks and Unity Catalog (data permissions, lineage, audits).
Strong hands-on skills with PySpark and Spark SQL.
Solid experience writing and optimizing complex SQL queries.
Familiarity with Delta Lake, data lakehouse architecture, and data partitioning.
Experience with cloud platforms like Azure or AWS.
Understanding of data governance, RBAC, and data security standards.
Preferred Qualifications:
Databricks Certified Data Engineer Associate or Professional.
Experience with tools like Airflow, Git, Azure Data Factory, or dbt.
Exposure to streaming data and real-time processing.
Knowledge of DevOps practices for data engineering.
Interested candidates can submit their details at https://forms.office.com/r/g2h52X7Bt9
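To make the Unity Catalog plus PySpark combination concrete, here is a minimal sketch of the pattern this posting describes: a small Delta transformation in a Databricks notebook followed by a Unity Catalog grant. The catalog, schema, table, and group names are assumptions, not details from the posting.

```python
# Illustrative Databricks notebook cell: Delta pipeline step + Unity Catalog governance.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # already provided in a Databricks notebook

# Transform a bronze table into a daily silver aggregate
bronze = spark.table("main.bronze.page_events")
daily = (
    bronze
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "page_id")
    .agg(F.count("*").alias("views"))
)
daily.write.format("delta").mode("overwrite").saveAsTable("main.silver.page_views_daily")

# Unity Catalog access control: grant read access to an analyst group and inspect grants
spark.sql("GRANT SELECT ON TABLE main.silver.page_views_daily TO `data-analysts`")
spark.sql("SHOW GRANTS ON TABLE main.silver.page_views_daily").show(truncate=False)
```

In practice grants like this are often managed centrally (for example through admin tooling or infrastructure-as-code) rather than inline in each pipeline; the inline form above is only for illustration.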
Posted 1 day ago
5.0 years
0 Lacs
Karnataka, India
On-site
Job Title: Senior Product Manager
Location: Onsite - Bangalore, India
Shift Timing: UK Shift (01:30 PM IST - 10:30 PM IST)
About the Role:
We are looking for a Senior Product Manager to take ownership of inbound product management for key enterprise SaaS offerings. This role involves driving product strategy, defining and managing requirements, coordinating with engineering and cross-functional teams, and ensuring timely, customer-focused releases. If you are passionate about solving real-world problems through innovative product development in the cloud and data integration space, we'd love to connect.
Roles & Responsibilities:
Lead the execution of the product roadmap, including go-to-market planning, product enhancements, and launch initiatives.
Translate market needs and customer feedback into detailed product requirements and specifications.
Conduct competitive analysis, assess industry trends, and define effective product positioning and pricing.
Collaborate with engineering teams to deliver high-quality solutions within defined timelines.
Create and maintain Product Requirement Documents (PRDs), functional specs, use cases, and internal presentation materials.
Evaluate build vs. buy options and engage in strategic partnerships to deliver comprehensive solutions.
Work closely with marketing to build sales enablement tools: product datasheets, pitch decks, whitepapers, and more.
Act as a domain expert by providing product training to internal teams such as sales, support, and services.
Join client interactions (calls and demos) to gather insights, validate solutions, and support adoption.
Ensure alignment between product vision, business goals, and technical feasibility throughout development cycles.
Skills & Qualifications:
Minimum 5+ years of experience in product management for SaaS or enterprise software products.
Proven track record in delivering inbound-focused product strategy and leading full product lifecycles.
Experience with data integration, ETL, or cloud-based data platforms is highly desirable.
Strong working knowledge of cloud platforms like AWS, GCP, Azure, or Snowflake.
Familiarity with multi-tenant SaaS architectures and tools like Salesforce, NetSuite, etc.
Demonstrated ability to work in Agile environments with distributed development teams.
Exceptional analytical, communication, and stakeholder management skills.
Ability to prioritize effectively in fast-paced, evolving environments.
Bachelor's degree in Computer Science, Business Administration, or a related field; MBA preferred.
Experience in working with international teams or global product rollouts is a plus.
Posted 1 day ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Test Engineer
Location: Hyderabad (Onsite)
Experience Required: 5 Years
Job Description:
We are looking for a detail-oriented and skilled Test Engineer with 5 years of experience in testing SAS applications and data pipelines. The ideal candidate should have a solid background in SAS programming, data validation, and test automation within enterprise data environments.
Key Responsibilities:
Conduct end-to-end testing of SAS applications and data pipelines to ensure accuracy and performance.
Write and execute test cases/scripts using Base SAS, Macros, and SQL.
Perform SQL query validation and data reconciliation using industry-standard practices (see the reconciliation sketch after this posting).
Validate ETL pipelines developed using tools like Talend, IBM Data Replicator, and Qlik Replicate.
Conduct data integration testing with Snowflake and use explicit pass-through SQL to ensure integrity across platforms.
Utilize test automation frameworks using Selenium, Python, or shell scripting to increase test coverage and reduce manual effort.
Identify, document, and track bugs through resolution, ensuring high-quality deliverables.
Required Skills:
Strong experience in SAS programming (Base SAS, Macro).
Expertise in writing and validating SQL queries.
Working knowledge of data testing frameworks and reconciliation tools.
Experience with Snowflake and ETL validation tools like Talend, IBM Data Replicator, and Qlik Replicate.
Proficiency in test automation using Selenium, Python, or shell scripts.
Solid understanding of data pipelines and data integration testing practices.
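Below is a minimal sketch of the kind of data reconciliation check referenced above: comparing a row count and an amount checksum between a staging table and its Snowflake target using the Snowflake Python connector. The connection parameters, table names, and the amount column are placeholders, not details from this posting.

```python
# Hypothetical reconciliation check between staging and target tables in Snowflake.
import snowflake.connector

def fetch_metrics(cursor, table):
    # Row count and a simple checksum on an assumed numeric column
    cursor.execute(f"SELECT COUNT(*), COALESCE(SUM(amount), 0) FROM {table}")
    return cursor.fetchone()

conn = snowflake.connector.connect(
    account="example_account", user="etl_tester", password="***",
    warehouse="TEST_WH", database="ANALYTICS", schema="PUBLIC",
)
try:
    cur = conn.cursor()
    source_count, source_sum = fetch_metrics(cur, "STG.ORDERS_RAW")
    target_count, target_sum = fetch_metrics(cur, "MART.ORDERS")
    assert source_count == target_count, f"Row count mismatch: {source_count} vs {target_count}"
    assert abs(source_sum - target_sum) < 0.01, "Amount checksum mismatch"
    print("Reconciliation passed")
finally:
    conn.close()
```

The same pattern can be expressed in SAS pass-through SQL or wrapped in a pytest suite; the Python form is shown here only to keep a single example language.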
Posted 1 day ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Greetings from Tata Consultancy Services
TCS is hiring for Python Developer
Experience: 5-10 years
Location: Pune/Hyderabad/Bangalore/Chennai/Kochi/Bhubaneswar
Please find the JD below
Required Technical Skill: ETL development experience
Must Have: 4+ years of experience in Python
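For a concrete (though generic) picture of the Python-based ETL work this role implies, here is a bare-bones sketch: extract from a CSV, transform with pandas, load into a database. The file path, table name, and SQLite target are illustrative assumptions only.

```python
# Minimal Python ETL sketch with assumed inputs and a local SQLite target.
import pandas as pd
import sqlite3

def run_pipeline(source_csv: str, db_path: str) -> int:
    # Extract
    df = pd.read_csv(source_csv)
    # Transform: normalise column names and drop rows missing a key field
    df.columns = [c.strip().lower() for c in df.columns]
    df = df.dropna(subset=["customer_id"])
    # Load
    with sqlite3.connect(db_path) as conn:
        df.to_sql("customers_clean", conn, if_exists="replace", index=False)
    return len(df)

if __name__ == "__main__":
    rows = run_pipeline("customers.csv", "warehouse.db")
    print(f"Loaded {rows} rows")
```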
Posted 1 day ago
8.0 years
0 Lacs
India
Remote
Job Title: Data Engineer
Experience: 5–8 Years
Location: Remote
Shift: IST (Indian Standard Time)
Contract Type: Short-Term Contract
Job Overview
We are seeking an experienced Data Engineer with deep expertise in Microsoft Fabric to join our team on a short-term contract basis. You will play a pivotal role in designing and building scalable data solutions and enabling business insights in a modern cloud-first environment. The ideal candidate will have a passion for data architecture, strong hands-on technical skills, and the ability to translate business needs into robust technical solutions.
Key Responsibilities
Design and implement end-to-end data pipelines using Microsoft Fabric components (Data Factory, Dataflows Gen2).
Build and maintain data models, semantic layers, and data marts for reporting and analytics.
Develop and optimize SQL-based ETL processes integrating structured and unstructured data sources.
Collaborate with BI teams to create effective Power BI datasets, dashboards, and reports.
Ensure robust data integration across various platforms (on-premises and cloud).
Implement mechanisms for data quality, validation, and error handling.
Translate business requirements into scalable and maintainable technical solutions.
Optimize data pipelines for performance and cost-efficiency.
Provide technical mentorship to junior data engineers as needed.
Required Skills
Hands-on experience with Microsoft Fabric: Dataflows Gen2, Pipelines, OneLake.
Strong proficiency in Power BI, including semantic modeling and dashboard/report creation.
Deep understanding of data modeling techniques: star schema, snowflake schema, normalization, denormalization.
Expertise in SQL, stored procedures, and query performance tuning.
Experience integrating data from diverse sources: APIs, flat files, databases, and streaming.
Knowledge of data governance, lineage, and data catalog tools within the Microsoft ecosystem.
Strong problem-solving skills and ability to manage large-scale data workflows.
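To illustrate the dimensional-modeling side of this role, the sketch below shows a Fabric-style PySpark notebook cell building a small star-schema slice: a date dimension and a sales fact written as lakehouse (Delta) tables. The source and target table names and columns are assumptions, not a specific customer model.

```python
# Illustrative star-schema build in a Fabric (or any Spark) notebook.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided by the notebook runtime

raw = spark.table("lakehouse_raw.sales_orders")  # placeholder source table

# Date dimension derived from distinct order dates
dim_date = (
    raw.select(F.to_date("order_date").alias("date"))
    .distinct()
    .withColumn("date_key", F.date_format("date", "yyyyMMdd").cast("int"))
    .withColumn("year", F.year("date"))
    .withColumn("month", F.month("date"))
)
dim_date.write.mode("overwrite").format("delta").saveAsTable("gold.dim_date")

# Fact table keyed to the dimension
fact_sales = (
    raw.withColumn("date_key", F.date_format(F.to_date("order_date"), "yyyyMMdd").cast("int"))
    .select("date_key", "product_id", "customer_id", "quantity", "net_amount")
)
fact_sales.write.mode("overwrite").format("delta").saveAsTable("gold.fact_sales")
```

Tables written this way can then back a Power BI semantic model, which is the reporting pattern the responsibilities above describe.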
Posted 1 day ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Summary:
We are looking for a skilled ETL Tester with hands-on experience in validating data pipelines and data transformations in an AWS-based ecosystem. The ideal candidate should have a strong background in ETL testing, a solid understanding of data warehousing concepts, and proficiency with AWS tools and services such as S3, Redshift, Glue, Athena, and Lambda.
Key Responsibilities:
Design and execute ETL test cases for data ingestion, transformation, and loading processes.
Perform data validation and reconciliation across source systems, staging, and target layers (e.g., S3, Redshift, RDS); a hedged example follows this posting.
Understand data mappings and business rules; write SQL queries to validate transformation logic.
Conduct end-to-end testing, including functional, regression, and performance testing of ETL jobs.
Work closely with developers, data engineers, and business analysts to identify and troubleshoot defects.
Validate data pipelines orchestrated through AWS Glue, Step Functions, and Lambda functions.
Utilize Athena and Redshift Spectrum for testing data stored in S3.
Collaborate using tools like JIRA, Confluence, Git, and CI/CD pipelines.
Prepare detailed test documentation, including test plans, test cases, and test summary reports.
Required Skills:
3–4 years of experience in ETL/data warehouse testing.
Strong SQL skills for data validation across large datasets.
Working knowledge of AWS services such as S3, Redshift, Glue, Athena, Lambda, and CloudWatch.
Experience testing batch and streaming data pipelines.
Familiarity with Python or PySpark is a plus for data transformation or test automation.
Experience using ETL tools (e.g., Informatica, Talend, or AWS Glue ETL scripts).
Knowledge of Agile/Scrum methodology.
Understanding of data quality frameworks and test automation practices.
Good to Have:
Exposure to BI tools like QuickSight, Tableau, or Power BI.
Basic understanding of data lake and data lakehouse architectures.
Experience working with JSON, Parquet, and other semi-structured data formats.
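Here is a minimal pytest-style sketch of the kind of automated validation this role describes: comparing staged data queried through Athena (via the awswrangler library) with the loaded Redshift target (via redshift_connector). The database names, tables, and connection details are assumptions about the environment, not details from the posting.

```python
# Hypothetical ETL validation test: Athena (staging) vs Redshift (target) row counts.
import awswrangler as wr
import redshift_connector

ATHENA_DB = "staging_db"  # placeholder Glue catalog database
REDSHIFT_PARAMS = dict(host="example.redshift.amazonaws.com",
                       database="dev", user="tester", password="***")

def redshift_scalar(sql: str):
    conn = redshift_connector.connect(**REDSHIFT_PARAMS)
    try:
        cur = conn.cursor()
        cur.execute(sql)
        return cur.fetchone()[0]
    finally:
        conn.close()

def test_order_counts_match():
    staged_df = wr.athena.read_sql_query(
        "SELECT COUNT(*) AS n FROM orders_raw", database=ATHENA_DB
    )
    staged = int(staged_df["n"].iloc[0])
    loaded = int(redshift_scalar("SELECT COUNT(*) FROM analytics.orders"))
    assert staged == loaded, f"Staged {staged} rows but loaded {loaded}"
```

The same structure extends naturally to column-level checksums or sample-row comparisons once the basic count reconciliation passes.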
Posted 1 day ago
The ETL (Extract, Transform, Load) job market in India is thriving with numerous opportunities for job seekers. ETL professionals play a crucial role in managing and analyzing data effectively for organizations across various industries. If you are considering a career in ETL, this article will provide you with valuable insights into the job market in India.
India's major technology hubs are known for their thriving tech industries and often have a high demand for ETL professionals.
The average salary range for ETL professionals in India varies based on experience levels. Entry-level positions typically start at around ₹3-5 lakhs per annum, while experienced professionals can earn upwards of ₹10-15 lakhs per annum.
In the ETL field, a typical career path may include roles such as:
- Junior ETL Developer
- ETL Developer
- Senior ETL Developer
- ETL Tech Lead
- ETL Architect
As you gain experience and expertise, you can progress to higher-level roles within the ETL domain.
Alongside ETL, professionals in this field are often expected to have skills in:
- SQL
- Data Warehousing
- Data Modeling
- ETL Tools (e.g., Informatica, Talend)
- Database Management Systems (e.g., Oracle, SQL Server)
Having a strong foundation in these related skills can enhance your capabilities as an ETL professional.
Here are 25 interview questions that you may encounter in ETL job interviews:
As you explore ETL jobs in India, remember to showcase your skills and expertise confidently during interviews. With the right preparation and a solid understanding of ETL concepts, you can embark on a rewarding career in this dynamic field. Good luck with your job search!