24278 ETL Jobs - Page 34

10.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Customer Success
Our mission is to turn our customers into tech-savvy superheroes, ensuring they achieve success using our platform to meet their organization's business goals. If you're passionate about helping customers realize the value they seek with technology, then our customer success team is the right place for you.

Your Role
As a Manager - Quality Analyst, you will be responsible for developing and supporting the planning, design, and execution of test plans and test scripts. The successful candidate will work closely with various departments to perform and validate test cases based on quality requirements, and recommend changes to predetermined quality guidelines. You will be responsible for ensuring that the end product meets the appropriate quality standards, is fully functional, and is user-friendly.

A Day in the Life
- Review requirements, specifications, and technical design documents to provide timely and meaningful feedback
- Create detailed, comprehensive, and well-structured test plans and test cases
- Estimate, prioritize, plan, and coordinate testing activities
- Design, develop, and execute automation scripts using open-source tools
- Identify, record, thoroughly document, and track bugs
- Perform thorough regression testing when bugs are resolved
- Develop and apply testing processes for new and existing products to meet client needs
- Liaise with internal teams (e.g., developers and product managers) to identify system requirements
- Monitor debugging process results
- Investigate the causes of non-conforming software and train users to implement solutions
- Track quality assurance metrics, like defect densities and open defect counts
- Stay up to date with new testing tools and test strategies

What You Need
- Proven work experience of 10+ years in software quality assurance
- Strong knowledge of software QA methodologies, tools, and processes
- Experience in writing clear, concise, and comprehensive test plans and test cases
- Experience in testing data validation scenarios and data ingestion, pipelines, and transformation processes (e.g., ETL)
- Ability to validate data mappings: ETL transformations, business validations, and aggregation/analytical checks
- Strong working experience in SQL; must be proficient in writing SQL queries
- API testing: REST/SOAP, Postman, PyCharm, Pytest
- Experience working in an Agile/Scrum development process
- US healthcare data experience, preferably in value-based care, and a strong healthcare data background: clinical, claims, FHIR, HL7, X12, CCDA, etc.
- Experience in reconciling data from source to target (see the sketch below)

We offer competitive benefits to set you up for success in and outside of work.

Here's What We Offer
- Generous Leaves: Enjoy generous leave benefits of up to 40 days
- Parental Leave: Leverage one of the industry's best parental leave policies to spend time with your new addition
- Sabbatical: Want to focus on skill development, pursue an academic career, or just take a break? We've got you covered
- Health Insurance: We offer comprehensive health insurance to support you and your family, covering medical expenses related to illness, disease, or injury, and extending support to the family members who matter most
- Care Program: Whether it's a celebration or a time of need, we've got you covered with care vouchers to mark major life events. Through our Care Vouchers program, employees receive thoughtful gestures for significant personal milestones and moments of need
- Financial Assistance: Life happens, and when it does, we're here to help. Our financial assistance policy offers support through salary advances and personal loans for genuine personal needs, ensuring help is there when you need it most

Innovaccer is an equal-opportunity employer. We celebrate diversity, and we are committed to fostering an inclusive and diverse workplace where all employees, regardless of race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, marital status, or veteran status, feel valued and empowered.

Disclaimer: Innovaccer does not charge fees or require payment from individuals or agencies for securing employment with us. We do not guarantee job spots or engage in any financial transactions related to employment. If you encounter any posts or requests asking for payment or personal information, we strongly advise you to report them immediately to our HR department at px@innovaccer.com. Additionally, please exercise caution and verify the authenticity of any requests before disclosing personal and confidential information, including bank account details.

About Innovaccer
Innovaccer activates the flow of healthcare data, empowering providers, payers, and government organizations to deliver intelligent and connected experiences that advance health outcomes. The Healthcare Intelligence Cloud equips every stakeholder in the patient journey to turn fragmented data into proactive, coordinated actions that elevate the quality of care and drive operational performance. Leading healthcare organizations like CommonSpirit Health, Atlantic Health, and Banner Health trust Innovaccer to integrate a system of intelligence into their existing infrastructure, extending the human touch in healthcare. For more information, visit www.innovaccer.com. Check us out on YouTube, Glassdoor, LinkedIn, Instagram, and the Web.
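The source-to-target reconciliation this listing asks about typically means comparing row counts and aggregates between the system of record and the loaded target. A minimal, hedged sketch follows, using in-memory SQLite from the standard library; the table and column names are invented for illustration, and in practice the source and target would live in separate systems.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE source_claims (claim_id INTEGER, amount REAL);
    CREATE TABLE target_claims (claim_id INTEGER, amount REAL);
    INSERT INTO source_claims VALUES (1, 100.0), (2, 250.5);
    INSERT INTO target_claims VALUES (1, 100.0), (2, 250.5);
""")

# Row-count check: source and target should agree after the load.
src_count = conn.execute("SELECT COUNT(*) FROM source_claims").fetchone()[0]
tgt_count = conn.execute("SELECT COUNT(*) FROM target_claims").fetchone()[0]
assert src_count == tgt_count, f"count mismatch: {src_count} vs {tgt_count}"

# Aggregation check: totals should match within a rounding tolerance.
src_sum = conn.execute("SELECT SUM(amount) FROM source_claims").fetchone()[0]
tgt_sum = conn.execute("SELECT SUM(amount) FROM target_claims").fetchone()[0]
assert abs(src_sum - tgt_sum) < 0.01, "aggregate mismatch"
print("reconciliation passed")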

Posted 1 week ago

Apply

8.0 years

0 Lacs

India

On-site

Job Description
As a Sr. Software Engineer (SAP) in Data Migration, you will participate in project data migration activities to support the successful data conversion from ECC to S/4HANA (greenfield implementation). This role requires a strong understanding of the S/4HANA data model (master and transactional data). The developer will work closely with cross-functional teams, ensuring data integrity and enabling a smooth transition from ECC systems to the S/4HANA platform.

Responsibilities include, but are not limited to:
- Assess and analyze existing data structures in ECC systems.
- Perform data profiling to identify inconsistencies, redundancies, and data quality issues.
- Design and execute ETL processes (BODS, RFC, custom ABAP programs/queries) to extract data from ECC systems.
- Perform data transformation and cleansing to align with S/4HANA data models.
- Ensure successful data loading into S/4HANA using tools like SAP Migration Cockpit, LSMW, or third-party tools.
- Implement data validation rules and reconciliation processes to ensure accuracy.
- Conduct pre- and post-migration data validation to guarantee data integrity.
- Work closely with business stakeholders/functional consultants to understand data migration requirements.
- Collaborate with functional and technical teams to align data migration activities with overall project goals.
- Support data migration testing cycles, including unit testing, integration testing, and user acceptance testing (UAT).
- Address and resolve data-related issues during testing and post-go-live phases.
- Ensure compliance with data governance and security standards.

Qualifications (Technical Skills)
- Total of 8 years of relevant ABAP experience, with a minimum of 4 years of hands-on data migration experience.
- Proficiency in SAP S/4HANA data models and structures.
- Solid understanding of ETL processes and data migration best practices.
- Knowledge of SQL and data profiling tools.
- Able to develop queries/programs for data extraction from ECC using SQL/ABAP/BODS/RFC.
- Strong experience with SAP data migration tools (e.g., SAP Data Services, SAP Migration Cockpit, LTMOM, LSMW).
- Knowledge of data cleansing and cleansing burndown.
- Understanding of various data loading techniques (BAPIs, BDCs, IDocs, ALEs, APIs, direct update programs, eCATTs, or external recording tools like Winshuttle), used in conjunction with data migration tools or via custom ABAP programs.
- Nice to have: MDG experience.

Posted 1 week ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

Work Level: Individual | Core: Responsible | Leadership: Team Alignment
Industry Type: Information Technology
Function: Database Administrator
Key Skills: mSQL, SQL Writing, PL/SQL
Education: Graduate

Note: This is a requirement for one of Workassist's hiring partners.

Primary Responsibility: Collect, clean, and analyze data from various sources. Assist in creating dashboards, reports, and visualizations.

We are looking for a SQL Developer Intern to join our team remotely. As an intern, you will work with our database team to design, optimize, and maintain databases while gaining hands-on experience in SQL development. This is a great opportunity for someone eager to build a strong foundation in database management and data analysis.

Responsibilities
- Write, optimize, and maintain SQL queries, stored procedures, and functions.
- Assist in designing and managing relational databases.
- Perform data extraction, transformation, and loading (ETL) tasks.
- Ensure database integrity, security, and performance.
- Work with developers to integrate databases into applications.
- Support data analysis and reporting by writing complex queries.
- Document database structures, processes, and best practices.

Requirements
- Currently pursuing or recently completed a degree in Computer Science, Information Technology, or a related field.
- Strong understanding of SQL and relational database concepts.
- Experience with databases such as MySQL, PostgreSQL, SQL Server, or Oracle.
- Ability to write efficient and optimized SQL queries (see the sketch below).
- Basic knowledge of indexing, stored procedures, and triggers.
- Understanding of database normalization and design principles.
- Good analytical and problem-solving skills.
- Ability to work independently and in a team in a remote setting.

Preferred Skills (Nice to Have)
- Experience with ETL processes and data warehousing.
- Knowledge of cloud-based databases (AWS RDS, Google BigQuery, Azure SQL).
- Familiarity with database performance tuning and indexing strategies.
- Exposure to Python or other scripting languages for database automation.
- Experience with business intelligence (BI) tools like Power BI or Tableau.

Company Description
Workassist is an online recruitment and employment solution platform based in Lucknow, India. We provide relevant profiles to employers and connect job seekers with the best opportunities across various industries. With a network of over 10,000 recruiters, we help employers recruit talented individuals from sectors such as Banking & Finance, Consulting, Sales & Marketing, HR, IT, Operations, and Legal. We have adapted to the new normal and strive to provide a seamless job search experience for job seekers worldwide. Our goal is to enhance the job-seeking experience by leveraging technology and matching job seekers with the right employers. For a seamless job search experience, visit our website: https://bit.ly/3QBfBU2 (Note: There are many more opportunities on the portal apart from this one; depending on your skills, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
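The "efficient and optimized SQL queries" requirement above is mostly about indexing and reading query plans. Here is a small, self-contained illustration using SQLite from the Python standard library; the table and index names are illustrative only, and production databases expose the same idea through their own EXPLAIN facilities.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                 [(i % 50, i * 1.5) for i in range(1000)])

# Without an index, this filter forces a full table scan.
plan = conn.execute("EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 7").fetchall()
print("before index:", plan)

# Adding an index lets the planner do an indexed lookup instead.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
plan = conn.execute("EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 7").fetchall()
print("after index:", plan)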

Posted 1 week ago

Apply

0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

🚨 Urgent Hiring: Python Developer - Full Time 💼 Immediate Joiners Only (0-15 Days NP) 🚨
📍 Location: Chennai & Pune (Preferred)
💼 Experience: 6+ yrs
💰 CTC: Up to ₹21 LPA
🕒 Join Within: Next 5 Days

We're looking for Python Developers with strong experience in data warehousing applications, ideally with:
✅ 3-4 yrs Python (Pandas, Polars, etc. - see the sketch below)
✅ 1-2 yrs Talend ETL tool (flexible)
✅ 1-2 yrs SQL/PL/SQL development

⚡ Immediate joiners or those serving notice period (0-15 days) only!
📩 DM me or share profiles ASAP: rajesh@reveilletechnologies.com
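A minimal sketch of the Pandas-style warehouse-prep work this ad alludes to; the CSV layout and column names are invented for illustration, and Polars offers an equivalent API.

import io

import pandas as pd

raw = io.StringIO("order_id,region,amount\n1,south,120\n2,north,80\n3,south,200\n")
df = pd.read_csv(raw)

# Typical load prep: enforce types, then aggregate into a summary table.
df["amount"] = df["amount"].astype(float)
summary = df.groupby("region", as_index=False)["amount"].sum()
print(summary)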

Posted 1 week ago

Apply

0.0 - 10.0 years

20 - 30 Lacs

Hyderabad, Telangana

On-site

Job Title: SAP FICO Consultant (Carve-out)
Experience Required: 10+ years
Location: Hyderabad
Work Mode: Onsite
Availability: Immediate to 15 days

Job Description:
All candidates must have worked on carve-outs.
- 10+ years of experience in SAP FICO implementation and support.
- At least 2-3 full-lifecycle carve-out projects or M&A separation projects in an SAP environment.
- Strong understanding of SAP Financial Accounting and Controlling, including GL, AP, AR, Asset Accounting, Cost Center Accounting, Internal Orders, Product Costing, and Profitability Analysis (COPA).
- Experience with SAP S/4HANA is highly desirable.
- Deep knowledge of legal entity structuring, company code creation, and data partitioning.
- Experience with cross-module integration (SD, MM, PP).
- Strong data migration, cleansing, and mapping skills.
- Excellent communication and stakeholder management skills.
- Understanding of compliance (IFRS/GAAP), SOX controls, and audit readiness during separation.

Responsibilities:
- Lead or support the SAP FICO stream in carve-out or divestiture projects, ensuring smooth financial separation and reporting.
- Perform financial impact analysis, legal entity setup, and company code restructuring.
- Design and configure SAP FICO modules (GL, AR, AP, AA, CO, PCA, CCA, COPA) for the new entity or separated business unit.
- Manage data separation, including historical and open financial transactions, master data, and cost objects.
- Work with SAP migration tools (LSMW, BODS, or third-party ETL tools) to extract and transform financial data for the new entity.
- Coordinate closely with the Basis, Security, and SD/MM/PP teams, and with external stakeholders, to ensure a complete functional carve-out.
- Support cutover planning, testing (SIT/UAT), and hypercare phases.
- Provide advisory support on taxation, intercompany transactions, and financial consolidation implications.
- Document business process design, configurations, and user guides.

Job Types: Full-time, Permanent
Pay: ₹2,000,000.00 - ₹3,000,000.00 per year
Schedule: Day shift
Experience:
- SAP Finance & Controlling: 10 years (Required)
- SAP S/4HANA: 8 years (Required)
- Data migration: 10 years (Required)
- Carve-out project: 4 years (Required)
- SAP FICO: 10 years (Required)
Location: Hyderabad, Telangana (Preferred)
Work Location: In person

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Employer: Global Product Company - Established 1969

Why Join Us?
- Be part of a global product company with over 50 years of innovation.
- Work in a collaborative and growth-oriented environment.
- Help shape the future of digital products in a rapidly evolving industry.

Job Title: Power BI Developer
Job Location: Marathahalli, Bangalore (Hybrid)
Experience Range: 5 to 9 years

📊 We're Hiring: BI Developer – Power BI Specialist 🚀
Are you passionate about transforming data into actionable insights? Do you thrive in collaborative environments and enjoy building scalable, high-performance BI solutions? We're looking for a BI Developer to lead the design, development, and deployment of advanced dashboards and data models using Power BI and other cutting-edge technologies. You'll work closely with teams across India and the US, driving impactful data solutions for internal and external stakeholders.

🔍 What You'll Do:
- Analyze business requirements and translate them into robust BI solutions.
- Design and develop interactive dashboards, reports, and visualizations using Power BI.
- Build and optimize data models, ETL processes, and data pipelines.
- Ensure data accuracy, integrity, and security across the BI ecosystem.
- Conduct unit testing, troubleshoot issues, and support deployment across environments.
- Provide technical support and guidance to end users.
- Stay current with emerging BI technologies and best practices.
- Collaborate with cross-functional teams to ensure successful project delivery.
- Document technical specifications and development processes.

🛠️ What You Bring:
- 5+ years of experience in BI development or data analysis.
- 3+ years of hands-on experience with Power BI.
- Strong skills in SQL, data modeling, and building visually appealing dashboards.
- Experience with DevOps, CI/CD pipelines, Git, Jira, and Agile methodologies.
- Excellent problem-solving, analytical thinking, and communication skills.
- Ability to read and understand other developers' code and provide support.
- Bachelor's degree in Computer Science, IT, or a related field (or equivalent experience).
- Relevant certifications are a plus!

🌟 Top Skills: Power BI | Data Modeling | SQL | ETL & Pipelines | Dashboard Development | DevOps & CI/CD | Debugging & Testing | Stakeholder Collaboration

Posted 1 week ago

Apply

6.0 years

0 Lacs

Udaipur, Rajasthan, India

On-site

Role: Senior Data Engineer
Experience: 4-6 yrs
Location: Udaipur, Jaipur

Job Description:
We are looking for a highly skilled and experienced Data Engineer with 4-6 years of hands-on experience in designing and implementing robust, scalable data pipelines and infrastructure. The ideal candidate will be proficient in SQL and Python and have a strong understanding of modern data engineering practices. You will play a key role in building and optimizing data systems, enabling data accessibility and analytics across the organization, and collaborating closely with cross-functional teams including Data Science, Product, and Engineering.

Key Responsibilities:
- Design, develop, and maintain scalable ETL/ELT data pipelines using SQL and Python (see the sketch below)
- Collaborate with data analysts, data scientists, and product teams to understand data needs
- Optimize queries and data models for performance and reliability
- Integrate data from various sources, including APIs, internal databases, and third-party systems
- Monitor and troubleshoot data pipelines to ensure data quality and integrity
- Document processes, data flows, and system architecture
- Participate in code reviews and contribute to a culture of continuous improvement

Required Skills:
- 4-6 years of experience in data engineering, data architecture, or backend development with a focus on data
- Strong command of SQL for data transformation and performance tuning
- Experience with Python (e.g., pandas, Spark, ADF)
- Solid understanding of ETL/ELT processes and data pipeline orchestration
- Proficiency with RDBMS (e.g., PostgreSQL, MySQL, SQL Server)
- Experience with data warehousing solutions (e.g., Snowflake, Redshift, BigQuery)
- Familiarity with version control (Git), CI/CD workflows, and containerized environments (Docker, Kubernetes)
- Basic programming skills
- Excellent problem-solving skills and a passion for clean, efficient data systems

Preferred Skills:
- Experience with cloud platforms (AWS, Azure, GCP) and services like S3, Glue, Dataflow, etc.
- Exposure to enterprise solutions (e.g., Databricks, Synapse)
- Knowledge of big data technologies (e.g., Spark, Kafka, Hadoop)
- Background in real-time data streaming and event-driven architectures
- Understanding of data governance, security, and compliance best practices
- Prior experience working in an agile development environment

Educational Qualifications:
Bachelor's degree in Computer Science, Information Technology, or a related field.

Visit us:
https://kadellabs.com/
https://in.linkedin.com/company/kadel-labs
https://www.glassdoor.co.in/Overview/Working-at-Kadel-Labs-EI_IE4991279.11,21.htm
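A hedged sketch of the extract-transform-load shape the first responsibility describes, kept self-contained with the standard library; the file layout, schema, and target table are assumptions for illustration, and a real pipeline would read from APIs or databases and load a warehouse.

import csv
import io
import sqlite3

SOURCE = io.StringIO("user_id,signup_date\n1,2024-01-03\n2,2024-02-14\n")

def extract(fh):
    return list(csv.DictReader(fh))

def transform(rows):
    # Derive a signup year; real pipelines would also validate and dedupe.
    return [(int(r["user_id"]), r["signup_date"][:4]) for r in rows]

def load(records, conn):
    conn.execute("CREATE TABLE IF NOT EXISTS dim_users (user_id INTEGER, signup_year TEXT)")
    conn.executemany("INSERT INTO dim_users VALUES (?, ?)", records)

conn = sqlite3.connect(":memory:")
load(transform(extract(SOURCE)), conn)
print(conn.execute("SELECT * FROM dim_users").fetchall())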

Posted 1 week ago

Apply

6.0 years

0 Lacs

Jaipur, Rajasthan, India

On-site

JOB DESCRIPTION: DATA ENGINEER (Databricks & AWS)

Overview:
As a Data Engineer, you will work with multiple teams to deliver solutions on the AWS Cloud using core cloud data engineering tools such as Databricks on AWS, AWS Glue, Amazon Redshift, Athena, and other Big Data-related technologies. This role focuses on building the next generation of application-level data platforms and improving recent implementations. Hands-on experience with Apache Spark (PySpark, SparkSQL), Delta Lake, Iceberg, and Databricks is essential.

Locations: Jaipur, Pune, Hyderabad, Bangalore, Noida.

Responsibilities:
• Define, design, develop, and test software components/applications using AWS-native data services: Databricks on AWS, AWS Glue, Amazon S3, Amazon Redshift, Athena, AWS Lambda, Secrets Manager
• Build and maintain ETL/ELT pipelines for both batch and streaming data (see the sketch below)
• Work with structured and unstructured datasets at scale
• Apply data modeling principles and advanced SQL techniques
• Implement and manage pipelines using Apache Spark (PySpark, SparkSQL) and Delta Lake/Iceberg formats
• Collaborate with product teams to understand requirements and deliver optimized data solutions
• Utilize CI/CD pipelines with DBX and AWS for continuous delivery and deployment of Databricks code
• Work independently with minimal supervision and strong ownership of deliverables

Must Have:
• 6+ years of experience in data engineering on AWS Cloud
• Hands-on expertise in:
  o Apache Spark (PySpark, SparkSQL)
  o Delta Lake / Iceberg formats
  o Databricks on AWS
  o AWS Glue, Amazon Athena, Amazon Redshift
• Strong SQL skills and performance-tuning experience on large datasets
• Good understanding of CI/CD pipelines, especially using DBX and AWS tools
• Experience with environment setup, cluster management, user roles, and authentication in Databricks
• Databricks Certified Data Engineer - Professional certification (mandatory)

Good to Have:
• Experience migrating ETL pipelines from on-premise or other clouds to AWS Databricks
• Experience with Databricks ML or Spark 3.x upgrades
• Familiarity with Airflow, Step Functions, or other orchestration tools
• Experience integrating Databricks with AWS services in a secured, production-ready environment
• Experience with monitoring and cost optimization in AWS

Key Skills:
• Languages: Python, SQL, PySpark
• Big Data Tools: Apache Spark, Delta Lake, Iceberg
• Databricks on AWS
• AWS Services: AWS Glue, Athena, Redshift, Lambda, S3, Secrets Manager
• Version Control & CI/CD: Git, DBX, AWS CodePipeline/CodeBuild
• Other: Data Modeling, ETL Methodology, Performance Optimization
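A minimal PySpark sketch, under assumed paths and column names, of the batch pattern this posting centers on: read raw data, transform with Spark SQL functions, write a partitioned Delta table. Running it requires a Spark environment with the delta-spark package configured (as on Databricks).

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-etl").getOrCreate()

raw = spark.read.json("s3://example-bucket/raw/events/")  # hypothetical path

cleaned = (
    raw.filter(F.col("event_type").isNotNull())           # drop malformed rows
       .withColumn("event_date", F.to_date("event_ts"))   # derive a partition key
)

# Delta adds ACID transactions and time travel on top of the object store.
(cleaned.write
    .format("delta")
    .mode("append")
    .partitionBy("event_date")
    .save("s3://example-bucket/curated/events/"))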

Posted 1 week ago

Apply

0 years

0 Lacs

India

Remote

Company Description
ThreatXIntel is a startup cybersecurity company specializing in protecting businesses and organizations from cyber threats. Our tailored services include cloud security, web and mobile security testing, cloud security assessment, and DevSecOps. We prioritize delivering affordable solutions that cater to the specific needs of our clients, regardless of their size. Our proactive approach to security involves continuous monitoring and testing to identify vulnerabilities before they can be exploited.

Role Description
We are seeking an experienced GCP Data Engineer for a contract engagement focused on building, optimizing, and maintaining high-scale data processing pipelines using Google Cloud Platform services. You'll work on designing robust ETL/ELT solutions, transforming large data sets, and enabling analytics for critical business functions. This role is ideal for a hands-on engineer with strong expertise in BigQuery, Cloud Composer (Airflow), Python, and Cloud SQL/PostgreSQL, with experience in distributed data environments and orchestration tools.

Key Responsibilities
- Design, develop, and maintain scalable data pipelines and ETL/ELT workflows using GCP Composer (Apache Airflow); see the sketch below
- Work with BigQuery, Cloud SQL, and PostgreSQL to manage and optimize data storage and retrieval
- Build automation scripts and data transformations using Python (PySpark knowledge is a strong plus)
- Optimize queries for large-scale, distributed data processing systems
- Collaborate with cross-functional teams to translate business and analytics requirements into scalable technical solutions
- Support data ingestion from multiple structured and semi-structured sources, including Hive, MySQL, and NoSQL databases
- Apply HDFS and distributed file system experience where necessary
- Ensure data quality, reliability, and consistency across platforms
- Provide ongoing maintenance and support for deployed pipelines and services

Required Qualifications
- Strong hands-on experience with GCP services, particularly BigQuery, Cloud Composer (Apache Airflow), and Cloud SQL/PostgreSQL
- Proficiency in Python for scripting and data pipeline development
- Experience in designing and optimizing high-volume data processing workflows
- Good understanding of distributed systems, HDFS, and parallel processing frameworks
- Strong analytical and problem-solving skills
- Ability to work independently and collaborate across remote teams
- Excellent communication skills for technical and non-technical audiences

Preferred Skills
- Knowledge of PySpark for big data processing
- Familiarity with Hive, MySQL, and NoSQL databases
- Experience with Java in a data engineering context
- Exposure to data governance, access control, and cost optimization on GCP
- Prior experience in a contract or freelance capacity with enterprise clients
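A hedged sketch of a Cloud Composer (Airflow 2.x) DAG of the kind this role describes: a daily pipeline with a Python transform step. The DAG id, schedule, and callable are illustrative assumptions; a real pipeline here would likely chain BigQuery operators as well.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def transform():
    # Placeholder for the actual extract/transform logic.
    print("transforming daily batch")

with DAG(
    dag_id="daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+ keyword; older versions use schedule_interval
    catchup=False,
) as dag:
    PythonOperator(task_id="transform", python_callable=transform)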

Posted 1 week ago

Apply

8.0 years

0 Lacs

India

Remote

Job Title: Senior Python Developer
Experience Required: 8+ years
Location: Remote
Job Type: Full-Time

Job Summary:
We are looking for a highly experienced and motivated Senior Python Developer with 8+ years of hands-on experience in developing scalable, high-performance applications. The ideal candidate will have a strong background in backend development, API integration, and cloud services, along with a solid understanding of system architecture and DevOps practices.

Key Responsibilities:
- Design, develop, and maintain efficient, reusable, and reliable Python code.
- Build RESTful APIs and integrate third-party services.
- Work with relational (PostgreSQL/MySQL) and NoSQL (MongoDB/Redis) databases.
- Lead system architecture discussions and code reviews.
- Collaborate with frontend developers, DevOps, and other stakeholders.
- Implement automated testing platforms and unit tests.
- Ensure the performance, quality, and responsiveness of applications.
- Troubleshoot, debug, and optimize existing systems.
- Participate in Agile/Scrum development processes.
- Mentor junior developers and contribute to knowledge sharing.
- Develop and maintain CI/CD pipelines (e.g., Jenkins, GitLab CI).
- Write technical documentation for internal and external use.
- Work with cloud platforms like AWS, Azure, or GCP.
- Ensure adherence to best coding practices, security standards, and compliance.
- Continuously explore new technologies to improve existing systems.

Required Skills:
- Expert-level proficiency in Python 3.x
- Strong experience with Django, Flask, or FastAPI (see the sketch below)
- Experience with RESTful APIs; GraphQL is a plus
- Solid understanding of ORMs, database schema design, and performance tuning
- Hands-on experience with Docker; Kubernetes preferred
- Familiarity with message brokers like RabbitMQ, Kafka, or Celery
- Experience with unit testing, pytest, and TDD
- Version control with Git, code reviews, and branching strategies
- Knowledge of security best practices and OAuth2/JWT
- Exposure to DevOps and cloud infrastructure tools (AWS/GCP)

Preferred Qualifications:
- Bachelor's/Master's degree in Computer Science, Engineering, or a related field
- Experience in microservices architecture
- Knowledge of data engineering pipelines or ETL processes
- Familiarity with AI/ML frameworks is a plus
- Open-source contributions or a personal GitHub portfolio
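A small FastAPI sketch matching the stack the ad calls out (Python 3, FastAPI, typed request models). The resource and its fields are invented for illustration; run it with uvicorn against this module.

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    name: str
    price: float

ITEMS = {}  # in-memory store standing in for a real database

@app.post("/items/{item_id}")
def create_item(item_id: int, item: Item):
    if item_id in ITEMS:
        raise HTTPException(status_code=409, detail="already exists")
    ITEMS[item_id] = item
    return item

@app.get("/items/{item_id}")
def read_item(item_id: int):
    if item_id not in ITEMS:
        raise HTTPException(status_code=404, detail="not found")
    return ITEMS[item_id]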

Posted 1 week ago

Apply

12.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Position: Power BI Architect
Location: Hyderabad, Telangana, India
Experience: 8-12 years

Role Overview
You will architect and deliver end-to-end enterprise BI solutions. This includes data ingestion, transformation, modeling, and dashboard/report development with Power BI. You will collaborate closely with stakeholders and lead junior team members to ensure high-quality insights at scale.

Key Responsibilities
- Architecture & Design: Design scalable BI architectures, including semantic layers, data models, ETL/ELT pipelines, dashboards, and embedded analytics platforms.
- Data Integration & ETL: Ingest, transform, and cleanse data from multiple sources (SQL, Oracle, Azure Synapse/Data Lake/Fabric, AWS services).
- Modeling & Query Optimization: Build robust data models; write optimized DAX expressions and Power Query M code; apply performance-tuning best practices.
- Solution Delivery: Develop reports and dashboards using Power BI Desktop and Service; implement row-level/object-level security (RLS/OLS), capacity planning, and self-service BI frameworks.
- Cross-Platform Competency: Collaborate with teams using MicroStrategy and Tableau; advise on best-fit tools where relevant.
- Governance, Documentation & Quality: Maintain data dictionaries, metadata, and source-to-target mappings; support data governance initiatives.
- Leadership & Stakeholder Management: Manage small to mid-sized developer teams, mentor juniors, engage with business users, and support pre-sales or proposal efforts.

Required Qualifications & Skills
- Bachelor's/Master's degree in CS, IT, or a related field.
- 8-12 years overall, with 5+ years of hands-on Power BI architecture and development experience.
- Deep proficiency with Power BI Desktop & Service, DAX, Power Query (M), SQL/SSIS, and OLAP/tabular modeling.
- Strong experience with Azure frameworks such as Synapse, Fabric, and cloud-based ETL/data pipelines; AWS exposure is a plus.
- Experience with Tableau, MicroStrategy, or other BI tools.
- Familiarity with Python or R for data transformations or analytics.
- Certification such as Microsoft Certified: Power BI Data Analyst Associate preferred.
- Excellent verbal and written communication skills; stakeholder-facing experience is mandatory.

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Work Level: Individual | Core: Responsible | Leadership: Team Alignment
Industry Type: Information Technology
Function: Database Administrator
Key Skills: mSQL, SQL Writing, PL/SQL
Education: Graduate

Note: This is a requirement for one of Workassist's hiring partners.

SQL Developer Intern - India (This is a remote position.)

Responsibilities:
- Write, optimize, and maintain SQL queries, stored procedures, and functions.
- Assist in designing and managing relational databases.
- Perform data extraction, transformation, and loading (ETL) tasks.
- Ensure database integrity, security, and performance.
- Work with developers to integrate databases into applications.
- Support data analysis and reporting by writing complex queries.
- Document database structures, processes, and best practices.

Company Description
Workassist is an online recruitment and employment solution platform based in Lucknow, India. We provide relevant profiles to employers and connect job seekers with the best opportunities across various industries. With a network of over 10,000 recruiters, we help employers recruit talented individuals from sectors such as Banking & Finance, Consulting, Sales & Marketing, HR, IT, Operations, and Legal. We have adapted to the new normal and strive to provide a seamless job search experience for job seekers worldwide. Our goal is to enhance the job-seeking experience by leveraging technology and matching job seekers with the right employers. For a seamless job search experience, visit our website: https://bit.ly/3QBfBU2 (Note: There are many more opportunities on the portal apart from this one; depending on your skills, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 1 week ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

About Company
Papigen is a fast-growing global technology services company, delivering innovative digital solutions through deep industry experience and cutting-edge expertise. We specialize in technology transformation, enterprise modernization, and dynamic areas like Cloud, Big Data, Java, React, DevOps, and more. Our client-centric approach combines consulting, engineering, and data science to help businesses evolve and scale efficiently.

About The Role
We are seeking an experienced Senior Data QA Analyst to support data integration, transformation, and reporting validation for enterprise-scale systems. This role involves close collaboration with data engineers, business analysts, and stakeholders to ensure the quality, accuracy, and reliability of data workflows, especially in Azure Databricks and ETL pipelines.

Key Responsibilities
Test Planning and Execution:
- Collaborate with business analysts and data engineers to understand requirements and translate them into test scenarios and test cases
- Develop and execute comprehensive test plans and test scripts for data validation
- Log and manage defects using tools like Azure DevOps
- Support UAT and post-go-live smoke testing

Data Integration Validation:
- Understand data architecture and workflows, including ETL processes and data movement
- Write and execute complex SQL queries to validate data accuracy, completeness, and consistency across source and target systems (see the sketch below)
- Ensure correctness of data transformations and mappings based on business logic

Report Testing:
- Validate the structure, metrics, and content of BI reports
- Perform cross-checks of report outputs against source systems
- Ensure reports reflect accurate calculations and align with business requirements

Required Skills & Experience
- Bachelor's degree in IT, Computer Science, MIS, or a related field
- 8+ years of experience in QA, especially in data validation or data warehouse testing
- Strong hands-on experience with SQL and data analysis
- Proven experience working with Azure Databricks, Python, and PySpark (preferred)
- Familiarity with data models like data marts, EDW, and operational data stores
- Excellent understanding of data transformation, mapping logic, and BI validation
- Experience with test case documentation, defect tracking, and Agile methodologies
- Strong verbal and written communication skills, with the ability to work in a cross-functional environment

Benefits and Perks
- Opportunity to work with leading global clients
- Exposure to modern technology stacks and tools
- Supportive and collaborative team environment
- Continuous learning and career development opportunities

Skills: ETL, Agile methodologies, test case design, Databricks, data integration, operational data stores, Azure Databricks, test planning, SQL, testing, EDW, defect tracking, data validation, Python, ETL testing, PySpark, data analysis, data marts, test case documentation, data warehousing
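A hedged sketch of the column-level validation queries this QA role describes: null, duplicate, and range checks against a loaded table. Table and column names are assumptions; the duplicate check intentionally fails on the sample data to show the reporting path.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE fact_sales (sale_id INTEGER, customer_id INTEGER, amount REAL);
    INSERT INTO fact_sales VALUES (1, 10, 99.0), (2, 11, 45.0), (2, 11, 45.0);
""")

# Each query returns the number of offending rows; zero means the check passes.
checks = {
    "no null keys": "SELECT COUNT(*) FROM fact_sales WHERE sale_id IS NULL",
    "no duplicate keys": """
        SELECT COUNT(*) FROM (
            SELECT sale_id FROM fact_sales GROUP BY sale_id HAVING COUNT(*) > 1
        )""",
    "no negative amounts": "SELECT COUNT(*) FROM fact_sales WHERE amount < 0",
}

for name, sql in checks.items():
    failures = conn.execute(sql).fetchone()[0]
    print(f"{name}: {'PASS' if failures == 0 else f'FAIL ({failures} rows)'}")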

Posted 1 week ago

Apply

3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Title: Data Engineer
Location: Noida
Experience: 3+ years

Job Description:
We are seeking a skilled and experienced Data Engineer to join our dynamic team. The ideal candidate will have a strong background in data engineering, with a focus on PySpark, Python, and SQL. Experience with Azure Databricks is a plus.

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines and systems.
- Work closely with data scientists and analysts to ensure data quality and availability.
- Implement data integration and transformation processes using PySpark and Python.
- Optimize and maintain SQL databases and queries.
- Collaborate with cross-functional teams to understand data requirements and deliver solutions.
- Monitor and troubleshoot data pipeline issues to ensure data integrity and performance.

Required Skills and Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 3+ years of experience in data engineering.
- Proficiency in PySpark, Python, and SQL.
- Experience with Azure Databricks is a plus.
- Strong problem-solving skills and attention to detail.
- Excellent communication and teamwork abilities.

Preferred Qualifications:
- Experience with cloud platforms such as Azure, AWS, or Google Cloud.
- Knowledge of data warehousing concepts and technologies.
- Familiarity with ETL tools and processes.

How to Apply: Apart from Easy Apply on LinkedIn, you can also apply via this link: https://forms.office.com/r/N0nYycJ36P

#DataEngineer #Hiring #JobOpening #PySpark #Python #SQL #AzureDatabricks #TechJobs #DataEngineering #CareerOpportunity

Posted 1 week ago

Apply

10.0 years

0 Lacs

India

On-site

Company Description
ThreatXIntel is a startup cybersecurity company that provides customized, affordable solutions to protect businesses and organizations from cyber threats. The team offers services such as cloud security, web and mobile security testing, cloud security assessment, and DevSecOps. ThreatXIntel takes a proactive approach to security by continuously monitoring and testing clients' digital environments to identify vulnerabilities before they can be exploited.

Role Description
We're seeking a hands-on IT Integration (QA Automation) lead to drive and execute QA automation, ETL testing, and API testing initiatives during a major post-merger systems integration project. Our product-based company is undergoing a multi-system transformation, including:
- Migrating QA automation from Selenium and Playwright
- Transitioning financials from Sedona to Oracle NetSuite
- Integrating with Salesforce, Power BI, and other enterprise tools

This freelance role is ideal for a senior QA professional with a strong technical foundation in ETL and API testing, deep automation experience, and a strategic mindset for enterprise-level integration projects.

Key Responsibilities
- Lead and execute the ETL testing strategy across multiple data pipelines and systems
- Own and develop API testing frameworks using Postman and REST Assured (see the sketch below)
- Migrate or modernize automation frameworks from Selenium and Playwright
- Coordinate QA integration between Salesforce, Oracle NetSuite, and other platforms
- Work with DevOps teams on CI/CD pipelines using Jenkins and Azure DevOps
- Provide oversight on performance testing and database validation (SQL)
- Collaborate with cross-functional teams (Product, Engineering, Finance) to ensure quality across systems
- Establish testing dashboards and reporting using Power BI or similar tools

Required Skills & Experience
- 10+ years in QA and automation testing, with leadership or director-level experience
- Strong hands-on expertise in ETL testing (data mapping, validation, reconciliation)
- Solid knowledge of API testing with Postman and REST Assured
- Experience with Selenium, Playwright, and transitioning automation frameworks
- Understanding of financial system integrations, ideally Sedona to Oracle NetSuite
- Experience working with Salesforce integrations and validation
- Strong SQL skills for database validation
- Familiarity with CI/CD tools: Jenkins, Azure DevOps
- Experience with performance testing tools and techniques
- Knowledge of Power BI or other BI tools for QA dashboards

Preferred Qualifications
- Experience in post-merger system integration projects
- Background in product-based companies or fast-paced tech environments
- Experience with QA strategies in regulated or finance-heavy domains
- Excellent communication and stakeholder management skills
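A minimal API-test sketch in the spirit of the Postman/REST Assured work described, written with pytest and requests as the closest Python analogue. The base URL and response shape are assumptions about a hypothetical service under test.

import requests

BASE_URL = "https://api.example.com"  # placeholder for the real service under test

def test_get_account_contract():
    resp = requests.get(f"{BASE_URL}/accounts/42", timeout=10)
    assert resp.status_code == 200
    body = resp.json()
    # Contract checks: required fields exist and carry sane types/values.
    assert isinstance(body["id"], int)
    assert body["status"] in {"active", "closed"}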

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

As a Senior Data Engineer at our Bangalore office, you will play a crucial role in developing data pipeline solutions to meet business data needs. Your responsibilities will involve designing, implementing, and maintaining structured and semi-structured data models, utilizing Python and SQL for data collection, enrichment, and cleansing. Additionally, you will create data APIs in Python Flask containers (see the sketch below), leverage AI for analytics, and build data visualizations and dashboards using Tableau. Your expertise in infrastructure as code (Terraform) and in executing automated deployment processes will be vital for optimizing solutions for cost and performance. You will collaborate with business analysts to gather stakeholder requirements and translate them into detailed technical specifications.

Furthermore, you will be expected to stay updated on the latest technical advancements, particularly in the field of GenAI, and recommend changes based on the evolving landscape of data engineering and AI. Your ability to embrace change, share knowledge with team members, and continuously learn will be essential for success in this role.

To qualify for this position, you should have at least 5 years of experience in data engineering, with a focus on Python programming, data pipeline development, and API design. Proficiency in SQL, hands-on experience with Docker, and familiarity with various relational and NoSQL databases are required. Strong knowledge of data warehousing concepts, ETL processes, and data modeling techniques is crucial, along with excellent problem-solving skills and attention to detail. Experience with cloud-based data storage and processing platforms like AWS, GCP, or Azure is preferred.

Bonus skills such as GenAI prompt engineering, proficiency in machine learning technologies like TensorFlow or PyTorch, knowledge of big data technologies, and experience with data visualization tools like Tableau, Power BI, or Looker will be advantageous. Familiarity with Pandas, spaCy, and other NLP libraries, agile development methodologies, and optimizing data pipelines for cost and performance are also desirable.

Effective communication and collaboration skills in English are essential for interacting with technical and non-technical stakeholders. You should be able to translate complex ideas into simple examples to ensure clear understanding among team members. A bachelor's degree in computer science, IT, engineering, or a related field is required, along with relevant certifications in BI, AI, data engineering, or data visualization tools.

The role will be based at The Leela Office on Airport Road, Kodihalli, Bangalore, with a hybrid work schedule: in the office on Tuesdays, Wednesdays, and Thursdays, and working from home on Mondays and Fridays. If you are passionate about turning complex data into valuable insights and have experience mentoring junior members and collaborating with peers, we encourage you to apply for this exciting opportunity.
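A hedged sketch of a "data API in a Python Flask container" as the posting describes: a small Flask service exposing a metric as JSON. The endpoint and value are stand-ins; a real service would query the warehouse instead.

from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/metrics/daily-active-users")
def daily_active_users():
    # A fixed value keeps the sketch self-contained.
    return jsonify({"metric": "dau", "value": 1234})

if __name__ == "__main__":
    # In a container this would usually run behind gunicorn instead.
    app.run(host="0.0.0.0", port=8080)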

Posted 1 week ago

Apply

3.0 - 9.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Category: Testing/Quality Assurance
Main location: India, Karnataka, Bangalore
Position ID: J0725-1442
Employment Type: Full Time

Position Description:
Company Profile: Founded in 1976, CGI is among the largest independent IT and business consulting services firms in the world. With 94,000 consultants and professionals across the globe, CGI delivers an end-to-end portfolio of capabilities, from strategic IT and business consulting to systems integration, managed IT and business process services, and intellectual property solutions. CGI works with clients through a local relationship model complemented by a global delivery network that helps clients digitally transform their organizations and accelerate results. CGI Fiscal 2024 reported revenue is CA$14.68 billion, and CGI shares are listed on the TSX (GIB.A) and the NYSE (GIB). Learn more at cgi.com.

Job Title: ETL Testing Engineer
Position: Senior Test Engineer
Experience: 3-9 years
Category: Quality Assurance/Software Testing
Shift: 1-10 pm / UK shift
Main Location: Chennai/Bangalore

We are looking for an experienced DataStage tester to join our team. The ideal candidate should be passionate about coding and testing scalable, high-performance applications.

Your future duties and responsibilities:
- Develop and execute ETL test cases to validate data extraction, transformation, and loading processes.
- Write complex SQL queries to verify data integrity, consistency, and correctness across source and target systems.
- Automate ETL testing workflows using Python, PyTest, or other testing frameworks (see the sketch below).
- Perform data reconciliation, schema validation, and data quality checks.
- Identify and report data anomalies, performance bottlenecks, and defects.
- Work closely with data engineers, analysts, and business teams to understand data requirements.
- Design and maintain test data sets for validation.
- Implement CI/CD pipelines for automated ETL testing (Jenkins, GitLab CI, etc.).
- Document test results, defects, and validation reports.

Required qualifications to be successful in this role:
- ETL Testing: Strong experience in testing Informatica, Talend, SSIS, Databricks, or similar ETL tools.
- SQL: Advanced SQL skills (joins, aggregations, subqueries, stored procedures).
- Python: Proficiency in Python for test automation (Pandas, PySpark, PyTest).
- Databases: Hands-on experience with RDBMS (Oracle, SQL Server, PostgreSQL) and NoSQL (MongoDB, Cassandra).
- Big Data Testing (good to have): Hadoop, Hive, Spark, Kafka.
- Testing Tools: Knowledge of Selenium, Airflow, Great Expectations, or similar frameworks.
- Version Control: Git, GitHub/GitLab.
- CI/CD: Jenkins, Azure DevOps, or similar.
- Soft Skills: Strong analytical and problem-solving skills; ability to work in Agile/Scrum environments; good communication skills for cross-functional collaboration.

Preferred Qualifications:
- Experience with cloud platforms (AWS, Azure).
- Knowledge of data warehousing concepts (star schema, snowflake schema).
- Certification in ETL testing, SQL, or Python is a plus.

Skills: Data Warehousing, MS SQL Server, Python

What you can expect from us: Together, as owners, let's turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect, and belonging. Here, you'll reach your full potential because you are invited to be an owner from day 1 as we work together to bring our Dream to life. That's why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company's strategy and direction. Your work creates value. You'll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You'll shape your career by joining a company built to grow and last. You'll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team, one of the largest IT and business consulting services firms in the world.
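A pytest-style sketch of the automated ETL checks listed above (row counts and aggregate reconciliation between a staging and a warehouse table), run against in-memory SQLite for illustration; real targets would be Oracle, SQL Server, PostgreSQL, and similar.

import sqlite3

import pytest

@pytest.fixture
def conn():
    c = sqlite3.connect(":memory:")
    c.executescript("""
        CREATE TABLE stg_orders (order_id INTEGER, total REAL);
        CREATE TABLE dw_orders  (order_id INTEGER, total REAL);
        INSERT INTO stg_orders VALUES (1, 10.0), (2, 20.0);
        INSERT INTO dw_orders  VALUES (1, 10.0), (2, 20.0);
    """)
    return c

def test_row_counts_match(conn):
    src = conn.execute("SELECT COUNT(*) FROM stg_orders").fetchone()[0]
    tgt = conn.execute("SELECT COUNT(*) FROM dw_orders").fetchone()[0]
    assert src == tgt

@pytest.mark.parametrize("column", ["order_id", "total"])
def test_column_sums_match(conn, column):
    src = conn.execute(f"SELECT SUM({column}) FROM stg_orders").fetchone()[0]
    tgt = conn.execute(f"SELECT SUM({column}) FROM dw_orders").fetchone()[0]
    assert src == tgt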

Posted 1 week ago

Apply

0.0 - 4.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Kenvue is currently recruiting for: Analyst, Operations & Support

What We Do
At Kenvue, we realize the extraordinary power of everyday care. Built on over a century of heritage and rooted in science, we're the house of iconic brands - including NEUTROGENA®, AVEENO®, TYLENOL®, LISTERINE®, JOHNSON'S® and BAND-AID® - that you already know and love. Science is our passion; care is our talent.

Who We Are
Our global team is ~22,000 brilliant people with a workplace culture where every voice matters and every contribution is appreciated. We are passionate about insights and innovation, and committed to delivering the best products to our customers. With expertise and empathy, being a Kenvuer means having the power to impact millions of people every day. We put people first, care fiercely, earn trust with science, and solve with courage, and we have brilliant opportunities waiting for you! Join us in shaping our future, and yours.

Role reports to: Sr. Manager - DTO Strategy & Operation
Location: Asia Pacific, India, Karnataka, Bangalore
Work Location: Hybrid

What You Will Do
About Kenvue: Kenvue is the world's largest pure-play consumer health company by revenue. Built on more than a century of heritage, our iconic brands, including Aveeno®, Johnson's®, Listerine®, and Neutrogena®, are science-backed and recommended by healthcare professionals around the world. At Kenvue, we believe in the extraordinary power of everyday care, and our teams work every day to put that power in consumers' hands and earn a place in their hearts and homes.

Kenvue is currently recruiting for: Analyst, Data Science Support
This position reports to the Manager, Data Science & Digital Solutions, and is based in Bangalore, India.
Travel %: 0

As Analyst, Data Science Support, you will be responsible for supporting and maintaining data science solutions leveraged by users across Kenvue Operations. In this role, you will work with cross-functional teams to ensure that our data science products are reliable for end users and support critical business operations.

Key Responsibilities:
- Develop, refine, and review mathematical models to represent supply chain systems, including inventory management, production planning, transportation logistics, and distribution networks.
- Collaborate with data scientists, analysts, and other stakeholders to understand recurring issues and provide solutions.
- Use tools such as Tableau, Power BI, and other visualization software to create insightful and easy-to-understand reports.
- Work closely with data scientists in implementing ML models and building data pipelines.
- Identify development needs for the purpose of streamlining and improving operations.
- Support the design, development, and optimization of data pipelines to extract, transform, and load (ETL) data from various sources.
- Use Python and SQL for data analysis and data extraction.
- Support the team with ad hoc activities, sharing team performance and KPIs with leadership in weekly meetings.

What We Are Looking For
Required Qualifications:
- Minimum of a bachelor's degree. Specialization in supply chain, with experience in digital product development, data science, or advanced analytics fields, is strongly preferred.
- Minimum of 2-4 years of business experience.
- Proficiency in SQL and Python.
- Expertise in data visualization tools, especially Power BI.
- Experience with ETL tools and processes, especially Alteryx.
- Strong knowledge of data warehousing solutions.
- Data science experience is required.
- Familiarity with the Microsoft technology stack (e.g., Azure, Databricks).
- Experience with PowerApps and Power Automate.
- Ability to work with both technical and non-technical team members.
- Excellent communication skills, both verbal and written, with the ability to convey complex information clearly.
- Strong problem-solving and analytical abilities.
- Ability to work effectively in a fast-paced and dynamic environment.
- Proven ability to manage multiple projects simultaneously.

What's in It for You:
- Competitive Total Rewards Package*
- Paid company holidays, paid vacation, volunteer time, and more!
- Learning and development opportunities
- Employee resource groups
*This list could vary based on location/region.

Note: Total Rewards at Kenvue include salary, bonus (if applicable), and benefits. Your Talent Access Partner will be able to share more about our total rewards offerings and the specific salary range for the relevant location(s) during the recruitment and hiring process.

Kenvue is proud to be an Equal Opportunity Employer. All qualified applicants will receive consideration for employment based on business needs, job requirements, and individual qualifications, without regard to race, color, religion, sex, sexual orientation, gender identity, age, national origin, protected veteran status, or any other legally protected characteristic, and will not be discriminated against on the basis of disability. If you are an individual with a disability, please check our Disability Assistance page for information on how to request an accommodation.

Posted 1 week ago

Apply

0.0 years

0 Lacs

Hyderabad, Telangana

Remote

Software Engineer II
Hyderabad, Telangana, India

Date posted: Jul 31, 2025
Job number: 1830824
Work site: Up to 50% work from home
Travel: None
Role type: Individual Contributor
Profession: Software Engineering
Discipline: Software Engineering
Employment type: Full-Time

Overview
Microsoft's Azure Data engineering team is leading the transformation of analytics in the world of data with products like databases, data integration, big data analytics, messaging & real-time analytics, and business intelligence. The products in our portfolio include Microsoft Fabric, Azure SQL DB, Azure Cosmos DB, Azure PostgreSQL, Azure Data Factory, Azure Synapse Analytics, Azure Service Bus, Azure Event Grid, and Power BI. Our mission is to build the data platform for the age of AI, powering a new class of data-first applications and driving a data culture.

Within Azure Data, the data integration team builds data gravity on the Microsoft Cloud. Massive volumes of data are generated, not just from transactional systems of record, but also from the world around us. Our data integration products, Azure Data Factory and Power Query, make it easy for customers to bring in, clean, shape, and join data, to extract intelligence.

We do not just value differences or different perspectives. We seek them out and invite them in so we can tap into the collective power of everyone in the company. As a result, our customers are better served.

Qualifications
Required/Minimum Qualifications:
- Bachelor's Degree in Computer Science or a related technical discipline AND 4+ years of technical engineering experience with coding in languages like C#, React, Redux, TypeScript, JavaScript, Java, or Python, OR equivalent experience.
- Experience in data integration, data migrations, ELT, or ETL tooling is mandatory.

Other Requirements:
Ability to meet Microsoft, customer, and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screenings: Microsoft Cloud Background Check - this position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter.

Equal Opportunity Employer (EOP)
#azdat #azuredata #microsoftfabric #dataintegration

Responsibilities
- Build cloud-scale products with a focus on efficiency, reliability, and security.
- Build and maintain end-to-end build, test, and deployment pipelines.
- Deploy and manage massive Hadoop, Spark, and other clusters.
- Contribute to the architecture and design of the products.
- Triage issues and implement solutions to restore service with minimal disruption to the customer and business; perform root cause analysis, trend analysis, and post-mortems.
- Own components and drive them end to end, from gathering requirements, development, testing, and deployment to ensuring high quality and availability post-deployment.
- Embody our culture and values.

Benefits/perks listed below may vary depending on the nature of your employment with Microsoft and the country where you work:
- Industry-leading healthcare
- Educational resources
- Discounts on products and services
- Savings and investments
- Maternity and paternity leave
- Generous time away
- Giving programs
- Opportunities to network and connect

Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations, and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Pune, Maharashtra

On-site

Data | Pune
Posted On: 31 Jul 2025
End Date: 31 Dec 2025
Required Experience: 4-8 years

Basic Section
Role: Senior Software Engineer
Employment Type: Full Time Employee
Company: New Vision Softcom & Consultancy Pvt. Ltd (NewVision)
Department/Practice: Data / Data Engineering
Region: APAC
Country: India
Base Office Location: Pune
Working Model: Hybrid
Weekly Off: Pune Office Standard
State: Maharashtra
Skill: Azure Databricks
Highest Education: Graduation/equivalent course
Certifications: DP-201 (Designing an Azure Data Solution), DP-203T00 (Data Engineering on Microsoft Azure)
Working Language: English

Job Description
Position Summary: We are seeking a talented Databricks Data Engineer with a strong background in data engineering to join our team. You will play a key role in designing, building, and maintaining data pipelines using a variety of technologies, with a focus on the Microsoft Azure cloud platform.

Responsibilities:
- Design, develop, and implement data pipelines using Azure Data Factory (ADF) or other orchestration tools.
- Write efficient SQL queries to extract, transform, and load (ETL) data from various sources into Azure Synapse Analytics.
- Utilize PySpark and Python for complex data processing tasks on large datasets within Azure Databricks.
- Collaborate with data analysts to understand data requirements and ensure data quality.
- Design and develop data lakes and warehouses hands-on.
- Implement data governance practices to ensure data security and compliance.
- Monitor and maintain data pipelines for optimal performance and troubleshoot any issues.
- Develop and maintain unit tests for data pipeline code.
- Work collaboratively with other engineers and data professionals in an Agile development environment.

Preferred Skills & Experience:
- Good knowledge of PySpark and working knowledge of Python
- Full-stack Azure data engineering skills (Azure Data Factory, Databricks, and Synapse Analytics)
- Experience with large dataset handling
- Hands-on experience in designing and developing data lakes and warehouses

New Vision is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Ghaziabad, Uttar Pradesh

On-site

Responsibilities: Understand the requirements and, based on them, write and execute test scenarios to check data integrity. Understand the data to come up with negative scenarios and other edge cases. Adhere to team priorities and work well in an integrated developer/tester environment. Work closely with business and quality analysts and clients in a highly collaborative manner. Participate in backlog discussions, three-amigos meetings, and estimation sessions. Participate in a range of functional and cross-functional testing, such as exploratory testing, thinking outside the test plans. Report and manage defects using a defect-tracking tool. Report and communicate testing status. Must be highly motivated, result-oriented, and able to handle multiple projects with multiple deadlines concurrently with minimal supervision. Must-haves: Good understanding of the Agile process. Expertise in testing processes such as test estimation, test case creation/execution, and defect reporting. Strong, advanced working knowledge of SQL queries. Strong, advanced working knowledge of data warehousing concepts. Strong, advanced working knowledge of ETL and reporting concepts and tools, with at least 6 months of hands-on experience with ETL and reporting tools (a reconciliation sketch follows this listing). Good communication skills and client-handling experience. Apply now! Qualification: B.Tech / BCA / MCA (Computer Science). Location: Artha SEZ, Greater Noida West. Experience: 6-8 years. To apply, kindly share your CV to riyanshi@etelligens.in
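
For a concrete picture of the data-integrity checks this listing describes, here is a minimal sketch of a source-to-target reconciliation test, using sqlite3 so it runs standalone. In a real engagement the two connections would point at the actual source system and the warehouse; the tables and values here are invented.

# Minimal source-to-target reconciliation sketch; data is invented.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE src_orders (order_id INTEGER, amount REAL);
    CREATE TABLE tgt_orders (order_id INTEGER, amount REAL);
    INSERT INTO src_orders VALUES (1, 10.0), (2, 25.5);
    INSERT INTO tgt_orders VALUES (1, 10.0), (2, 25.5);
""")

# Check 1: row counts must match after the ETL load.
src_count = cur.execute("SELECT COUNT(*) FROM src_orders").fetchone()[0]
tgt_count = cur.execute("SELECT COUNT(*) FROM tgt_orders").fetchone()[0]
assert src_count == tgt_count, f"row count mismatch: {src_count} vs {tgt_count}"

# Check 2: aggregate totals must reconcile (a cheap proxy for a full row diff).
src_sum = cur.execute("SELECT SUM(amount) FROM src_orders").fetchone()[0]
tgt_sum = cur.execute("SELECT SUM(amount) FROM tgt_orders").fetchone()[0]
assert abs(src_sum - tgt_sum) < 1e-9, "amount totals do not reconcile"
print("source-to-target checks passed")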

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

Kochi, Kerala

On-site

The ideal candidate for the role of Data Architect should have at least 8 years of experience in modern data architecture, RDBMS, ETL, NoSQL, data warehousing, data governance, data modeling, and performance optimization, along with proficiency in Azure, AWS, or GCP. Primary skills include defining architecture and end-to-end development of database, ETL, and data governance processes. The candidate must possess technical leadership skills, provide mentorship to junior team members, and have hands-on experience in 3 to 4 end-to-end projects involving modern data architecture and data governance. Responsibilities include defining the architecture for data engineering projects and data governance systems; designing, developing, and supporting data integration applications using the Azure, AWS, or GCP cloud platforms; and implementing performance optimization techniques. Proficiency in advanced SQL and experience in modeling and designing transactional and DWH databases are required. Adherence to ISMS policies and procedures is mandatory. Good-to-have skills include Python, PySpark, and Power BI. The candidate is expected to onboard by 15/01/2025 and must hold a Bachelor's degree. The role entails performing all duties in accordance with the company's policies and procedures.
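
As a small illustration of the warehouse-modeling work a role like this covers, below is a hedged sketch of a star schema (one fact table, two dimensions) created through Python's sqlite3 module so it runs anywhere. The entity and column names are illustrative assumptions, not taken from the posting.

# Hypothetical star-schema DDL executed via sqlite3; names are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_customer (
        customer_key INTEGER PRIMARY KEY,   -- surrogate key
        customer_id  TEXT NOT NULL,         -- natural/business key
        region       TEXT
    );
    CREATE TABLE dim_date (
        date_key   INTEGER PRIMARY KEY,     -- e.g. 20250115
        full_date  TEXT NOT NULL
    );
    CREATE TABLE fact_sales (
        customer_key INTEGER REFERENCES dim_customer(customer_key),
        date_key     INTEGER REFERENCES dim_date(date_key),
        amount       REAL
    );
""")
print("star schema created")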

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Delhi

On-site

The client, a leading MNC, specializes in technology consulting and digital solutions for global enterprises. With a vast workforce of over 145,000 professionals across 90+ countries, they cater to 1,100+ clients in various industries. The company offers a comprehensive range of services including consulting, IT solutions, enterprise applications, business processes, engineering, network services, customer experience, AI & analytics, and cloud infrastructure services. Notably, they have been recognized for their commitment to sustainability with the Terra Carta Seal, showcasing their dedication to building a climate- and nature-positive future.

As a Data Engineer with a minimum of 6 years of experience, you will be responsible for constructing and managing data pipelines. The ideal candidate should possess expertise in Databricks, AWS/Azure, and data storage technologies such as databases and distributed file systems. Familiarity with the Spark framework is essential, and prior experience in the retail sector would be advantageous.

Key Responsibilities:
- Design, develop, and maintain scalable ETL pipelines for processing large data volumes from diverse sources.
- Implement and oversee data integration solutions utilizing tools like Databricks, Snowflake, and other relevant technologies.
- Develop and optimize data models and schemas to support analytical and reporting requirements.
- Write efficient and sustainable Python code for data processing and transformations.
- Utilize Apache Spark for distributed data processing and large-scale analytics.
- Translate business needs into technical solutions.
- Ensure data quality and integrity through rigorous unit testing (a brief sketch follows this listing).
- Collaborate with cross-functional teams to integrate data pipelines with other systems.

Technical Requirements:
- Proficiency in Databricks for data integration and processing.
- Experience with ETL tools and processes.
- Strong Python programming skills with Apache Spark, emphasizing data processing and automation.
- Solid SQL skills and familiarity with relational databases.
- Understanding of data warehousing concepts and best practices.
- Exposure to cloud platforms such as AWS and Azure.
- Hands-on troubleshooting ability and problem-solving skills for complex data issues.
- Practical experience with Snowflake.
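
The unit-testing responsibility above is worth making concrete. Below is a minimal, hypothetical sketch of a data-quality unit test around a small pandas transformation; the function, columns, and values are invented, and a real pipeline would apply the same pattern to PySpark jobs.

# Hypothetical transformation plus a pytest-style data-quality test.
import pandas as pd

def normalize_prices(df: pd.DataFrame) -> pd.DataFrame:
    """Drop rows with missing prices and convert pence to pounds."""
    out = df.dropna(subset=["price_pence"]).copy()
    out["price_gbp"] = out["price_pence"] / 100.0
    return out

def test_normalize_prices():
    raw = pd.DataFrame({"price_pence": [199, None, 50]})
    result = normalize_prices(raw)
    assert len(result) == 2                          # null row removed
    assert result["price_gbp"].tolist() == [1.99, 0.5]

test_normalize_prices()
print("data-quality test passed")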

Posted 1 week ago

Apply

2.0 - 10.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Principal Data Engineer at Brillio, you will play a key role in leveraging your expertise in data modeling, particularly with tools like ER/Studio and Erwin. Brillio, known for its digital technology services and partnerships with Fortune 1000 companies, is committed to transforming disruptions into competitive advantages through innovative digital solutions. With over 10 years of IT experience and at least 2 years of hands-on experience in Snowflake, you will be responsible for building and maintaining data models that support the organization's data lake/ODS, ETL processes, and data warehousing needs. Your ability to collaborate closely with clients to deliver physical and logical model solutions will be critical to the success of various projects. In this role, you will demonstrate advanced expertise in data modeling concepts, with a focus on modeling in large, volume-based environments. Your experience with tools like ER/Studio and your overall understanding of database technologies, data warehouses, and analytics will be essential in designing and implementing effective data models. Additionally, your strong skills in entity-relationship modeling, knowledge of database design and administration, and proficiency in SQL query development will enable you to contribute to the design and optimization of data structures, including star schema design. Your leadership abilities and excellent communication skills will be instrumental in leading teams and ensuring the successful implementation of data modeling solutions. While experience with AWS ecosystems is a plus, your dedication to staying at the forefront of technological advancements and your passion for delivering exceptional client experiences will make you a valuable addition to Brillio's team of "Brillians." Join us in our mission to create innovative digital solutions and make a difference in the world of technology.
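
One pattern a modeling role like this routinely touches is the Type 2 slowly changing dimension, where a changed attribute closes out the current row and inserts a new version. The following is a hedged pandas sketch of that bookkeeping; the data, columns, and dates are invented for illustration.

# Hypothetical SCD Type 2 update on a tiny in-memory dimension table.
import pandas as pd

dim = pd.DataFrame({
    "customer_id": ["C1"], "city": ["Pune"],
    "valid_from": ["2024-01-01"], "valid_to": [None], "is_current": [True],
})
incoming = {"customer_id": "C1", "city": "Mumbai", "effective": "2025-08-01"}

mask = (dim["customer_id"] == incoming["customer_id"]) & dim["is_current"]
if not dim.loc[mask, "city"].eq(incoming["city"]).all():
    # Close out the existing current row...
    dim.loc[mask, ["valid_to", "is_current"]] = [incoming["effective"], False]
    # ...and append the new version as the current row.
    dim.loc[len(dim)] = [incoming["customer_id"], incoming["city"],
                         incoming["effective"], None, True]

print(dim)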

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Talend Developer with expertise in SQL, you will be an integral part of our banking client's data integration team. Your primary responsibilities will include designing, developing, and optimizing ETL workflows using Talend, as well as crafting complex SQL queries for data transformation and reporting purposes. To excel in this role, it is crucial to have experience working with financial data systems and the ability to fine-tune performance for optimal results. Strong analytical capabilities and effective communication skills are essential for successful collaboration within the team and with stakeholders. Please note that the selection process involves a single in-person interview round, which will be conducted in Hyderabad.
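
As an example of the complex transformation SQL this role calls for, here is a minimal sketch that keeps only the latest record per account using a window function, executed through sqlite3 so it runs standalone. Table and column names are invented; a Talend job might wrap similar SQL inside its database input/output components.

# Hypothetical latest-record-per-key transformation via a window function.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE txn (account_id TEXT, balance REAL, updated_at TEXT);
    INSERT INTO txn VALUES
        ('A1', 100.0, '2025-07-01'),
        ('A1', 150.0, '2025-07-15'),
        ('A2',  75.0, '2025-07-10');
""")

latest = conn.execute("""
    SELECT account_id, balance, updated_at FROM (
        SELECT *, ROW_NUMBER() OVER (
            PARTITION BY account_id ORDER BY updated_at DESC
        ) AS rn
        FROM txn
    ) WHERE rn = 1
""").fetchall()
print(latest)  # e.g. [('A1', 150.0, '2025-07-15'), ('A2', 75.0, '2025-07-10')]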

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
