
389 Aggregations Jobs - Page 4

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

2.0 years

8 - 18 Lacs

Chennai, Tamil Nadu, India

On-site

Industry & Sector: Recruitment & staffing for technology and analytics roles supporting Financial Services, Retail and Enterprise Data platforms. We are hiring on behalf of clients for an on-site data engineering QA role focused on validating ETL pipelines, data quality, and production-ready data warehouse solutions.

Primary Job Title: ETL QA Engineer
Location: India (On-site)

Role & Responsibilities
- Execute end-to-end ETL test cycles: validate source-to-target mappings, transformations, row counts, and data reconciliation for batch and incremental loads.
- Create and maintain detailed test plans, test cases, and traceability matrices from functional and technical specifications.
- Author and run complex SQL/PLSQL queries to perform record-level validation, aggregate checks, and anomaly detection; capture and report metrics (a brief SQL sketch follows this listing).
- Automate repetitive validation tasks using scripts (Python/Shell) or ETL tool features and integrate checks into CI/CD pipelines where applicable.
- Log, triage, and manage defects in JIRA/ALM; reproduce issues, collaborate with ETL developers to resolve root causes, and validate fixes through regression testing.
- Participate in requirement and design reviews to improve testability, support production cutovers, and execute post-deployment validations.

Skills & Qualifications
Must-Have
- 2+ years of hands-on ETL / data warehouse testing experience (source-to-target testing, reconciliation).
- Strong SQL skills (complex joins, aggregations, window functions) and experience writing validation queries for large datasets.
- Hands-on experience with at least one ETL tool: Informatica PowerCenter, Microsoft SSIS, or Talend.
- Solid understanding of data warehouse concepts: star/snowflake schemas, SCDs, fact and dimension tables, partitions.
- Experience with test management and defect-tracking tools (JIRA, HP ALM) and basic Unix/Linux command-line proficiency.
- Good analytical thinking, attention to detail, and effective verbal/written communication for on-site client collaboration.

Preferred
- Experience automating data validation using Python or shell scripts and integrating checks into CI/CD workflows.
- Familiarity with cloud data platforms (Snowflake, Redshift, BigQuery, Azure Synapse) and ETL scheduling tools (Control-M, Autosys).
- Exposure to performance testing for ETL jobs and experience working in Agile delivery teams.

Benefits & Culture Highlights
- Exposure to large-scale data warehouse projects across banking, retail, and enterprise clients, enabling fast skill growth.
- Collaborative, client-facing environment with structured onboarding and an emphasis on practical, hands-on learning.
- On-site role enabling close collaboration with cross-functional teams and direct impact on production readiness.

To apply: Submit an updated CV highlighting ETL testing experience, sample SQL queries or automation snippets (if available), and your immediate availability. Competitive opportunities for candidates who demonstrate strong technical verification skills and attention to data quality.

Skills: ETL testing, big data, data warehouse testing
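The source-to-target reconciliation described above is commonly done with paired count and aggregate queries. The sketch below is a hedged illustration only (not part of the posting): it uses an in-memory SQLite database so it runs stand-alone, and the table and column names (src_orders, dwh_orders, amount) are hypothetical.

    # Hedged illustration: source-to-target reconciliation with SQL, not the employer's framework.
    # SQLite is used only to keep the sketch self-contained; the same queries would normally
    # run against the actual source system and warehouse.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()

    # Hypothetical source and target tables with a few rows.
    cur.executescript("""
        CREATE TABLE src_orders (order_id INTEGER, amount REAL);
        CREATE TABLE dwh_orders (order_id INTEGER, amount REAL);
        INSERT INTO src_orders VALUES (1, 100.0), (2, 250.5), (3, 75.25);
        INSERT INTO dwh_orders VALUES (1, 100.0), (2, 250.5), (3, 75.25);
    """)

    def reconcile(source_table: str, target_table: str) -> dict:
        """Compare row counts and an aggregate checksum between two tables."""
        results = {}
        for label, table in (("source", source_table), ("target", target_table)):
            cur.execute(f"SELECT COUNT(*), ROUND(SUM(amount), 2) FROM {table}")
            results[label] = cur.fetchone()
        results["match"] = results["source"] == results["target"]
        return results

    print(reconcile("src_orders", "dwh_orders"))
    # Expected output: {'source': (3, 425.75), 'target': (3, 425.75), 'match': True}

In practice the count and checksum pairs would be captured per load and reported as metrics, with mismatches raised as defects.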

Posted 3 weeks ago

Apply

2.0 years

8 - 18 Lacs

Bengaluru, Karnataka, India

On-site

ETL QA Engineer. Role description identical to the Chennai listing above; see that posting for the full responsibilities, qualifications, benefits, and application instructions. Skills: ETL testing, big data, data warehouse testing

Posted 3 weeks ago

Apply

2.0 years

8 - 18 Lacs

Hyderabad, Telangana, India

On-site

ETL QA Engineer. Role description identical to the Chennai listing above; see that posting for the full responsibilities, qualifications, benefits, and application instructions. Skills: ETL testing, big data, data warehouse testing

Posted 3 weeks ago

Apply

2.0 years

8 - 18 Lacs

Delhi, India

On-site

ETL QA Engineer. Role description identical to the Chennai listing above; see that posting for the full responsibilities, qualifications, benefits, and application instructions. Skills: ETL testing, big data, data warehouse testing

Posted 3 weeks ago

Apply

2.0 years

8 - 18 Lacs

Gurugram, Haryana, India

On-site

ETL QA Engineer. Role description identical to the Chennai listing above; see that posting for the full responsibilities, qualifications, benefits, and application instructions. Skills: ETL testing, big data, data warehouse testing

Posted 3 weeks ago

Apply

2.0 years

8 - 18 Lacs

Pune, Maharashtra, India

On-site

ETL QA Engineer. Role description identical to the Chennai listing above; see that posting for the full responsibilities, qualifications, benefits, and application instructions. Skills: ETL testing, big data, data warehouse testing

Posted 3 weeks ago

Apply

2.0 years

8 - 18 Lacs

Noida, Uttar Pradesh, India

On-site

ETL QA Engineer. Role description identical to the Chennai listing above; see that posting for the full responsibilities, qualifications, benefits, and application instructions. Skills: ETL testing, big data, data warehouse testing

Posted 3 weeks ago

Apply

6.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

Company Description
threeS Data, a cutting-edge technology startup based in Coimbatore, India, specializes in Data Architecture, Management, Governance, Analytics, Intelligence, Business Intelligence, Automation, and Machine Learning. Founded in 2024, we focus on delivering simple, smart, and significant solutions that meet our clients' desired outcomes. Our engagements are partnerships, dedicated to understanding the complexities of day-to-day operations and offering practical, honest approaches to deliver exceptional results.

Role Description
This is a contract role based in Coimbatore, ideal for professionals who can independently deliver high-quality ETL solutions in a cloud-native, fast-paced environment. The position is hybrid, with some work-from-home flexibility. Day-to-day tasks include designing, developing, and maintaining data pipelines, performing data modeling, implementing ETL processes, and managing data warehousing solutions. We are looking for candidates with 6+ years of experience and expertise in Apache Airflow, Redshift, and SQL-based data pipelines, with an upcoming transition to Snowflake.

Key Responsibilities
ETL Design and Development
- Design and develop scalable and modular ETL pipelines using Apache Airflow, with orchestration and monitoring capabilities.
- Translate business requirements into robust data transformation pipelines across cloud data platforms.
- Develop reusable ETL components to support a configuration-driven architecture.
Data Integration and Transformation
- Integrate data from multiple sources: Redshift, flat files, APIs, Excel, and relational databases.
- Implement transformation logic such as cleansing, standardization, enrichment, and de-duplication.
- Manage incremental and full loads, along with SCD handling strategies.
SQL and Database Development
- Write performant SQL queries for data staging and transformation within Redshift and Snowflake.
- Use joins, window functions, and aggregations effectively.
- Ensure indexing and query tuning for high-performance workloads.
Performance Tuning
- Optimize data pipelines and orchestrations for large-scale data volumes.
- Tune SQL queries and monitor execution plans.
- Implement best practices in distributed data processing and cloud-native optimizations.
Error Handling and Logging
- Implement robust error handling and logging in Airflow DAGs (see the sketch after this listing).
- Enable retry logic, alerting mechanisms, and failure notifications.
Testing and Quality Assurance
- Conduct unit and integration testing of ETL jobs.
- Validate data outputs against business rules and source systems.
- Support QA during UAT cycles and help resolve data defects.
Deployment and Scheduling
- Deploy pipelines using Git-based CI/CD practices.
- Schedule and monitor DAGs using Apache Airflow and integrated tools.
- Troubleshoot failures and ensure data pipeline reliability.
Documentation and Maintenance
- Document data flows, DAG configurations, transformation logic, and operational procedures.
- Maintain change logs and update job dependency charts.
Collaboration and Communication
- Work closely with data architects, analysts, and BI teams to define and fulfill data needs.
- Participate in stand-ups, sprint planning, and post-deployment reviews.
Compliance and Best Practices
- Ensure ETL processes adhere to data security, governance, and privacy regulations (HIPAA, GDPR, etc.).
- Follow naming conventions, version control standards, and deployment protocols.

Qualifications
- 6+ years of hands-on experience in ETL development.
- Proven experience with Apache Airflow, Amazon Redshift, and strong SQL.
- Strong understanding of data warehousing concepts and cloud-based data ecosystems.
- Familiarity with handling flat files, APIs, and external sources.
- Experience with job orchestration, error handling, and scalable transformation patterns.
- Ability to work independently and meet deadlines.

Preferred Skills
- Exposure to Snowflake or plans to migrate to Snowflake platforms.
- Experience in healthcare, life sciences, or regulated environments is a plus.
- Familiarity with Azure Data Factory, Power BI, or other cloud BI tools.
- Knowledge of Git, Azure DevOps, or other version control and CI/CD platforms.
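The error-handling and retry expectations above follow a standard Airflow pattern. The sketch below is a minimal, hedged example assuming Apache Airflow 2.x; the DAG name, task, and callback are hypothetical and not taken from the posting.

    # Hedged sketch: a minimal Airflow 2.x DAG with retries, retry delay, and a
    # failure-notification callback, illustrating the pattern the listing describes.
    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.operators.python import PythonOperator


    def notify_on_failure(context):
        # Placeholder alerting hook; a real pipeline might post to Slack or email.
        print(f"Task {context['task_instance'].task_id} failed")


    def load_to_redshift():
        # Placeholder for the actual extract/transform/load logic.
        print("loading batch ...")


    default_args = {
        "owner": "data-engineering",
        "retries": 3,                               # retry logic
        "retry_delay": timedelta(minutes=5),
        "on_failure_callback": notify_on_failure,   # failure notification
    }

    with DAG(
        dag_id="example_redshift_load",             # hypothetical name
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
        default_args=default_args,
    ) as dag:
        PythonOperator(task_id="load_to_redshift", python_callable=load_to_redshift)

With this configuration a failed task is retried three times at five-minute intervals before the callback fires, which is one common way to meet the reliability expectations listed above.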

Posted 3 weeks ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Key Responsibilities
Data Validation
- Write SQL queries to check source vs. target data.
- Validate CRUD operations (Create, Read, Update, Delete).
- Ensure data completeness and correctness after migrations.
ETL Testing
- Validate transformations, aggregations, joins, and filters.
- Check row counts, duplicate data, and null handling.
- Test incremental loads (daily, weekly).
Database Testing
- Verify indexes, constraints, triggers, and stored procedures.
- Check query performance (execution plans, indexing).
- Validate transactions and rollback scenarios.
Automation
- Use Python/Java with PyTest/JUnit for automated SQL validations (see the sketch after this listing).
- Integrate with CI/CD (Jenkins, GitHub Actions).

🔹 Required Skills
- SQL (core skill): joins, subqueries, window functions, CTEs.
- Database systems: Oracle, MySQL, PostgreSQL, SQL Server, Teradata, Snowflake, Redshift.
- ETL and data warehousing concepts: fact and dimension validation, Slowly Changing Dimensions (SCDs).
- Testing tools: JIRA, HP ALM, Selenium (for UI and backend testing).
- Programming basics: Python/shell scripting for automation.
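As a hedged illustration of the PyTest-based SQL validation mentioned above (not the employer's actual framework), the sketch below runs two typical checks against an in-memory SQLite database; the table names are hypothetical.

    # Hedged sketch: pytest-style SQL validations. SQLite keeps it self-contained;
    # in practice the connection would point at the source and target databases.
    import sqlite3

    import pytest


    @pytest.fixture
    def db():
        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE source_customers (id INTEGER PRIMARY KEY, email TEXT);
            CREATE TABLE target_customers (id INTEGER PRIMARY KEY, email TEXT);
            INSERT INTO source_customers VALUES (1, 'a@x.com'), (2, 'b@x.com');
            INSERT INTO target_customers VALUES (1, 'a@x.com'), (2, 'b@x.com');
        """)
        yield conn
        conn.close()


    def test_row_counts_match(db):
        src = db.execute("SELECT COUNT(*) FROM source_customers").fetchone()[0]
        tgt = db.execute("SELECT COUNT(*) FROM target_customers").fetchone()[0]
        assert src == tgt


    def test_no_duplicate_ids(db):
        dupes = db.execute(
            "SELECT id FROM target_customers GROUP BY id HAVING COUNT(*) > 1"
        ).fetchall()
        assert dupes == []

Saved as a test module, this runs with a plain `pytest` invocation and can be wired into Jenkins or GitHub Actions as a CI step.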

Posted 3 weeks ago

Apply

0 years

0 Lacs

India

Remote

🔍 Real-Time Projects | Remote | Performance-Based Stipend
📅 Application Deadline: 1st September 2025

Are you eager to dig into databases, write meaningful SQL queries, and extract insights that drive decisions? We’re offering a remote internship for aspiring data professionals to work on real-time, industry-level capstone projects that build both skill and portfolio.

🧠 Role: Database Insights Intern
Location: Remote
Duration: Flexible (minimum commitment required)
Stipend: Performance-based (top performers are rewarded)
Start Date: Rolling basis
Deadline to Apply: 1st September 2025

🔧 What You’ll Be Doing:
- Work on real-world datasets from various industries
- Write and optimize SQL queries to extract, clean, and transform data
- Analyze data and generate insights to support business decisions
- Build basic dashboards and reports to communicate findings
- Collaborate with mentors and peers in a remote team environment

✅ What You Need:
- Basic to intermediate SQL skills (joins, subqueries, aggregations, etc.)
- Understanding of databases and data types
- Interest in analytics, business intelligence, and storytelling with data
- Familiarity with Excel, Power BI, or Tableau is a plus
- Self-motivation, curiosity, and a problem-solving mindset

🎁 What You’ll Gain:
- Hands-on experience with real-time capstone projects
- Mentorship and guidance from industry professionals
- Flexible schedule to work at your own pace
- Performance-based stipend and rewards for top performers
- Internship certificate and letter of recommendation for high achievers
- A strong portfolio to showcase your skills to future employers

📩 How to Apply: Deadline to Apply: 1st September 2025

Posted 3 weeks ago

Apply

2.0 years

0 Lacs

Pune, Maharashtra, India

Remote

At MARMA.AI, we’re building the next generation of analysts. Our AI-powered platform transforms the way people learn analytics by providing real-world business challenges, instant feedback, personalized learning paths, and dynamic industry datasets. Our mission is to bridge the gap between theory and execution and help aspiring professionals thrive in the AI era.

💡 About the Role
We are looking for a Data Analyst with hands-on experience in SQL and Python. In this role, you’ll work with real-world datasets, uncover actionable insights, and help design analytical challenges that power our interactive learning platform.

🌍 Who Should Apply
- Recent graduates or early-career professionals with 0–2 years of experience
- Hands-on skills in SQL (data extraction, joins, aggregations) and Python (data wrangling, analysis, visualization)
- Prior experience in e-commerce data analytics is a plus, but not required
- Strong curiosity, problem-solving skills, and passion for data-driven decision making

📌 What You’ll Do
- Use SQL and Python to extract, clean, and analyze large datasets (a brief sketch follows this listing)
- Generate insights that guide both internal product decisions and external learning content
- Contribute to the creation of industry-oriented case studies and datasets for learners
- Collaborate with cross-functional teams to design data-driven solutions
- Continuously explore new tools and methods in analytics and AI

✨ Why Join Us?
- Work on real-world business problems across industries
- Be part of an AI-powered learning platform shaping the future of data analytics
- Grow in a collaborative, innovation-driven environment
- Opportunity to make an impact by helping build a global community of data-literate professionals

🔎 Location: Remote
📅 Experience: 0–2 years
💼 Skills: SQL, Python, Data Analysis
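The SQL-plus-Python workflow mentioned above typically pairs a SQL aggregation with pandas for further wrangling. The sketch below is a hedged illustration only; it uses an in-memory SQLite table, and the table and column names are hypothetical, not MARMA.AI data.

    # Hedged sketch: SQL handles extraction and aggregation, pandas handles wrangling.
    import sqlite3

    import pandas as pd

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE orders (order_id INTEGER, category TEXT, amount REAL);
        INSERT INTO orders VALUES (1, 'books', 12.5), (2, 'books', 30.0), (3, 'toys', 8.0);
    """)

    # SQL does the extraction and aggregation ...
    df = pd.read_sql_query(
        "SELECT category, COUNT(*) AS n_orders, SUM(amount) AS revenue "
        "FROM orders GROUP BY category",
        conn,
    )

    # ... and pandas adds a derived metric for the analysis.
    df["avg_order_value"] = df["revenue"] / df["n_orders"]
    print(df.sort_values("revenue", ascending=False))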

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Description

Job Summary:
As a Software Engineer III at JPMorgan Chase within the Corporate and Investment Bank, you serve as a seasoned member of an agile team to design and deliver trusted market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm’s business objectives.

Job Responsibilities
- Design, develop, and maintain scalable data pipelines and ETL processes to support data integration and analytics.
- Frequently utilize SQL and understand NoSQL databases and their niche in the marketplace.
- Collaborate closely with cross-functional teams to develop efficient data pipelines to support various data-driven initiatives.
- Implement best practices for data engineering, ensuring data quality, reliability, and performance.
- Contribute to data modernization efforts by leveraging cloud solutions and optimizing data processing workflows.
- Perform data extraction and implement complex data transformation logic to meet business requirements.
- Leverage advanced analytical skills to improve data pipelines and ensure data delivery is consistent across projects.
- Monitor and execute data quality checks to proactively identify and address anomalies; ensure data availability and accuracy for analytical purposes.
- Identify opportunities for process automation within data engineering workflows.
- Communicate technical concepts to both technical and non-technical stakeholders.
- Deploy and manage containerized applications using Kubernetes (EKS) and Amazon ECS.
- Implement data orchestration and workflow automation using AWS Step Functions and EventBridge.
- Use Terraform for infrastructure provisioning and management, ensuring a robust and scalable data infrastructure.

Required Qualifications, Capabilities, and Skills
- Formal training or certification on data engineering concepts and 3+ years of applied experience.
- Experience across the data lifecycle.
- Advanced SQL skills (e.g., joins and aggregations).
- Advanced knowledge of RDBMS such as Aurora.
- Experience building microservice-based components using ECS or EKS.
- Working understanding of NoSQL databases.
- 4+ years of data engineering experience in building and optimizing data pipelines, architectures, and data sets (Glue or Databricks ETL).
- Proficiency in object-oriented and functional scripting languages (Python, etc.).
- Experience in developing ETL processes and workflows for streaming data from heterogeneous data sources.
- Willingness and ability to learn and pick up new skill sets.
- Experience working with modern data lakes (Databricks).
- Experience building pipelines on AWS using Terraform and CI/CD pipelines.

Preferred Qualifications, Capabilities, and Skills
- Experience with data pipeline and workflow management tools (Airflow, etc.).
- Strong analytical and problem-solving skills, with attention to detail.
- Ability to work independently and collaboratively in a team environment.
- Good communication skills, with the ability to convey technical concepts to non-technical stakeholders.
- A proactive approach to learning and adapting to new technologies and methodologies.

ABOUT US
JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world’s most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management.

We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants’ and employees’ religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation.

About The Team
J.P. Morgan’s Commercial & Investment Bank is a global leader across banking, markets, securities services and payments. Corporations, governments and institutions throughout the world entrust us with their business in more than 100 countries. The Commercial & Investment Bank provides strategic advice, raises capital, manages risk and extends liquidity in markets around the world.

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Title: Big Data Tester (SQL)
Experience: 3+ years
Location: Gurugram/Bangalore
Employment Type: Full-Time

Job Summary:
We are seeking a highly skilled Big Data Tester with strong SQL expertise to join our team. The ideal candidate will be responsible for validating, testing, and ensuring the quality of large-scale data pipelines, data lakes, and big data platforms. This role requires expertise in SQL, big data testing frameworks, ETL processes, and hands-on experience with Hadoop ecosystem tools.

Key Responsibilities:
- Design, develop, and execute test strategies for big data applications and pipelines.
- Perform data validation, reconciliation, and quality checks on large datasets.
- Write and execute complex SQL queries for data validation and analysis.
- Validate data ingestion from multiple sources (structured/unstructured) into Hadoop/big data platforms.
- Conduct testing for ETL jobs, data transformations, and data loading processes.
- Work with Hadoop, Hive, Spark, Sqoop, HDFS, and related big data tools.
- Identify, document, and track defects; collaborate with developers and data engineers to resolve issues.
- Automate data validation and testing processes where possible (see the sketch after this listing).
- Ensure compliance with data governance, quality standards, and best practices.
- Work in an Agile environment with cross-functional teams.

Required Skills & Qualifications:
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- 3+ years of experience in data/ETL/big data testing.
- Strong SQL skills (complex queries, joins, aggregations, window functions).
- Hands-on experience with big data tools: Hadoop, Hive, Spark, HDFS, Sqoop, Impala (any relevant).
- Experience in testing ETL processes and data warehousing solutions.
- Familiarity with scripting languages (Python/Unix shell) for test automation.
- Knowledge of defect management and test case management tools (e.g., JIRA, HP ALM).
- Strong analytical and problem-solving skills.

Good to Have:
- Exposure to cloud platforms (AWS, Azure, GCP) with big data services.
- Experience with automation frameworks for big data testing.
- Understanding of CI/CD pipelines for data projects.
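For the Spark-based validation mentioned above, a common pattern is to compare the source and target DataFrames on counts, aggregates, and duplicate keys. The sketch below is a hedged illustration with hypothetical, hard-coded data; in a real pipeline the DataFrames would typically be read from Hive tables or HDFS paths.

    # Hedged sketch: automated source-vs-target validation with PySpark.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("bigdata-validation-sketch").getOrCreate()

    source = spark.createDataFrame(
        [(1, "A", 10.0), (2, "B", 20.0), (3, "A", 30.0)], ["id", "segment", "amount"]
    )
    target = spark.createDataFrame(
        [(1, "A", 10.0), (2, "B", 20.0), (3, "A", 30.0)], ["id", "segment", "amount"]
    )

    # Row-count check.
    assert source.count() == target.count(), "row counts differ"

    # Aggregate reconciliation per segment (sum of amounts).
    src_agg = source.groupBy("segment").agg(F.sum("amount").alias("total"))
    tgt_agg = target.groupBy("segment").agg(F.sum("amount").alias("total"))
    diff = src_agg.join(tgt_agg, "segment").where(src_agg.total != tgt_agg.total)
    assert diff.count() == 0, "aggregate mismatch between source and target"

    # Duplicate-key check on the target.
    dupes = target.groupBy("id").count().where(F.col("count") > 1)
    assert dupes.count() == 0, "duplicate ids in target"

    spark.stop()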

Posted 3 weeks ago

Apply

1.0 - 3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Skill required: Property & Casualty - Catastrophe Risk Management
Designation: Analytics and Modeling Associate
Qualifications: Any Graduation, BE, BTech
Years of Experience: 1 to 3 years

About Accenture
Accenture is a global professional services company with leading capabilities in digital, cloud and security. Combining unmatched experience and specialized skills across more than 40 industries, we offer Strategy and Consulting, Technology and Operations services, and Accenture Song, all powered by the world’s largest network of Advanced Technology and Intelligent Operations centers. Our 699,000 people deliver on the promise of technology and human ingenuity every day, serving clients in more than 120 countries. We embrace the power of change to create value and shared success for our clients, people, shareholders, partners and communities. Visit us at www.accenture.com.

What would you do?
We help insurers redefine their customer experience while accelerating their innovation agenda to drive sustainable growth by transforming to an intelligent operating model. Intelligent Insurance Operations combines our advisory, technology, and operations expertise, global scale, and robust ecosystem with our insurance transformation capabilities. It is structured to address the scope and complexity of the ever-changing insurance environment and offers a flexible operating model that can meet the unique needs of each market segment. The work covers claims settlements related to client-owned property and accidents. Catastrophe Risk Management refers to the process of guiding insurers on how to manage risk aggregations, deploy capital, and price insurance coverage by using computer-assisted calculations to estimate the losses that could be sustained due to a catastrophic event such as a hurricane or earthquake.

What are we looking for?
- Problem-solving skills
- Agility for quick learning
- Strong analytical skills
- Collaboration and interpersonal skills

Roles and Responsibilities:
- In this role you are required to solve routine problems, largely through precedent and referral to general guidelines.
- Your expected interactions are within your own team and with your direct supervisor.
- You will be provided detailed to moderate levels of instruction on daily work tasks and detailed instruction on new assignments.
- The decisions that you make will impact your own work.
- You will be an individual contributor as part of a team, with a predetermined, focused scope of work.
- Please note that this role may require you to work in rotational shifts.

Qualifications: Any Graduation, BE, BTech

Posted 3 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

The ABAP developer will coordinate the planning, design, development, testing, and implementation of SAP programs across an enterprise-wide SAP system instance. This person must be able to work with a small team of SAP professionals. The ABAP developer must be a proactive member of project development teams and support specific best practices related to BorgWarner’s SAP template. Development objects will include reports, interfaces, conversions and enhancements. The ABAP developer must be able to brief project management on the status of development and any associated risk.

Key Roles & Responsibilities
- In-depth knowledge of general ABAP programming techniques (RICEFW: Report, Interface, Conversion, Enhancement, Form and Workflow), including programming Function Modules, Object-Oriented ABAP, User Exits, Enhancement Spots (implicit and explicit) and Dialog Programming.
- Expert in data conversion using LSMW (BDC, BAPI, IDOC, Direct/Batch Input) and developing standalone programs with correct techniques for data conversions.
- Intensive knowledge of RF development using the RF framework supported by MDE configuration.
- Module pool programming experience using custom controls (ALV/Tree/Image), OLE embedding, etc.
- Expertise in report programming using ALV, classical, drill-down and interactive reports using ALV events.
- Proficient in developing ABAP queries and QuickViewer queries.
- Experience in code optimization, performance tuning and runtime analysis.
- Expertise in using Code Inspector, Single Transaction Analysis, SQL Performance Trace and Runtime Analysis tools.
- Good knowledge and hands-on experience of SAP interfacing technologies (ALE/IDocs, RFCs, BAPIs, OData, flat-file interfaces).
- Experience with SAP Scripts, SAP Smart Forms and Adobe Forms.
- Knowledge of label design using NiceLabel/BarTender is preferred.
- Knowledge of the usage of RF guns and Zebra label printers.
- Experience with SPAU/SPDD activities for system upgrades.
- Ability to help resolve complex technical issues and independently manage critical/complex situations.
- Perform break/fix analysis and recommend solutions.
- Estimate development costs for associated programs.
- Create technical design specifications to ensure compliance with the functional teams and IT management.
- Advise on new technologies and keep abreast of SAP releases, enhancements and new functionality.
- Ensure compliance with BorgWarner policies and design standards on implementation projects.

Nice-to-Have / Preferred Skill Set
- Including, but not limited to, ABAP on HANA, HANA modelling, OO ABAP, Gateway for OData service building, XML, REST web services.
- Experience with SAP Fiori, Cloud Platform, OData technologies, HANA DB, SAPUI5, and implementing and extending standard SAP Fiori apps is a plus.
- Expertise in native HANA development and ABAP CDS views; experience in creating complex HANA views with aggregations and joins; AMDP and code push-down techniques are a plus.
- Expertise in designing and modeling OData services using the Gateway Service Builder.
- Experience in using SOLMAN 7.2 with ChaRM and SolDoc.

Internal Use Only: Salary

Global Terms of Use and Privacy Statement
Carefully read the BorgWarner Privacy Policy before using this website. Your ability to access and use this website and apply for a job at BorgWarner are conditioned on your acceptance and compliance with these terms. Please access the linked document by clicking here, select the geographical area where you are applying for employment, and review. Before submitting your application you will be asked to confirm your agreement with the terms.

Career Scam Disclaimer
BorgWarner makes no representations or guarantees regarding employment opportunities listed on any third-party website. To protect against career scams, job applicants should take the necessary precautions when interviewing for and accepting employment positions allegedly offered by BorgWarner. Applicants should never provide their national ID numbers, birth dates, credit card numbers, bank account information or other private information when communicating with prospective employers or responding to employment opportunities online. Job applicants are invited to contact BorgWarner through BorgWarner’s website to verify the authenticity of any employment opportunities.

Posted 3 weeks ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Deutsche Börse Group
Headquartered in Frankfurt, Germany, we are a leading international exchange organization and market infrastructure provider. We empower investors, financial institutions, and companies by facilitating access to global capital markets. Our business areas cover the entire financial market transaction process chain, including trading, clearing, settlement and custody, digital assets and crypto, market analytics, and advanced electronic systems. As a technology-driven company, we develop and operate cutting-edge IT solutions globally.

About Deutsche Börse Group in India
Our presence in Hyderabad serves as a key strategic hub, comprising India’s top-tier tech talent. We focus on crafting advanced IT solutions that elevate market infrastructure and services. Together with our colleagues from across the globe, we are a team of highly skilled capital market engineers forming the backbone of financial markets worldwide. We harness the power of innovation in leading technology to create trust in the markets of today and tomorrow.

Senior Power BI Analyst
Division: Deutsche Börse AG, Chief Information Officer/Chief Operating Officer (CIO/COO), Chief Technology Officer (CTO), Plan & Control

Field of activity:
The Deutsche Börse CTO area develops and runs the groupwide Information and Communication Technology (ICT) infrastructure, develops and operates innovative IT products, and offers services to the rest of the Group upon which they can build. The CTO area plays a significant role in achieving the Group’s strategic goals by leading transformation and supporting a stable operating environment. The Plan & Control unit supplies reliable management information to the CTO and enables the other delivery units within the area to focus on their core activities by supplying central administration and coordination within the area. The successful candidate will support the Plan & Control unit in carrying out its responsibilities.

Your responsibilities:
- Design and develop BI solutions: translate business requirements into technical specifications for BI reports, dashboards, and analytical tools, ensuring alignment with overall data architecture and governance.
- Implement and maintain BI infrastructure: oversee the implementation, configuration, and ongoing maintenance of data pipelines, ensuring system stability, performance, and security.
- Conduct data analysis and validation: perform rigorous data analysis to identify trends, patterns, and insights, validating data accuracy, completeness, and consistency across different sources.
- Develop and execute test plans: create comprehensive test plans and test cases for BI solutions, ensuring data quality, report accuracy, and functionality across various scenarios and user groups.
- Collaborate with stakeholders: work closely with business units, IT teams, and data governance teams to gather requirements, provide support, and ensure effective communication and collaboration throughout the BI development lifecycle.
- Document and train: develop comprehensive documentation for BI solutions, including user manuals, technical specifications, and training materials for end-users and support teams.
- Support the collection, consolidation, analysis and reporting of key performance indicators from across Deutsche Börse Group.

Your profile:
- Power BI Desktop proficiency: mastery of data modeling, creating relationships between tables, using DAX for calculated columns and measures, building interactive visualizations, and designing reports and dashboards.
- Data source connectivity: experience connecting to various data sources, including databases (SQL Server, Oracle, etc.), cloud platforms (Azure, GCP), flat files (CSV, Excel), and APIs.
- ETL/data wrangling: skills in data transformation and cleaning are crucial.
- DAX (Data Analysis Expressions): demonstrable expertise in writing complex DAX expressions for calculations, aggregations, and filtering data is essential.
- Problem-solving: ability to troubleshoot issues, identify root causes, and implement solutions related to data quality, report performance, or other BI-related challenges.
- Communication: excellent written and verbal communication skills to effectively interact with technical and non-technical stakeholders, with the ability to explain complex technical concepts in a clear and concise manner.
- Collaboration: ability to work effectively in a team environment and collaborate with other developers, business analysts, and end-users.
- Time management and prioritization: ability to manage multiple tasks and prioritize workload effectively to meet deadlines.
- Expertise working with office applications (Word, SharePoint, Excel, etc.).
- Proficiency in written and spoken English; German skills are a benefit.
- A relevant degree, or equivalent, in business, business administration, finance, accounting, communications or IT.

Posted 3 weeks ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Role: Big Data Tester (AWS + SQL)
Budget: 11 L
Location: Gurugram/Bangalore (Hybrid)

📌 Big Data Tester (AWS + SQL)

🎯 Role Overview
A Big Data Tester ensures that huge datasets are correctly stored, processed, and transformed within data pipelines. With AWS + SQL, the tester focuses on validating cloud-based big data systems and verifying data integrity using SQL queries.

📌 Key Skills

1. SQL Skills
- Strong SQL (joins, aggregations, window functions, subqueries)
- Data validation across source → staging → warehouse
- Performance testing of queries on large datasets

2. AWS Cloud Skills
- S3 → validate raw and processed data files
- AWS Glue → test ETL jobs and transformations
- Athena → run SQL queries directly on S3 data (see the sketch after this listing)
- Redshift → data warehouse validation
- EMR → validate Spark/Hive jobs
- Kinesis (if real-time) → test streaming data pipelines
- CloudWatch → check logs and job failures

3. Testing Focus Areas
- Functional testing → ETL jobs, schema validation, transformations
- Data quality testing → completeness, accuracy, duplicates, referential integrity
- Performance testing → query response times, batch job execution times
- API testing → REST APIs for ingestion using Postman/RestAssured
- Automation testing → using Python/PySpark for data validation
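The Athena skill above amounts to running SQL directly against S3-resident data and checking the result. The sketch below is a hedged illustration using boto3; it assumes AWS credentials and an existing Athena setup, and the database, table, bucket, and region names are hypothetical, not the employer's environment.

    # Hedged sketch: validating S3 data with an Athena SQL query via boto3.
    import time

    import boto3

    athena = boto3.client("athena", region_name="ap-south-1")

    query = (
        "SELECT COUNT(*) AS row_count FROM analytics_db.orders "
        "WHERE order_date = DATE '2024-01-01'"
    )

    run = athena.start_query_execution(
        QueryString=query,
        QueryExecutionContext={"Database": "analytics_db"},                       # hypothetical database
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},   # hypothetical bucket
    )
    query_id = run["QueryExecutionId"]

    # Poll until the query finishes.
    while True:
        state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(2)

    if state == "SUCCEEDED":
        rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
        # Rows[0] is the header row; Rows[1] holds the count value.
        print("row_count =", rows[1]["Data"][0]["VarCharValue"])
    else:
        print("Athena query did not succeed:", state)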

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Where Data Does More. Join the Snowflake team.

Snowflake Support is committed to providing high-quality resolutions to help deliver data-driven business insights and results. We are a team of subject matter experts collectively working toward our customers’ success. We form partnerships with customers by listening, learning, and building connections.

Snowflake’s Support team is expanding! We are looking for a Staff Cloud Support Engineer to join our team. As a Staff Cloud Support Engineer, your role is to delight our customers with your passion and knowledge of the Snowflake AI Data Cloud. Customers will look to you for technical guidance and expert advice regarding their effective and optimal use of Snowflake. You will be the voice of the customer regarding product feedback and improvements for Snowflake’s product and engineering teams. You will deliver exceptional service, enabling customers to achieve the highest levels of continuity and performance from their Snowflake implementation. You will play an integral role in building knowledge within the team and be part of strategic initiatives for organizational and process improvements.

Based on business needs, you may be assigned to work with one or more Snowflake Priority Support customers. You will develop a strong understanding of the customer’s use case and how they leverage the Snowflake platform. Ideally, you have worked in a 24x7 environment, handled technical case escalations and incident management, worked in technical support, been on call during weekends, and are familiar with database release management.

AS A STAFF CLOUD SUPPORT ENGINEER AT SNOWFLAKE, YOU WILL:
- Drive technical solutions to complex problems, providing in-depth analysis and guidance to Snowflake customers and partners via email, web, and phone.
- Adhere to response and resolution SLAs and escalation processes to ensure fast resolution of customer issues that exceeds expectations.
- Demonstrate good problem-solving skills and be process-oriented.
- Utilize the Snowflake environment, connectors, third-party partner software, and tools to investigate issues.
- Document known solutions to the internal and external knowledge base.
- Submit well-documented bugs and feature requests arising from customer-submitted requests, and partner with Engineering toward resolution.
- Proactively identify recommendations and lead global initiatives to improve product quality, customer experience, and team efficiency.
- Provide support coverage during holidays and weekends based on business needs.

OUR IDEAL STAFF CLOUD SUPPORT ENGINEER WILL HAVE:
- Bachelor’s or Master’s degree in Computer Science or an equivalent discipline.
- 8+ years of experience in a technical support environment or a similar technical, customer-facing function.
- Excellent written and verbal communication skills in English with attention to detail.
- Ability to work in a highly collaborative environment across global teams.
- A clear understanding of data warehousing fundamentals and concepts, and the ability to train team members on them.
- Ability to debug, rewrite, and troubleshoot complex SQL queries to achieve workarounds or better solutions (a brief illustration follows this listing).
- Strong knowledge of RDBMS, SQL data types, aggregations, and functions, including analytical/window functions.
- Good understanding of RDBMS query profiles and execution plans to analyze query performance and make recommendations for improvement.
- A clear understanding of operating system internals, memory management, and CPU management.
- Database migration and ETL experience.
- Scripting/coding experience in any programming language.
- Working knowledge of semi-structured data.
- Experience in RDBMS workload management.
- Good understanding of any major cloud service provider’s ecosystem.
- Ability to interpret system performance metrics (CPU, I/O, RAM, network stats).
- Understanding of the release cycle and tracking of behavior changes.

NICE TO HAVE:
- Experience working with a distributed database, i.e. big data and/or MPP (massively parallel processing) databases.
- Understanding of cost utilization and optimization.
- Proficiency in any scripting language, e.g. Python, JavaScript.

MANDATORY REQUIREMENTS FOR THE ROLE:
- The position may require access to U.S. export-controlled technologies, technical data, or sensitive government data. Employment with Snowflake is contingent on Snowflake verifying that you: (i) may legally access U.S. export-controlled technologies, technical data, or sensitive government data; or (ii) are eligible to obtain, in a timely manner, any necessary license or other authorization(s) from the U.S. Government to allow access to U.S. export-controlled technology, technical data, or sensitive government data.
- Ability to work the 4th/night shift, which typically starts from 10 pm IST.
- Applicants should be flexible with schedule changes to meet business needs.

Snowflake is growing fast, and we’re scaling our team to help enable and accelerate our growth. We are looking for people who share our values, challenge ordinary thinking, and push the pace of innovation while building a future for themselves and Snowflake. How do you want to make your impact?

For jobs located in the United States, please visit the job posting on the Snowflake Careers Site for salary and benefits information: careers.snowflake.com
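As a hedged illustration of the analytical/window-function SQL referenced above (not Snowflake's material), the sketch below keeps the latest record per key with ROW_NUMBER(). It runs on SQLite 3.25+ as bundled with recent Python; the table and column names are hypothetical, and the same pattern applies to Snowflake SQL.

    # Hedged sketch: deduplicating rows with a window function (ROW_NUMBER).
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE customer_updates (customer_id INTEGER, email TEXT, updated_at TEXT);
        INSERT INTO customer_updates VALUES
            (1, 'old@x.com', '2024-01-01'),
            (1, 'new@x.com', '2024-03-01'),
            (2, 'b@x.com',   '2024-02-15');
    """)

    latest = conn.execute("""
        SELECT customer_id, email, updated_at
        FROM (
            SELECT *,
                   ROW_NUMBER() OVER (
                       PARTITION BY customer_id ORDER BY updated_at DESC
                   ) AS rn
            FROM customer_updates
        )
        WHERE rn = 1
    """).fetchall()

    print(latest)  # [(1, 'new@x.com', '2024-03-01'), (2, 'b@x.com', '2024-02-15')]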

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

india

Remote

Mars Data is hiring part-time Data Engineers (SQL + Databricks) for multiple positions in remote locations.

Job Title: Data Engineer (SQL + Databricks)
Experience Required: 5+ years (2+ years in Databricks preferred)
Timings: EST, minimum 10 hrs/week
Skills: Databricks, SQL, complex joins, aggregations (aggregate + analytic/window functions), CTEs and recursive CTEs, JSON, Parquet, arrays, custom types, normalized/denormalized data, explode function

Key Capabilities Required:
Advanced SQL knowledge and syntax
Expertise in complex joins, including lateral joins
Strong command of aggregations (aggregate + analytic/window functions)
Experience with subqueries, CTEs, and recursive CTEs
Parsing and working with nested data (JSON, Parquet, arrays, custom types)
Handling normalized/denormalized data, wide tables, facts, and dimensions
Ability to perform statistical analysis
Proficiency in using the explode function
Strong understanding of data normalization principles

Share your resume at hr@marsdata.in
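Since the posting singles out nested data and the explode function, here is a minimal PySpark sketch of flattening an array column and combining an aggregate with an analytic (window) function. It assumes a local PySpark installation; the table shape and column names are invented for illustration.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("nested-data-demo").getOrCreate()

# Nested structure similar to JSON: each order carries an array of line items.
orders = spark.createDataFrame(
    [("o1", "alice", [("A", 2), ("B", 1)]),
     ("o2", "bob",   [("A", 5)])],
    "order_id string, customer string, items array<struct<sku:string, qty:int>>",
)

# explode() flattens the array into one row per line item (denormalizing the wide table).
lines = orders.select("order_id", "customer", F.explode("items").alias("item"))

# Aggregate function: total quantity per customer.
per_customer = lines.groupBy("customer").agg(F.sum("item.qty").alias("total_qty"))

# Analytic/window function: each customer's share of the grand total.
per_customer = per_customer.withColumn(
    "share", F.col("total_qty") / F.sum("total_qty").over(Window.partitionBy())
)
per_customer.show()
```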

Posted 3 weeks ago

Apply

0.0 - 2.0 years

2 - 4 Lacs

surat, gujarat

Remote

MERN Stack Developer Wanted | Surat

Job Overview
A MERN Stack Developer is responsible for designing, developing, and maintaining dynamic and scalable web applications using MongoDB, Express.js, React.js, and Node.js. The role involves working on both the front-end and back-end, ensuring seamless integration, optimized performance, and a smooth user experience. This position requires someone who is proficient in JavaScript, has a strong grasp of RESTful APIs, database management, and modern web technologies, and is capable of delivering high-quality solutions in a fast-paced environment.

Key Responsibilities

1. Web Application Development
Develop, test, and deploy responsive, scalable, and user-friendly web applications using the MERN stack.
Build reusable components and front-end libraries for future use.
Ensure high performance and responsiveness of the applications across different devices and platforms.
Optimize applications for speed, performance, and security.

2. Backend Development
Develop RESTful APIs using Node.js and Express.js for seamless client-server communication.
Handle data modeling, database design, and queries in MongoDB.
Implement authentication and authorization systems using tools like JWT, OAuth, or Passport.js.
Ensure robust integration of third-party APIs and services.

3. Frontend Development
Build interactive and responsive user interfaces using React.js and associated libraries.
Manage application state using tools like Redux, Context API, or Zustand.
Ensure cross-browser compatibility and mobile responsiveness.
Follow modern UI/UX principles to enhance usability and overall user experience.

4. Database Management
Design, manage, and maintain NoSQL databases in MongoDB.
Write optimized queries and aggregations to handle large-scale data efficiently.
Perform database backup, recovery, and performance tuning when required.
Ensure data security and integrity through best practices.

5. Testing & Debugging
Conduct unit, integration, and end-to-end testing to ensure bug-free and high-quality code.
Use tools like Jest, Mocha, Cypress, or Jasmine for automated testing.
Identify and resolve performance bottlenecks and security vulnerabilities.

6. Deployment & Maintenance
Deploy applications on cloud platforms such as AWS, Azure, or Google Cloud.
Set up CI/CD pipelines using tools like Jenkins, GitHub Actions, or GitLab CI.
Monitor applications for scalability, performance, and uptime.
Provide ongoing support, troubleshooting, and feature enhancements.

7. Collaboration & Documentation
Work closely with UI/UX designers, backend developers, and product managers to deliver seamless applications.
Participate in scrum meetings, sprint planning, and code reviews.
Maintain clear and concise technical documentation for future development.

Required Skills & Competencies

1. Technical Skills
Proficiency in JavaScript (ES6+) and TypeScript (optional but preferred).
Strong expertise in the MERN stack:
MongoDB – NoSQL database design and queries.
Express.js – Backend API development.
React.js – Frontend frameworks and UI development.
Node.js – Server-side scripting and API integration.
Knowledge of HTML5, CSS3, and modern CSS frameworks (Bootstrap, Tailwind CSS, Material UI).
Understanding of RESTful APIs and GraphQL.
Familiarity with version control tools like Git and GitHub/GitLab.
Experience in Docker, Kubernetes, or other containerization tools is a plus.

2. Soft Skills
Strong problem-solving and analytical abilities.
Good communication skills for cross-team collaboration.
Ability to work independently as well as in a team environment.
Attention to detail and commitment to writing clean, maintainable code.
Time management skills to meet project deadlines.

More information about this MERN Stack Developer job
Please go through the FAQs below to get answers related to this MERN Stack Developer job.
What are the job requirements to apply for this MERN Stack Developer position? Ans: A candidate must have a minimum of 6 months to 2 years of experience as a MERN Stack Developer.
What is the qualification for this job? Ans: The candidate can be a graduate from any of the following: BE/B.Tech, CS.
What is the hiring process for this job? Ans: The hiring process depends on the company. Normally, for an entry-level hire, the candidate goes through an aptitude test, a group discussion (if communication skills are assessed), a technical test, and face-to-face interviews.
Is this MERN Stack Developer role a work-from-home job? Ans: No, it is not a work-from-home job.
How many vacancies are open for the MERN Stack Developer position? Ans: There is 1 immediate job opening for a MERN Stack Developer in our organisation.
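Because the posting emphasizes writing optimized MongoDB queries and aggregations, here is a minimal sketch of an aggregation pipeline. The role's stack is Node.js, but the pipeline itself is driver-agnostic; the sketch below uses Python's pymongo for consistency with the other examples on this page, and the connection string, database, collection, and field names are hypothetical.

```python
from pymongo import MongoClient

# Hypothetical local deployment; a real app would read the URI from configuration.
client = MongoClient("mongodb://localhost:27017")
db = client["shop"]

# Total order value per customer, highest first: the kind of aggregation the role calls for.
pipeline = [
    {"$match": {"status": "paid"}},                              # filter to relevant documents
    {"$group": {"_id": "$customerId",                            # group by customer
                "orderCount": {"$sum": 1},
                "total": {"$sum": "$amount"}}},
    {"$sort": {"total": -1}},                                    # largest spenders first
    {"$limit": 10},
]
for doc in db.orders.aggregate(pipeline):
    print(doc)
```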

Posted 3 weeks ago

Apply

2.0 years

0 Lacs

kozhikode, kerala, india

Remote

Important: Preferred candidate location: Kerala, India. Priority for Calicut (Kozhikode), Kannur, Malappuram, Kochi. Salary range: 22,000 - 32,000 INR per month (Based on skills and experience) About us Wallo Pay is a cutting-edge fintech start-up company dedicated to transforming the way people and businesses handle payments. Our mission is to provide secure, fast, and user-friendly payment solutions that empower users with more disposable income and business profitability. By leveraging advanced technologies and a customer-centric approach, Wallo Pay aims to simplify financial transactions while maintaining the highest standards of security and compliance. The role Join Wallo Pay as a Developer for a 6-month contract in a fully remote role reporting directly to the CTO and COO. You’ll work end‑to‑end on an Ionic Angular front‑end and an AWS‑based, serverless back‑end underpinned by MongoDB Atlas. We’re a small, high‑impact team that writes clean, object‑oriented TypeScript and lives by SOLID. Key Responsibilities Feature Development – Deliver new customer‑facing features in Ionic Angular (v4+) using Capacitor plugins where needed. Serverless APIs – Design, build, and maintain scalable AWS Lambda (Node.js/TypeScript) functions behind API Gateway. Data Layer – Model MongoDB documents, craft performant queries & aggregations, and manage Atlas clusters. Testing First – Drive a TDD culture: author unit tests (Jasmine/Karma, Jest) and automated E2E suites (Cypress/Playwright/Appium) that run in CI & AWS Device Farm. Code Quality – Apply SOLID principles, OO design patterns, and rigorous code reviews to keep technical debt low. CI/CD & DevOps – Contribute to GitHub Actions / AWS CodeBuild pipelines, infra‑as‑code (Terraform or CDK), and cloud observability (CloudWatch, X‑Ray). Security & Compliance – Follow DevSecOps best practices and assist with PCI‑DSS considerations for payment flows. Required Skills & Experience Experience: ~2 years professional software development (mobile or web). TypeScript Expertise: Advanced knowledge of TypeScript & modern ES features. Ionic Angular or Ionic React: Hands‑on building and shipping Ionic/Angular (v4+) or Ionic React apps. OO & SOLID: Proven application of object‑oriented design and SOLID principles in production code. AWS or Node.js: Practical experience with AWS Lambda OR back‑end development in Node.js. Testing & TDD: Comfortable with unit‑test frameworks (Jasmine/Karma/Jest) and E2E tools (Cypress/Playwright/Appium); able to work test‑first. MongoDB: Working knowledge of MongoDB schema design, indexing, and performance tuning. Version Control: Git / GitHub flow and peer code review experience. Nice‑to‑Have Exposure to payment gateways (Stripe, Braintree) and PCI‑DSS fundamentals. Infrastructure as Code (Terraform, AWS CDK) and containerisation (Docker). Experience automating mobile CI/CD and distributing to App/Play Stores. Knowledge of OWASP Mobile Top 10 and secure coding practices. Core Competencies Strong problem‑solving, debugging, and analytical skills. Clear written and verbal communication; ability to articulate technical decisions. Growth mindset and collaborative attitude within cross‑functional agile teams. Why join Wallo Pay Work on cutting-edge fintech solutions that prioritize security and user experience in a fully remote environment, offering flexibility to work from anywhere. Collaborate with a global, passionate, tight-knit team dedicated to quality and innovation during this 6-month contract with potential for future opportunities. 
Grow your skills in a modern tech stack with opportunities to impact end-to-end development. Join a project that can create a huge market impact.
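The posting asks for a test-first (TDD) habit. As a hedged illustration only, here is what that workflow can look like in Python with pytest; the team itself works in TypeScript with Jest/Jasmine, and the fee rules, function name, and figures below are invented for the example, not Wallo Pay's actual pricing.

```python
import pytest

def transaction_fee(amount_pence: int, domestic: bool = True) -> int:
    """Hypothetical rule: flat 20p plus 1.4% domestic / 2.9% international, in pence."""
    if amount_pence <= 0:
        raise ValueError("amount must be positive")
    rate = 0.014 if domestic else 0.029
    return 20 + round(amount_pence * rate)

# Tests written first, driving the implementation above.
def test_domestic_fee():
    assert transaction_fee(10_000) == 20 + 140          # 1.4% of 100.00

def test_international_fee():
    assert transaction_fee(10_000, domestic=False) == 20 + 290

def test_rejects_non_positive_amounts():
    with pytest.raises(ValueError):
        transaction_fee(0)
```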

Posted 3 weeks ago

Apply

5.0 - 12.0 years

0 Lacs

chennai, tamil nadu, india

On-site

Position Description Job Title: ETL Testing Engineer Experience: 5-12 Years Location: Bangalore Employment Type: Full-time Job Summary: We are looking for an experienced ETL/SQL/Python Testing Engineer to design, develop, and execute test cases for data pipelines, ETL processes, and database systems. The ideal candidate should have strong expertise in SQL querying, ETL validation, Python scripting, and test automation to ensure data accuracy, completeness, and performance. Technical Skills Required: ETL Testing: Strong experience in testing Informatica, Talend, SSIS, Databricks, or similar ETL tools. SQL: Advanced SQL skills (joins, aggregations, subqueries, stored procedures). Python: Proficiency in Python for test automation (Pandas, PySpark, PyTest). Databases: Hands-on experience with RDBMS (Oracle, SQL Server, PostgreSQL) & NoSQL (MongoDB, Cassandra). Big Data Testing (Good to Have): Hadoop, Hive, Spark, Kafka. Testing Tools: Knowledge of Selenium, Airflow, Great Expectations, or similar frameworks. Version Control: Git, GitHub/GitLab. CI/CD: Jenkins, Azure DevOps, or similar. Soft Skills: Strong analytical and problem-solving skills. Ability to work in Agile/Scrum environments. Good communication skills for cross-functional collaboration. Your future duties and responsibilities Develop and execute ETL test cases to validate data extraction, transformation, and loading processes. Write complex SQL queries to verify data integrity, consistency, and correctness across source and target systems. Automate ETL testing workflows using Python, PyTest, or other testing frameworks. Perform data reconciliation, schema validation, and data quality checks. Identify and report data anomalies, performance bottlenecks, and defects. Work closely with Data Engineers, Analysts, and Business Teams to understand data requirements. Design and maintain test data sets for validation. Implement CI/CD pipelines for automated ETL testing (Jenkins, GitLab CI, etc.). Document test results, defects, and validation reports. Required Qualifications To Be Successful In This Role Experience with cloud platforms (AWS, Azure). Knowledge of Data Warehousing concepts (Star Schema, Snowflake Schema). Certification in ETL Testing, SQL, or Python is a plus. Together, as owners, let’s turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you’ll reach your full potential because… You are invited to be an owner from day 1 as we work together to bring our Dream to life. That’s why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company’s strategy and direction. Your work creates value. You’ll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You’ll shape your career by joining a company built to grow and last. You’ll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team—one of the largest IT and business consulting services firms in the world.
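As an illustration of the automated validation this role calls for, here is a minimal pytest + pandas sketch of source-to-target reconciliation. File paths, column names, and the tolerance are assumptions rather than the client's actual pipeline, and a real suite would typically query the source and target databases directly.

```python
import pandas as pd
import pytest

@pytest.fixture
def source_and_target():
    source = pd.read_csv("source_extract.csv")              # staged extract from the source system
    target = pd.read_parquet("warehouse_fact.parquet")      # loaded fact table
    return source, target

def test_row_counts_match(source_and_target):
    source, target = source_and_target
    assert len(source) == len(target)

def test_measure_totals_reconcile(source_and_target):
    source, target = source_and_target
    # Aggregate check: totals per business key should agree within a small tolerance.
    s = source.groupby("order_id")["amount"].sum().sort_index()
    t = target.groupby("order_id")["amount"].sum().sort_index()
    pd.testing.assert_series_equal(s, t, check_exact=False, rtol=1e-9, check_dtype=False)

def test_no_duplicate_business_keys(source_and_target):
    _, target = source_and_target
    assert not target["order_id"].duplicated().any()
```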

Posted 4 weeks ago

Apply

7.0 years

18 Lacs

india

On-site

Job Description
Experience: 7 years
Location: Hyderabad

We are seeking a skilled and dynamic Azure Data Engineer to join our growing data engineering team. The ideal candidate will have a strong background in building and maintaining data pipelines and working with large datasets on the Azure cloud platform. The Azure Data Engineer will be responsible for developing and implementing efficient ETL processes, working with data warehouses, and leveraging cloud technologies such as Azure Data Factory (ADF), Azure Databricks, PySpark, and SQL to process and transform data for analytical purposes.

Key Responsibilities:
- Data Pipeline Development: Design, develop, and implement scalable, reliable, and high-performance data pipelines using Azure Data Factory (ADF), Azure Databricks, and PySpark.
- Data Processing: Develop complex data transformations, aggregations, and cleansing processes using PySpark and Databricks for big data workloads.
- Data Integration: Integrate and process data from various sources such as databases, APIs, cloud storage (e.g., Blob Storage, Data Lake), and third-party services into Azure Data Services.
- Optimization: Optimize data workflows and ETL processes to ensure efficient data loading, transformation, and retrieval while ensuring data integrity and high performance.
- SQL Development: Write complex SQL queries for data extraction, aggregation, and transformation. Maintain and optimize relational databases and data warehouses.
- Collaboration: Work closely with data scientists, analysts, and other engineering teams to understand data requirements and design solutions that meet business and analytical needs.
- Automation & Monitoring: Implement automation for data pipeline deployment and ensure monitoring, logging, and alerting mechanisms are in place for pipeline health.
- Cloud Infrastructure Management: Work with cloud technologies (e.g., Azure Data Lake, Blob Storage) to store, manage, and process large datasets.
- Documentation & Best Practices: Maintain thorough documentation of data pipelines, workflows, and best practices for data engineering solutions.

Job Type: Full-time
Pay: Up to ₹1,851,579.49 per year
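To ground the "transformations, aggregations, and cleansing" responsibility, here is a minimal PySpark sketch of a transform step of the kind this role would build. Storage paths, container names, columns, and the missing-value defaults are placeholders, and authentication to ADLS/Blob Storage is omitted entirely.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-sales-transform").getOrCreate()

# Read a raw landing-zone extract (placeholder ADLS path).
raw = (spark.read
       .option("header", "true")
       .option("inferSchema", "true")
       .csv("abfss://raw@examplelake.dfs.core.windows.net/sales/2024-06-01/"))

# Cleansing: drop duplicate transactions, default missing discounts, derive a date column.
clean = (raw.dropDuplicates(["transaction_id"])
            .na.fill({"discount": 0.0})
            .withColumn("sale_date", F.to_date("sale_ts")))

# Aggregation for downstream reporting.
daily = (clean.groupBy("sale_date", "store_id")
              .agg(F.sum("amount").alias("gross_sales"),
                   F.countDistinct("customer_id").alias("unique_customers")))

# Write a partitioned curated layer.
(daily.write.mode("overwrite")
      .partitionBy("sale_date")
      .parquet("abfss://curated@examplelake.dfs.core.windows.net/daily_sales/"))
```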

Posted 4 weeks ago

Apply

1.0 - 2.0 years

0 Lacs

chennai

On-site

Job Title: Data Analyst
Experience: 1-2 years

Job Description:
Write and optimize SQL queries to extract, analyze, and manage data from various sources.
Handle Excel-based reporting, data manipulation, and automation tasks.
Support and maintain ETL workflows.
Collaborate with stakeholders to understand data requirements and provide actionable insights.
Ensure data quality, consistency, and integrity across systems.

Preferred Qualifications:
Proficiency in SQL (writing queries, joins, aggregations, etc.).
Strong knowledge of MS Excel, including formulas, pivot tables, and data analysis.
Familiarity with or experience in Excel VBA for automation is a plus.
Knowledge of Talend Open Studio or similar ETL tools is an added advantage.
Excellent analytical and problem-solving skills.
Ability to work independently and manage multiple priorities effectively.
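As a small, hedged illustration of the SQL-plus-Excel reporting loop described above, here is a sketch that runs a join-and-aggregate query and exports an Excel-style pivot. It assumes the data sits in a local SQLite file and that pandas and openpyxl are installed; the table, column, and file names are invented.

```python
import sqlite3
import pandas as pd

conn = sqlite3.connect("sales.db")

# Join + aggregation in SQL, then hand the result to pandas for Excel-style pivoting.
query = """
    SELECT r.region, s.product, SUM(s.amount) AS revenue
    FROM sales s
    JOIN regions r ON r.region_id = s.region_id
    GROUP BY r.region, s.product
"""
df = pd.read_sql_query(query, conn)

# The same cross-tab a pivot table would give in Excel, written out as a report.
pivot = df.pivot_table(index="region", columns="product", values="revenue",
                       aggfunc="sum", fill_value=0)
pivot.to_excel("regional_revenue.xlsx", sheet_name="summary")
```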

Posted 4 weeks ago

Apply

3.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Coursera was launched in 2012 by Andrew Ng and Daphne Koller with a mission to provide universal access to world-class learning. It is now one of the largest online learning platforms in the world, with 183 million registered learners as of June 30, 2025 . Coursera partners with over 350 leading university and industry partners to offer a broad catalog of content and credentials, including courses, Specializations, Professional Certificates, and degrees. Coursera's platform innovations enable instructors to deliver scalable, personalized, and verified learning experiences to their learners. Institutions worldwide rely on Coursera to upskill and reskill their employees, citizens, and students in high-demand fields such as GenAI, data science, technology, and business. Coursera is a Delaware public benefit corporation and a B Corp. Join us in our mission to create a world where anyone, anywhere can transform their life through access to education. We're seeking talented individuals who share our passion and drive to revolutionize the way the world learns. At Coursera, we are committed to building a globally diverse team and are thrilled to extend employment opportunities to individuals in any country where we have a legal entity. We require candidates to possess eligible working rights and have a compatible timezone overlap with their team to facilitate seamless collaboration. Coursera has a commitment to enabling flexibility and workspace choices for employees. Our interviews and onboarding are entirely virtual, providing a smooth and efficient experience for our candidates. As an employee, we enable you to select your main way of working, whether it's from home, one of our offices or hubs, or a co-working space near you. About the Role We at Coursera are seeking a highly skilled and collaborative Data Analyst to join our analytics and insights team. The ideal candidate will have 3+ years of experience, with a strong focus on leveraging data and predictive modeling to drive impactful business decisions. This role offers a unique opportunity to work at the intersection of data analysis and data science—building robust dashboards, performing deep-dive analyses, and creating forecasting models to inform strategic initiatives and fuel growth. Key Responsibilities Conduct in-depth data analyses to uncover trends, identify opportunities, and inform key business strategies across Product, CS, and Finance. Develop, maintain, and optimize dashboards, reports, and self-serve data products. Collaborate with stakeholders to define and measure critical KPIs. Build and validate predictive models for churn risk, revenue forecasting, and growth opportunities. Use NLP and AI models to analyze unstructured data (e.g., support tickets, sentiment data, executive engagement) and extract actionable themes and signals. Drive insights-based storytelling—translating data into clear, impactful recommendations. Partner closely with data engineering and product teams to ensure data integrity and enable decision-making. Qualifications Education: Bachelor's degree in Statistics, Data Science, Computer Science, Economics, or a related quantitative field Experience: 3+ years of experience in data analysis, data science, or analytics roles Proven ability to drive insightful analysis and impactful recommendations Experience building predictive models for revenue forecasting, churn risk, or related areas Technical Skills: Proficiency in SQL for data extraction, data cleaning, and transformation. 
Strong programming skills in Python (Pandas, NumPy, Scikit-Learn, etc.) for data analysis and model development. Experience with data visualization tools (e.g., Tableau, Looker, Power BI) to build clear, actionable dashboards. Ability to apply statistical methods and A/B testing frameworks to solve business problems. Familiarity with NLP/AI techniques to extract insights from unstructured data such as support tickets, customer feedback, and sentiment data. Understanding of data engineering principles to ensure data quality and readiness for analysis. Comfort working with large datasets and performing complex joins, aggregations, and data modeling. Coursera is an Equal Employment Opportunity Employer and considers all qualified applicants without regard to race, color, religion, sex, sexual orientation, gender identity, age, marital status, national origin, protected veteran status, disability, or any other legally protected class. If you are an individual with a disability and require a reasonable accommodation to complete any part of the application process, please contact us at accommodations@coursera.org. For California Candidates, please review our CCPA Applicant Notice here. For our Global Candidates, please review our GDPR Recruitment Notice here.
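For the predictive-modeling side of the role (for example, churn risk), here is a minimal scikit-learn sketch on synthetic data. The features, label construction, and model choice are illustrative assumptions only; they are not Coursera's data or methodology.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 5_000

# Synthetic learner-engagement features (invented for the example).
df = pd.DataFrame({
    "logins_last_30d": rng.poisson(6, n),
    "courses_started": rng.poisson(2, n),
    "support_tickets": rng.poisson(0.5, n),
})
# Synthetic label: less activity and more tickets push churn probability up.
logit = 0.5 - 0.15 * df["logins_last_30d"] - 0.3 * df["courses_started"] + 0.6 * df["support_tickets"]
df["churned"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="churned"), df["churned"], test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("test AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```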

Posted 1 month ago

Apply