
6 Data Framework Jobs

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on each job portal.

10.0 - 12.0 years

25 - 35 Lacs

Bengaluru

Work from Office

QA Lead - Automation & Strategy

We are looking for a strategic and hands-on QA Lead who balances strong technical automation capabilities with leadership excellence. This role is ideal for someone who can define the QA vision, influence quality culture across teams, and implement robust test strategies for modern, cloud-based systems. The ideal candidate will own quality outcomes, collaborate across teams, and lead the charge on both functional and non-functional validation.

Job Responsibilities:
- Lead and mentor QA engineers across automation and manual testing.
- Define, drive, and continuously improve automation strategy across multiple projects.
- Review and approve test strategies, test plans, and test cases.
- Own test governance, including traceability, risk management, and release sign-off.
- Collaborate closely with business, product, development, and DevOps teams to ensure quality from requirements to release.
- Proactively identify quality gaps and implement preventive measures.
- Contribute to and review automation architecture and frameworks (Selenium, Playwright, Python, Java).
- Drive an automation-first mindset and integrate automated tests into CI/CD pipelines.
- Provide estimations, effort planning, and resource allocation for QA deliverables.
- Drive root cause analysis for defects and production incidents; implement corrective and preventive actions.
- Ensure test coverage for backend validations using SQL and ETL checks.
- Oversee cloud-based testing strategies for applications hosted on Azure, AWS, or GCP.
- Ensure adherence to Agile best practices and continuously optimize QA processes.

Required Skills:
- 10-12 years of overall experience in Quality Assurance, with at least 3 years in a QA Lead role.
- Intermediate-level proficiency in Python for scripting and automation.
- Strong hands-on experience in test automation using tools such as Selenium, Playwright, or equivalent frameworks.
- Demonstrated leadership and decision-making skills in fast-paced, agile environments.
- Excellent communication and stakeholder engagement abilities across cross-functional teams.
- Deep expertise in ETL and web application testing.
- Practical experience working with cloud platforms such as Azure, AWS, or GCP.
- Excellent analytical and problem-solving skills, with the ability to understand complex data relationships.
- Strong communication and interpersonal skills, with the ability to collaborate effectively with cross-functional teams and present technical concepts to non-technical stakeholders.

Ideal Background:
- Experience working in data-driven environments (ETL, DWH, cloud data pipelines).

ABOUT US: OUR WORKPLACE IS FUN AND FAST-PACED:
We are Cervello. We believe in the power of connected data. We are laser-focused on helping organizations harness the interconnectedness of digital, data, and decision-making. We are problem solvers and builders focused on helping our clients win with data. Our culture is cool and innovative. Our environment is casual and conducive to collaboration and problem solving. We take our work seriously, but not ourselves. It's the perfect balance of freedom and accountability. If you want to be part of something great, join us!

Equal Employment Opportunity and Nondiscrimination:
Cervello prides itself on providing a culture that allows employees to bring their best selves to work every day. Our people can feel comfortable, confident, and joyful to do great things for our firm, our teams, and our clients. Cervello aims to build diverse capabilities to help our clients solve their most mission-critical problems. Cervello is committed to building a diverse, unbiased, and inclusive workforce.
Cervello is an equal opportunity employer; we recruit, hire, train, promote, develop, and provide other conditions of employment without regard to a person's gender identity or expression, sexual orientation, race, religion, age, national origin, disability, marital status, pregnancy status, veteran status, genetic information, or any other differences consistent with applicable laws. This includes providing reasonable accommodation for disabilities or religious beliefs and practices. Members of communities historically underrepresented in analytics and consulting are encouraged to apply. Revised 23/7/2025
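The automation-framework review work this role describes usually centers on the page-object pattern: test logic talks to a page API rather than raw locators, so UI changes touch one class instead of every test. A minimal sketch of that structure, with a hypothetical FakeDriver standing in for a real Selenium/Playwright driver so the example runs anywhere:

```python
class FakeDriver:
    """Hypothetical stand-in for a Selenium/Playwright driver; stores form fields."""
    def __init__(self):
        self.fields = {}

    def fill(self, name, value):
        self.fields[name] = value

    def submit(self):
        # A real driver would submit the form; here we just require non-empty inputs.
        return all(self.fields.get(k) for k in ("user", "password"))


class LoginPage:
    """Page object: tests call login(), never raw locators or field names."""
    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.fill("user", user)
        self.driver.fill("password", password)
        return self.driver.submit()


print(LoginPage(FakeDriver()).login("qa-lead", "s3cret"))  # True
print(LoginPage(FakeDriver()).login("qa-lead", ""))        # False
```

In a real framework the same `LoginPage` class is reused across suites, and the driver is swapped per environment, which is what makes the pattern CI/CD-friendly.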

Posted 1 week ago

Apply

2.0 - 5.0 years

5 - 15 Lacs

Bengaluru

Work from Office

Design, develop, and optimize SQL queries for PostgreSQL, MySQL, and Snowflake, and maintain ETL pipelines using Python for data processing and migration.
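As a rough illustration of the SQL-plus-Python ETL work described, here is a minimal extract-transform-load step. It uses the stdlib `sqlite3` module in place of PostgreSQL/MySQL/Snowflake, and the table and column names are invented for the example:

```python
import sqlite3

def run_etl(rows):
    """Load raw rows into a staging table, then aggregate in SQL."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE staging (name TEXT, amount REAL)")
    # Extract + load: bulk insert the source rows.
    con.executemany("INSERT INTO staging VALUES (?, ?)", rows)
    # Transform: aggregate in SQL rather than in Python, so the database
    # engine does the heavy lifting on large datasets.
    cur = con.execute(
        "SELECT name, SUM(amount) FROM staging GROUP BY name ORDER BY name"
    )
    return cur.fetchall()

print(run_etl([("a", 1.0), ("b", 2.0), ("a", 3.0)]))  # [('a', 4.0), ('b', 2.0)]
```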

Posted 1 month ago

Apply

6.0 - 11.0 years

15 - 30 Lacs

Gurugram

Work from Office

Summary:
Translate business requirements into specifications to implement workflow testing from potentially multiple data sources using Python solutions. Develop an understanding of data structures, including large-volume and complex databases, and implement automation solutions leveraging Python for quick data comparisons and exception analysis. Work in finance processes and systems.

Skills Required:
- Degree qualification in information technology
- Minimum 4-5 years of post-qualification experience
- Strong knowledge of Python and other testing tools
- Comprehensive knowledge of automation testing
- Knowledge of Git
- Proficiency with various frameworks and understanding of API features
- Knowledge of various databases
- Familiarity with the architecture and features of different types of applications
- Understanding of CI/CD

Intake call notes:
The position involves contributing to the development of a fully automated product, with responsibilities centered around coding, test automation, and architecture rather than manual testing. The ideal candidate will have strong expertise in Python, with hands-on experience in both backend development and testing frameworks. Proficiency with Django and API development is required, along with a solid understanding of building data-heavy applications and UI integration. This is a senior-level role suited to technically strong individuals with at least 8 years of experience (though we are open to less-tenured but highly skilled candidates). We're seeking someone with a strong engineering background who can take ownership, contribute to architectural decisions, and collaborate closely with the development team. Candidates with prior managerial experience are welcome, provided they remain technically hands-on. Domain background is flexible, as long as the candidate brings a solid foundation in software engineering and a product-building mindset.

If interested, please share your resume with sunidhi.manhas@portraypeople.com
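The "quick data comparisons and exception analysis" task above boils down to diffing two sources on a key and reporting mismatches. A minimal sketch, with hypothetical field names (a finance workflow would compare real ledger extracts):

```python
def compare_sources(source_a, source_b, key="id"):
    """Return the keys of records that differ between two data sources,
    including records missing from source_b entirely."""
    index_b = {row[key]: row for row in source_b}
    exceptions = []
    for row in source_a:
        other = index_b.get(row[key])
        if other is None or other != row:
            exceptions.append(row[key])
    return exceptions

ledger = [{"id": 1, "amount": 100}, {"id": 2, "amount": 250}]
feed   = [{"id": 1, "amount": 100}, {"id": 2, "amount": 260}]
print(compare_sources(ledger, feed))  # [2]
```

Keying on an ID rather than comparing row order makes the check robust to sources that deliver records in different sequences.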

Posted 1 month ago

Apply

5.0 - 10.0 years

16 - 20 Lacs

Hyderabad

Work from Office

We are hiring a Data Engineer for a US-based IT company in Hyderabad. Candidates with a minimum of 5 years of experience in data engineering can apply. This job is for a 1-year contract only.

Job Title: Data Engineer
Location: Hyderabad
CTC: Up to 20 LPA
Experience: 5+ Years

Job Overview:
We are looking for a seasoned Senior Data Engineer with deep hands-on experience in Talend and IBM DataStage to join our growing enterprise data team. This role will focus on designing and optimizing complex data integration solutions that support enterprise-wide analytics, reporting, and compliance initiatives. In this senior-level position, you will collaborate with data architects, analysts, and key stakeholders to facilitate large-scale data movement, enhance data quality, and uphold governance and security protocols.

Key Responsibilities:
- Develop, maintain, and enhance scalable ETL pipelines using Talend and IBM DataStage
- Partner with data architects and analysts to deliver efficient and reliable data integration solutions
- Review and optimize existing ETL workflows for performance, scalability, and reliability
- Consolidate data from multiple sources, both structured and unstructured, into data lakes and enterprise platforms
- Implement rigorous data validation and quality assurance procedures to ensure data accuracy and integrity
- Adhere to best practices for ETL development, including source control and automated deployment
- Maintain clear and comprehensive documentation of data processes, mappings, and transformation rules
- Support enterprise initiatives around data migration, modernization, and cloud transformation
- Mentor junior engineers and participate in code reviews and team learning sessions

Required Qualifications:
- Minimum 5 years of experience in data engineering or ETL development
- Proficient with Talend (Open Studio and/or Talend Cloud) and IBM DataStage
- Strong skills in SQL, data profiling, and performance tuning
- Experience handling large datasets and complex data workflows
- Solid understanding of data warehousing, data modeling, and data lake architecture
- Familiarity with version control systems (e.g., Git) and CI/CD pipelines
- Strong analytical and troubleshooting skills
- Effective verbal and written communication, with strong documentation habits

Preferred Qualifications:
- Prior experience in banking or financial services
- Exposure to cloud platforms such as AWS, Azure, or Google Cloud
- Knowledge of data governance tools (e.g., Collibra, Alation)
- Awareness of data privacy regulations (e.g., GDPR, CCPA)
- Experience working in Agile/Scrum environments

For further assistance, contact/WhatsApp 9354909518 or write to priya@gist.org.in
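The "data validation and quality assurance" responsibility typically means a post-load reconciliation between source and target: matching row counts and a column checksum. In Talend or DataStage this would be a post-job check; the sketch below shows the idea in plain Python with invented field names:

```python
def reconcile(source_rows, target_rows, amount_field="amount"):
    """Post-load reconciliation: compare row counts and a column-sum
    checksum between a source extract and the loaded target."""
    return {
        "row_count": len(source_rows) == len(target_rows),
        "amount_sum": (
            sum(r[amount_field] for r in source_rows)
            == sum(r[amount_field] for r in target_rows)
        ),
    }

src = [{"amount": 10}, {"amount": 20}]
tgt = [{"amount": 10}, {"amount": 20}]
print(reconcile(src, tgt))  # {'row_count': True, 'amount_sum': True}
```

A failed check would normally halt the pipeline and raise an alert rather than silently publishing incomplete data.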

Posted 1 month ago

Apply

6.0 - 8.0 years

15 - 25 Lacs

Hyderabad

Remote

Job Title: Data Engineer II
Experience: 6+ Years
Location: Remote (India)
Job Type: Full-time

Job Description:
We are looking for a highly skilled Data Engineer II with 6+ years of experience, including at least 4 years in data engineering or software development. The ideal candidate will be well-versed in building scalable data solutions using modern data ecosystems and cloud platforms.

Key Responsibilities:
- Design, build, and optimize scalable ETL pipelines.
- Work extensively with Big Data technologies like Snowflake and Databricks.
- Write and optimize complex SQL queries for large datasets.
- Define and manage SLAs, performance benchmarks, and monitoring systems.
- Develop data solutions using the AWS data ecosystem, including S3, Lambda, and more.
- Handle both relational (e.g., PostgreSQL) and NoSQL databases.
- Work with programming languages like Python, Java, and/or Scala.
- Use Linux command-line tools for system and data operations.
- Implement best practices in data lineage, data quality, data observability, and data discoverability.

Preferred (Nice-to-Have):
- Experience with data mesh architecture or building distributed data products.
- Prior exposure to data governance frameworks.
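Of the responsibilities above, "define and manage SLAs ... and monitoring systems" is the most mechanical: a monitor compares each pipeline's last successful run against a freshness SLA and flags breaches. A minimal sketch; the two-hour threshold and function name are assumptions for illustration:

```python
from datetime import datetime, timedelta

def breaches_sla(last_success, now, sla=timedelta(hours=2)):
    """True when the most recent successful pipeline run is older than
    the freshness SLA (here assumed to be 2 hours)."""
    return now - last_success > sla

now = datetime(2025, 1, 1, 12, 0)
print(breaches_sla(datetime(2025, 1, 1, 9, 0), now))   # True  (3h stale)
print(breaches_sla(datetime(2025, 1, 1, 11, 0), now))  # False (1h old)
```

In production the same check usually runs on a schedule and feeds an alerting system rather than printing.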

Posted 2 months ago

Apply

6 - 8 years

12 - 16 Lacs

Hyderabad

Remote

Job Title: Data Engineer

Job Summary:
Are you passionate about building scalable data pipelines, optimizing ETL processes, and designing efficient data models? We are looking for a Databricks Data Engineer to join our team and play a key role in managing and transforming data in Azure cloud environments. In this role, you will work with Azure Data Factory (ADF), Databricks, Python, and SQL to develop robust data ingestion and transformation workflows. You'll also be responsible for integration, performance optimization, and data quality & governance. If you have strong experience in big data processing, distributed computing (Spark), and data modeling, we'd love to hear from you!

Key Responsibilities:
1. Develop & Optimize ETL Pipelines: Build robust and scalable data pipelines using ADF, Databricks, and Python for data ingestion, transformation, and loading.
2. Data Modeling & Systematic Layer Modeling: Design logical, physical, and systematic data models for structured and unstructured data.
3. Database Management: Develop and optimize SQL queries, stored procedures, and indexing strategies to enhance performance.
4. Big Data Processing: Work with Azure Databricks for distributed computing, Spark for large-scale processing, and Delta Lake for optimized storage.
5. Data Quality & Governance: Implement data validation, lineage tracking, and security measures for high-quality, compliant data.
6. Collaboration: Work closely with business analysts, data scientists, and DevOps teams to ensure data availability and usability.
7. Testing and Debugging: Write unit tests and perform debugging to ensure the implementation is robust and error-free. Conduct performance optimization and security audits.

Required Skills and Qualifications:
- Azure Cloud Expertise: Strong experience in Azure Data Factory (ADF), Databricks, and Azure Synapse.
- Programming: Proficiency in Python for data processing, automation, and scripting.
- SQL & Database Skills: Advanced knowledge of SQL, T-SQL, or PL/SQL for data manipulation.
- Data Modeling: Hands-on experience in dimensional modeling, systematic layer modeling, and entity-relationship modeling.
- Big Data Frameworks: Strong understanding of Apache Spark, Delta Lake, and distributed computing.
- Performance Optimization: Expertise in query optimization, indexing, and performance tuning.
- Data Governance & Security: Knowledge of RBAC, encryption, and data privacy standards.

Preferred Qualifications:
- Experience with CI/CD for data pipelines using Azure DevOps.
- Knowledge of Kafka/Event Hub for real-time data processing.
- Experience with Power BI/Tableau for data visualization (not mandatory, but a plus).
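The "query optimization, indexing, and performance tuning" skill comes down to reading query plans: an index on a filtered column turns a full table scan into an index search. The sketch below demonstrates this with stdlib `sqlite3` and an invented table; a role like this one would do the same with T-SQL/PL-SQL and engine-specific plan viewers:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, amount REAL)")

# Plan before indexing: the engine must scan the whole table.
plan_before = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM sales WHERE region = 'APAC'"
).fetchall()

con.execute("CREATE INDEX idx_region ON sales (region)")

# Plan after indexing: the equality filter can use the index instead.
plan_after = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM sales WHERE region = 'APAC'"
).fetchall()

print(plan_before[0][3])  # e.g. "SCAN sales"
print(plan_after[0][3])   # e.g. "SEARCH sales USING INDEX idx_region (region=?)"
```

The plan text varies by SQLite version, but the scan-to-search shift is the point; the same before/after comparison is the everyday workflow of performance tuning.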

Posted 2 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies