
267 Aggregations Jobs - Page 3

JobPe aggregates these listings for easy access, but you apply directly on the original job portal.

3.0 years

5 - 9 Lacs

Gurgaon

On-site

Lead Assistant Manager (EXL/LAM/1391732), Services, Gurgaon. Posted On: 01 Jul 2025. End Date: 15 Aug 2025. Required Experience: 3 - 7 Years. Number of Positions: 2. Band: B2 (Lead Assistant Manager). Cost Code: D010428. Campus/Non-Campus: Non-Campus. Employment Type: Permanent. Requisition Type: New. Max CTC: 1,200,000 - 1,800,000. Complexity Level: Not Applicable. Work Type: Hybrid (working partly from home and partly from office). Organisational Group: Analytics. Sub Group: Analytics - AUS & APAC. Organization: Services. LOB: Services. SBU: Analytics. Country: India. City: Gurgaon. Center: Gurgaon-SEZ BPO Solutions. Skills: Tableau, Tableau Developer, SQL. Minimum Qualification: B.Tech/BE, MCA, MSc. Certification: No data available. Workflow Type: L&S-DA-Consulting.

Job Description: Tableau developer with BFSI domain experience (preferred) and about 3+ years of experience in Tableau development. Good hands-on experience writing SQL queries. Has worked in Agile methodology. Key responsibilities: understand the functional and technical specifications; understand the basics of data modelling; analyse requirements, systems, and source databases; gather reporting requirements from the customer; provide estimates for reports based on their complexity; design, develop, and implement Tableau Business Intelligence reports in the latest version, staying aware of differences between old and new versions of Tableau; create basic calculations including string manipulation, basic arithmetic, custom aggregations and ratios, date math, logic statements, and quick table calculations.
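For context on the SQL skills this posting asks for, the snippet below is a minimal illustrative sketch (not part of the listing) of a grouped aggregation with a custom ratio, using Python's built-in sqlite3 module and a made-up sales table.

```python
import sqlite3

# Illustrative only: a custom aggregation and ratio of the kind the role
# describes, against a hypothetical "sales" table in an in-memory SQLite DB.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, product TEXT, revenue REAL, cost REAL);
    INSERT INTO sales VALUES
        ('North', 'A', 1200, 800),
        ('North', 'B', 600,  450),
        ('South', 'A', 900,  700);
""")

# Custom aggregation: revenue, cost, and a margin ratio per region.
query = """
    SELECT region,
           SUM(revenue)                             AS total_revenue,
           SUM(cost)                                AS total_cost,
           ROUND(1.0 - SUM(cost) / SUM(revenue), 3) AS margin_ratio
    FROM sales
    GROUP BY region
    ORDER BY total_revenue DESC;
"""
for row in conn.execute(query):
    print(row)   # e.g. ('North', 1800.0, 1250.0, 0.306)
```

In Tableau terms, the margin_ratio column is the kind of calculated field a developer might instead push into a custom SQL data source.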

Posted 2 weeks ago

Apply

2.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description You’re ready to gain the skills and experience needed to grow within your role and advance your career — and we have the perfect software engineering opportunity for you. As a Software Engineer II at JPMorganChase within the Corporate Technology, you are part of an agile team that works to enhance, design, and deliver the software components of the firm’s state-of-the-art technology products in a secure, stable, and scalable way. As an emerging member of a software engineering team, you execute software solutions through the design, development, and technical troubleshooting of multiple components within a technical product, application, or system, while gaining the skills and experience needed to grow within your role. Job Responsibilities Supports review of controls to ensure sufficient protection of enterprise data Responsible for advising and making custom configuration changes in one to two tools to generate a product at the business or customer request Updates logical or physical data models based on new use cases Frequently uses SQL and understands NoSQL databases and their niche in the marketplace Develop the required data pipelines for moving the data from On-prem to AWS/Cloud platforms. Perform user acceptance testing and deliver demos to stakeholders by SQL queries or Python scripts. Perform data analysis to define / support model development including metadata and data dictionary documentation that will enable data analysis and analytical exploration Participate in strategic projects and provide ideas and inputs on ways to leverage quantitative analytics to generate actionable business insights and/or solutions to influence business strategies and identify opportunities to grow Partners closely with business partners to identify impactful projects, influence key decisions with data, and ensure client satisfaction Adds to team culture of diversity, equity, inclusion, and respect Work on innovative solutions using modern technologies and products to enhance customer experiences Required Qualifications, Capabilities, And Skills Formal training or certification on software engineering* concepts and 2+ years applied experience hands-on development experience and knowledge of Cloud, preferably AWS Cloud. Hands-on experience in migrating relational Databases to NoSQL/Bigdata in Cloud. Experience across the data lifecycle Advanced at SQL (e.g., joins and aggregations, SQL analytical functions). Hands on experience in handling JSON data in SQL. Working understanding of NoSQL databases like TigerGraph, MongoDB, or any other NoSQL DB. Hands on experience in building BigData warehouse using applications. Hands on experience with cloud computing, AWS. Experience with query processing and tuning reports. Experience with ETL and processing real-time data. Experience with Big data technologies like PySpark. Preferred Qualifications, Capabilities, And Skills Databricks experience of 1-2 years. PySpark experience of 3-4 years. ETL, Datawarehouse, Lakehouse experience of 3-4 years. About Us JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world’s most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management. 
We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants’ and employees’ religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation. About The Team Our Corporate Technology team relies on smart, driven people like you to develop applications and provide tech support for all our corporate functions across our network. Your efforts will touch lives all over the financial spectrum and across all our divisions: Global Finance, Corporate Treasury, Risk Management, Human Resources, Compliance, Legal, and within the Corporate Administrative Office. You’ll be part of a team specifically built to meet and exceed our evolving technology needs, as well as our technology controls agenda.
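As a rough illustration of the "SQL analytical functions" and JSON-handling skills this posting lists, here is a hedged PySpark sketch; the S3 path, schema, and column names are assumptions, not details from the listing.

```python
from pyflink_placeholder import nothing  # (remove) -- see note below
```

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

# Hypothetical sketch: read JSON events, aggregate daily amounts, then apply an
# analytical (window) function. Path and column names are illustrative only.
spark = SparkSession.builder.appName("json-aggregation-sketch").getOrCreate()

events = spark.read.json("s3://example-bucket/events/")        # assumed path

daily = (
    events
    .withColumn("day", F.to_date("event_ts"))
    .groupBy("account_id", "day")
    .agg(F.sum("amount").alias("daily_amount"))
)

# Running total per account, ordered by day.
w = Window.partitionBy("account_id").orderBy("day")
daily.withColumn("running_total", F.sum("daily_amount").over(w)).show()
```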

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Key Responsibilities: Tableau expert (SME) with 6+ years of experience in Tableau. Understand the functional and technical specifications; analyse requirements, systems, and source databases. Gather reporting requirements from the customer and provide estimates for reports based on their complexity. Design, develop, and implement Tableau Business Intelligence reports in the latest version, staying aware of differences between old and new versions of Tableau. Create basic calculations including string manipulation, basic arithmetic, custom aggregations and ratios, date math, logic statements, and quick table calculations. Create presentation layers for dashboard development. Apply Basel III domain knowledge, IFRS 9, ECL, RWA calculations, and capital calculations. Apply knowledge of banking products and related metrics in credit risk, regulatory reporting, etc., and how to present them in a dashboard. Create attribution reports to explain patterns and analysis of key reported metrics. Represent data using visualizations such as charts, trend lines, reference lines, and statistical techniques to describe the data. Use the Measure Names and Measure Values fields to create visualizations with multiple measures and dimensions. Own dashboard design, look and feel, and development. Use parameters and input controls to give users control over certain values. Develop, organize, manage, and maintain graph, table, slide, and document templates that allow for efficient creation of reports. Provide demos to end users on how to run reports and download them from the connection, and prepare the related documentation. Use Framework Manager to create query subjects and query items. Create transactional, cell-based, and crosstab reports. Create prompts and user-defined SQL, and create jobs for scheduling reports. Create report views and shortcuts. Liaise with other teams (e.g., Infrastructure / Database) where required for problem investigation and resolution.

Skills (Must have): 6+ years of experience in analysis, design, development, and testing of Business Intelligence applications. Tableau Desktop and Server. Tableau dashboard development and migration from old to new versions, and migration from Excel to Tableau. Strong understanding of banking products such as mortgages, credit cards, loans, and advances. Basel III domain knowledge and/or IFRS 9, ECL, RWA calculations, capital calculations. Knowledge of banking products and related metrics in credit risk, regulatory reporting, etc., and how to present them in a dashboard. Hands-on experience working on capital metrics such as PD, EAD, LGD, and RWA actuals calculation/interpretation and capital computations. Awareness of APS 112, APS 113, and other relevant APRA regulations. Self-driven, able to work independently, with strong problem-solving skills and excellent communication.

Good to Have: Banking domain knowledge. Business analysis. Jira and Confluence. Tableau certification (such candidates will be given preference).

Candidate Profile: Bachelor's/Master's degree in computer science/engineering, operations research, or related analytics areas. Strong and in-depth understanding of Tableau and development skills. Data analysis experience. Superior analytical and problem-solving skills. Outstanding written and verbal communication skills, management qualities, the ability to work in a team, and the ability to communicate effectively at all levels of the development process. Self-starter with drive, initiative, and a positive attitude. Able to meet very stringent deadlines and always deliver results, even under pressure.

What We Offer: EXL Analytics offers an exciting, fast-paced, and innovative environment that brings together a group of sharp and entrepreneurial professionals who are eager to influence business decisions. From your very first day, you get an opportunity to work closely with highly experienced, world-class analytics consultants. There is potential to develop the client contract into a longer-term engagement or other roles in ANZ analytics practices. You can expect to learn many aspects of the businesses our clients engage in, as well as effective teamwork and time-management skills, which are key for personal and professional growth. Analytics requires different skill sets at different levels within the organisation. At EXL Analytics, we invest heavily in training you in all aspects of analytics as well as in leading analytical tools and techniques, and we provide guidance and coaching to every employee through our mentoring/training program. The sky is the limit for our team members. The unique experiences gathered at EXL Analytics set the stage for further growth and development in our company and beyond.
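To make the credit-risk vocabulary in this posting concrete, the following pandas sketch computes the standard expected-loss figure EL = PD x LGD x EAD and aggregates it by portfolio. All numbers and column names are invented for illustration; the RWA and capital calculations referenced in the listing follow more involved Basel III formulas.

```python
import pandas as pd

# Illustrative sketch only: an expected-loss style aggregation behind the
# credit-risk metrics the posting mentions; all figures are made up.
exposures = pd.DataFrame({
    "portfolio": ["Mortgages", "Mortgages", "Credit Cards"],
    "ead":       [250_000.0, 180_000.0, 12_000.0],   # exposure at default
    "pd":        [0.012, 0.020, 0.065],              # probability of default
    "lgd":       [0.25, 0.25, 0.80],                 # loss given default
})

exposures["expected_loss"] = exposures["pd"] * exposures["lgd"] * exposures["ead"]

summary = (
    exposures.groupby("portfolio", as_index=False)
             .agg(total_ead=("ead", "sum"), total_el=("expected_loss", "sum"))
)
print(summary)   # one row per portfolio, ready to surface in a dashboard
```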

Posted 2 weeks ago

Apply

0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Tableau developer with BFSI domain experience (preferred) and about 3+ years of experience in Tableau development. Good hands-on experience writing SQL queries. Has worked in Agile methodology. Key Responsibilities: Understand the functional and technical specifications. Understand the basics of data modelling; analyse requirements, systems, and source databases. Gather reporting requirements from the customer. Provide estimates for reports based on their complexity. Design, develop, and implement Tableau Business Intelligence reports in the latest version, staying aware of differences between old and new versions of Tableau. Create basic calculations including string manipulation, basic arithmetic, custom aggregations and ratios, date math, logic statements, and quick table calculations.

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

We will consider only those candidates who fill out this Google form: https://forms.gle/c6iTNX77J17FUY3x5 QA Automation Engineer | Hyderabad | Hybrid | We are hiring an experienced QA Automation Engineer to support enterprise-scale automation initiatives for a US-based multinational banking institution. This is a long-term, individual contributor opportunity requiring strong hands-on expertise in Selenium (Java), SQL, and API testing. Key Details Location: Hyderabad (Hybrid, 3 days/week from office) Type: Full-Time, Individual Contributor Client: US-based multinational banking institution Joining: Immediate preferred Must-Have Expertise 7+ years overall QA Automation experience with strong engineering ownership Extensive hands-on with Selenium WebDriver (Java) : page objects, explicit waits, exception handling for complex dynamic UIs Proficiency in SQL for multi-table joins, aggregations, and subqueries to validate backend data (Oracle/MySQL/SQL Server) Solid experience with API testing via Postman , including headers/auth tokens, status codes, payload assertions Working knowledge of JUnit/TestNG , Maven/Gradle, Git/GitHub, JIRA, and Agile/Scrum methodologies Ability to independently design, debug, and execute test automation suites Nice-to-Have Exposure to REST Assured for API automation Familiarity with BDD frameworks (Cucumber feature files) Awareness of CI/CD tools like Jenkins, GitHub Actions, and test reporting via Extent Reports , Allure Responsibilities Design, implement, and maintain automation scripts across UI and backend services Perform data validations with complex SQL queries Validate REST APIs using Postman Collaborate closely with developers, BAs, and product owners within Agile sprints Contribute to defect triaging, root cause analysis, and continuous improvement of automation frameworks We will consider only those candidates who fill out this Google form: https://forms.gle/c6iTNX77J17FUY3x5 If you are a skilled QA Automation Engineer looking to work on challenging, enterprise-scale solutions within a collaborative Agile environment, we’d love to connect!
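The posting asks for Selenium WebDriver with Java; purely for illustration, here is the same page-object and explicit-wait pattern sketched in Python. The URL and locators are hypothetical.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

class LoginPage:
    """Minimal page object: locators and waits live here, not in the test."""
    def __init__(self, driver):
        self.driver = driver
        self.wait = WebDriverWait(driver, timeout=10)   # explicit wait

    def login(self, user, password):
        # Wait for the username field before interacting with a dynamic UI.
        self.wait.until(EC.visibility_of_element_located((By.ID, "username"))).send_keys(user)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.CSS_SELECTOR, "button[type='submit']").click()
        return self.wait.until(EC.title_contains("Dashboard"))

driver = webdriver.Chrome()
try:
    driver.get("https://example-bank.test/login")   # hypothetical URL
    LoginPage(driver).login("qa_user", "secret")
finally:
    driver.quit()
```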

Posted 2 weeks ago

Apply

4.0 years

20 Lacs

Hyderābād

On-site

Job Description : We are seeking a skilled and dynamic Azure Data Engineer to join our growing data engineering team. The ideal candidate will have a strong background in building and maintaining data pipelines and working with large datasets on the Azure cloud platform. The Azure Data Engineer will be responsible for developing and implementing efficient ETL processes, working with data warehouses, and leveraging cloud technologies such as Azure Data Factory (ADF), Azure Databricks, PySpark, and SQL to process and transform data for analytical purposes. Key Responsibilities : - Data Pipeline Development : Design, develop, and implement scalable, reliable, and high-performance data pipelines using Azure Data Factory (ADF), Azure Databricks, and PySpark. - Data Processing : Develop complex data transformations, aggregations, and cleansing processes using PySpark and Databricks for big data workloads. - Data Integration : Integrate and process data from various sources such as databases, APIs, cloud storage (e.g., Blob Storage, Data Lake), and third-party services into Azure Data Services. - Optimization : Optimize data workflows and ETL processes to ensure efficient data loading, transformation, and retrieval while ensuring data integrity and high performance. - SQL Development : Write complex SQL queries for data extraction, aggregation, and transformation. Maintain and optimize relational databases and data warehouses. - Collaboration : Work closely with data scientists, analysts, and other engineering teams to understand data requirements and design solutions that meet business and analytical needs. - Automation & Monitoring : Implement automation for data pipeline deployment and ensure monitoring, logging, and alerting mechanisms are in place for pipeline health. - Cloud Infrastructure Management : Work with cloud technologies (e.g., Azure Data Lake, Blob Storage) to store, manage, and process large datasets. - Documentation & Best Practices : Maintain thorough documentation of data pipelines, workflows, and best practices for data engineering solutions. Job Type: Full-time Pay: Up to ₹2,000,000.00 per year Experience: Azure: 4 years (Required) Python: 4 years (Required) SQL: 4 years (Required) Work Location: In person
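A hedged sketch of the kind of PySpark cleansing and aggregation step this posting describes, as it might appear in a Databricks notebook. The ADLS paths and column names are assumptions.

```python
from pyspark.sql import SparkSession, functions as F

# Sketch only: deduplicate, fill defaults, filter bad rows, then aggregate by
# customer and month, writing the result back to a curated zone.
spark = SparkSession.builder.appName("adls-cleanse-aggregate").getOrCreate()

orders = spark.read.parquet("abfss://raw@examplelake.dfs.core.windows.net/orders/")

cleaned = (
    orders.dropDuplicates(["order_id"])
          .na.fill({"discount": 0.0})
          .filter(F.col("order_ts").isNotNull())
)

monthly = (
    cleaned.withColumn("order_month", F.date_trunc("month", "order_ts"))
           .groupBy("customer_id", "order_month")
           .agg(F.count("*").alias("orders"),
                F.sum(F.col("amount") - F.col("discount")).alias("net_amount"))
)

monthly.write.mode("overwrite").parquet(
    "abfss://curated@examplelake.dfs.core.windows.net/monthly_orders/"
)
```

In an ADF-orchestrated setup, a notebook like this would typically run as one activity in a pipeline, with monitoring and alerting handled by the pipeline itself.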

Posted 2 weeks ago

Apply

3.0 - 6.0 years

0 Lacs

Andhra Pradesh

On-site

We are seeking a Data Engineer with strong expertise in SQL and ETL processes to support banking data quality data pipelines, regulatory reporting, and data quality initiatives. The role involves building and optimizing data structures, implementing validation rules, and collaborating with governance and compliance teams. Experience in the banking domain and tools like Informatica and Azure Data Factory is essential. Strong proficiency in SQL for writing complex queries, joins, data transformations, and aggregations Proven experience in building tables, views, and data structures within enterprise Data Warehouses and Data Lakes Strong understanding of data warehousing concepts, such as Slowly Changing Dimensions (SCDs), data normalization, and star/snowflake schemas Practical experience in Azure Data Factory (ADF) for orchestrating data pipelines and managing ingestion workflows Exposure to data cataloging, metadata management, and lineage tracking using Informatica EDC or Axon Experience implementing Data Quality rules for banking use cases such as completeness, consistency, uniqueness, and validity Familiarity with banking systems and data domains such as Flexcube, HRMS, CRM, Risk, Compliance, and IBG reporting Understanding of regulatory and audit readiness needs for Central Bank and internal governance forums Write optimized SQL scripts to extract, transform, and load (ETL) data from multiple banking source systems Design and implement staging and reporting layer structures, aligned to business requirements and regulatory frameworks Apply data validation logic based on predefined business rules and data governance requirements Collaborate with Data Governance, Risk, and Compliance teams to embed lineage, ownership, and metadata into datasets Monitor scheduled jobs and resolve ETL failures to ensure SLA adherence for reporting and operational dashboards Support production deployment, UAT sign off, and issue resolution for data products across business units 3 to 6 years in banking-focused data engineering roles with hands on SQL, ETL, and DQ rule implementation Bachelors or Master's Degree in Computer Science, Information Systems, Data Engineering, or related fields Banking domain experience is mandatory, especially in areas related to regulatory reporting, compliance, and enterprise data governance About Virtusa Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state of the art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
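For illustration of the data-quality rules mentioned here (completeness, uniqueness, validity), the short pandas sketch below evaluates them against a made-up customer extract; real implementations would typically run as SQL or Informatica DQ rules against the warehouse.

```python
import pandas as pd

# Minimal, illustrative DQ checks over a hypothetical customer extract.
customers = pd.DataFrame({
    "customer_id": ["C001", "C002", "C002", "C004"],
    "national_id": ["784-1990-1", None, "784-1985-7", "784-2001-3"],
    "segment":     ["RETAIL", "SME", "SME", "WHOLESALE"],
})

dq_results = {
    # Completeness: share of non-null national IDs.
    "national_id_completeness": customers["national_id"].notna().mean(),
    # Uniqueness: no duplicate customer_id values.
    "customer_id_unique": not customers["customer_id"].duplicated().any(),
    # Validity: segment drawn from an approved reference list.
    "segment_valid": customers["segment"].isin({"RETAIL", "SME", "WHOLESALE"}).all(),
}
print(dq_results)
# e.g. {'national_id_completeness': 0.75, 'customer_id_unique': False, 'segment_valid': True}
```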

Posted 2 weeks ago

Apply

3.0 - 6.0 years

0 Lacs

Andhra Pradesh

On-site

We are seeking a Data Engineer with strong expertise in SQL and ETL processes to support banking data quality data pipelines, regulatory reporting, and data quality initiatives. The role involves building and optimizing data structures, implementing validation rules, and collaborating with governance and compliance teams. Experience in the banking domain and tools like Informatica and Azure Data Factory is essential. Strong proficiency in SQL for writing complex queries, joins, data transformations, and aggregations Proven experience in building tables, views, and data structures within enterprise Data Warehouses and Data Lakes Strong understanding of data warehousing concepts, such as Slowly Changing Dimensions (SCDs), data normalization, and star/snowflake schemas Practical experience in Azure Data Factory (ADF) for orchestrating data pipelines and managing ingestion workflows Exposure to data cataloging, metadata management, and lineage tracking using Informatica EDC or Axon Experience implementing Data Quality rules for banking use cases such as completeness, consistency, uniqueness, and validity Familiarity with banking systems and data domains such as Flexcube, HRMS, CRM, Risk, Compliance, and IBG reporting Understanding of regulatory and audit readiness needs for Central Bank and internal governance forums Write optimized SQL scripts to extract, transform, and load (ETL) data from multiple banking source systems Design and implement staging and reporting layer structures, aligned to business requirements and regulatory frameworks Apply data validation logic based on predefined business rules and data governance requirements Collaborate with Data Governance, Risk, and Compliance teams to embed lineage, ownership, and metadata into datasets Monitor scheduled jobs and resolve ETL failures to ensure SLA adherence for reporting and operational dashboards Support production deployment, UAT sign off, and issue resolution for data products across business units 3 to 6 years in banking-focused data engineering roles with hands on SQL, ETL, and DQ rule implementation Bachelors or Master's Degree in Computer Science, Information Systems, Data Engineering, or related fields Banking domain experience is mandatory, especially in areas related to regulatory reporting, compliance, and enterprise data governance About Virtusa Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state of the art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title: Senior Data Engineer - Apache Flink Job Specs : 7+ Years Location: Hyderabad, TG / Chennai, TN Work Mode: Hybrid Shifts: 2 PM - 11 PM Key Responsibilities: Role Summary: This role will be instrumental in building and maintaining robust, scalable, and reliable data pipelines using Confluent Kafka, ksqlDB, Kafka Connect, and Apache Flink. The ideal candidate will have a strong understanding of data streaming concepts, experience with real-time data processing, and a passion for building high-performance data solutions. This role requires excellent analytical skills, attention to detail, and the ability to work collaboratively in a fast-paced environment. Essential Responsibilities Design & develop data pipelines for real-time and batch data ingestion and processing using Confluent Kafka, ksqlDB, Kafka Connect, and Apache Flink. Build and configure Kafka Connectors to ingest data from various sources (databases, APIs, message queues, etc.) into Kafka. Develop Flink applications for complex event processing, stream enrichment, and real-time analytics. Develop and optimize ksqlDB queries for real-time data transformations, aggregations, and filtering. Implement data quality checks and monitoring to ensure data accuracy and reliability throughout the pipeline. Monitor and troubleshoot data pipeline performance, identify bottlenecks, and implement optimizations. Automate data pipeline deployment, monitoring, and maintenance tasks. Stay up-to-date with the latest advancements in data streaming technologies and best practices. Contribute to the development of data engineering standards and best practices within the organization. Participate in code reviews and contribute to a collaborative and supportive team environment. Work closely with other architects and tech leads in India & US and create POCs and MVPs Provide regular updates on the tasks, status and risks to project manager The experience we are looking to add to our team Required Bachelor’s degree or higher from a reputed university 7 to 10 years total experience with majority of that experience related to ETL/ELT, big data, Kafka etc. Proficiency in developing Flink applications for stream processing and real-time analytics. Strong understanding of data streaming concepts and architectures. Extensive experience with Confluent Kafka, including Kafka Brokers, Producers, Consumers, and Schema Registry. Hands-on experience with ksqlDB for real-time data transformations and stream processing. Experience with Kafka Connect and building custom connectors. Extensive experience in implementing large scale data ingestion and curation solutions Good hands on experience in big data technology stack with any cloud platform - Excellent problem-solving, analytical, and communication skills. Ability to work independently and as part of a team Good to have Experience in Google Cloud Healthcare industry experience Experience in Agile
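As a rough sketch of the Flink-based streaming aggregation this role describes, the snippet below uses the PyFlink Table API with the Kafka SQL connector. The topic, broker address, and fields are assumptions, and the Kafka connector JAR would need to be on the Flink classpath; production pipelines of this kind are just as often written in Java/Scala or ksqlDB.

```python
from pyflink.table import EnvironmentSettings, TableEnvironment

# Hedged sketch: declare a Kafka-backed table and run a continuous aggregation.
t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

t_env.execute_sql("""
    CREATE TABLE payments (
        account_id STRING,
        amount     DOUBLE
    ) WITH (
        'connector' = 'kafka',
        'topic' = 'payments',
        'properties.bootstrap.servers' = 'localhost:9092',
        'properties.group.id' = 'flink-aggregator',
        'scan.startup.mode' = 'earliest-offset',
        'format' = 'json'
    )
""")

# Continuous (unbounded) aggregation: running totals per account.
totals = t_env.sql_query("""
    SELECT account_id, COUNT(*) AS txn_count, SUM(amount) AS total_amount
    FROM payments
    GROUP BY account_id
""")
totals.execute().print()
```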

Posted 2 weeks ago

Apply

3.0 - 4.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Title: SQL Developer Notice period - 30 Days Experience - 3-4 years Job Description: We are looking for a proficient SQL Developer to work alongside our software implementation and training teams. This role focuses on writing quick, accurate SQL queries to support data access, validation, and reporting needs during software rollouts and user training. The role involves creating and optimizing quick SQL queries to assist in data extraction, validation, and reporting during client onboarding and support sessions. The ideal candidate will work closely with cross-functional teams to ensure accurate, timely, and efficient data access to support successful deployments and client support issues. While the domain is healthcare, the emphasis is on strong SQL skills to support operational efficiency and successful system adoption. Key Responsibilities: Write and optimize ad hoc SQL queries to support implementation and training needs Assist in data analysis, migration, and validation tasks to resolve client needs Troubleshoot and resolve data-related issues quickly and effectively Collaborate with team members to provide insights and support for client-specific data needs Create and maintain documentation for frequently used queries, reports and processes Qualifications: Strong SQL skills (e.g., writing joins, subqueries, aggregations, performance tuning) Experience with relational databases such as SQL Server Ability to work in a fast-paced, collaborative environment Strong analytical and problem-solving skills Ability to communicate effectively with non-technical team members . Share Your Resume At dyadav@gorenvio.com or Sahsan@gorenvio.com
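To illustrate the kind of ad hoc query work this role involves (joins, subqueries, aggregations), here is a small self-contained sketch using Python's sqlite3 module and hypothetical patient/visit tables.

```python
import sqlite3

# Illustrative only: join + aggregation, plus a correlated subquery that flags
# patients with no visits, over made-up healthcare tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE patients (patient_id INTEGER PRIMARY KEY, clinic TEXT);
    CREATE TABLE visits   (visit_id INTEGER PRIMARY KEY, patient_id INTEGER, visit_date TEXT);
    INSERT INTO patients VALUES (1, 'North'), (2, 'North'), (3, 'South');
    INSERT INTO visits   VALUES (10, 1, '2025-06-01'), (11, 1, '2025-06-15'), (12, 3, '2025-06-20');
""")

query = """
    SELECT p.clinic,
           COUNT(v.visit_id) AS visit_count,
           (SELECT COUNT(*) FROM patients p2
             WHERE p2.clinic = p.clinic
               AND p2.patient_id NOT IN (SELECT patient_id FROM visits)) AS patients_without_visits
    FROM patients p
    LEFT JOIN visits v ON v.patient_id = p.patient_id
    GROUP BY p.clinic;
"""
for row in conn.execute(query):
    print(row)   # ('North', 2, 1) and ('South', 1, 0)
```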

Posted 2 weeks ago

Apply

3.0 - 6.0 years

0 Lacs

Andhra Pradesh, India

On-site

We are seeking a Data Engineer with strong expertise in SQL and ETL processes to support banking data quality data pipelines, regulatory reporting, and data quality initiatives. The role involves building and optimizing data structures, implementing validation rules, and collaborating with governance and compliance teams. Experience in the banking domain and tools like Informatica and Azure Data Factory is essential. Strong proficiency in SQL for writing complex queries, joins, data transformations, and aggregations Proven experience in building tables, views, and data structures within enterprise Data Warehouses and Data Lakes Strong understanding of data warehousing concepts, such as Slowly Changing Dimensions (SCDs), data normalization, and star/snowflake schemas Practical experience in Azure Data Factory (ADF) for orchestrating data pipelines and managing ingestion workflows Exposure to data cataloging, metadata management, and lineage tracking using Informatica EDC or Axon Experience implementing Data Quality rules for banking use cases such as completeness, consistency, uniqueness, and validity Familiarity with banking systems and data domains such as Flexcube, HRMS, CRM, Risk, Compliance, and IBG reporting Understanding of regulatory and audit readiness needs for Central Bank and internal governance forums Write optimized SQL scripts to extract, transform, and load (ETL) data from multiple banking source systems Design and implement staging and reporting layer structures, aligned to business requirements and regulatory frameworks Apply data validation logic based on predefined business rules and data governance requirements Collaborate with Data Governance, Risk, and Compliance teams to embed lineage, ownership, and metadata into datasets Monitor scheduled jobs and resolve ETL failures to ensure SLA adherence for reporting and operational dashboards Support production deployment, UAT sign off, and issue resolution for data products across business units 3 to 6 years in banking-focused data engineering roles with hands on SQL, ETL, and DQ rule implementation Bachelors or Master's Degree in Computer Science, Information Systems, Data Engineering, or related fields Banking domain experience is mandatory, especially in areas related to regulatory reporting, compliance, and enterprise data governance

Posted 2 weeks ago

Apply

3.0 - 6.0 years

0 Lacs

Andhra Pradesh, India

On-site

We are seeking a Data Engineer with strong expertise in SQL and ETL processes to support banking data quality data pipelines, regulatory reporting, and data quality initiatives. The role involves building and optimizing data structures, implementing validation rules, and collaborating with governance and compliance teams. Experience in the banking domain and tools like Informatica and Azure Data Factory is essential. Strong proficiency in SQL for writing complex queries, joins, data transformations, and aggregations Proven experience in building tables, views, and data structures within enterprise Data Warehouses and Data Lakes Strong understanding of data warehousing concepts, such as Slowly Changing Dimensions (SCDs), data normalization, and star/snowflake schemas Practical experience in Azure Data Factory (ADF) for orchestrating data pipelines and managing ingestion workflows Exposure to data cataloging, metadata management, and lineage tracking using Informatica EDC or Axon Experience implementing Data Quality rules for banking use cases such as completeness, consistency, uniqueness, and validity Familiarity with banking systems and data domains such as Flexcube, HRMS, CRM, Risk, Compliance, and IBG reporting Understanding of regulatory and audit readiness needs for Central Bank and internal governance forums Write optimized SQL scripts to extract, transform, and load (ETL) data from multiple banking source systems Design and implement staging and reporting layer structures, aligned to business requirements and regulatory frameworks Apply data validation logic based on predefined business rules and data governance requirements Collaborate with Data Governance, Risk, and Compliance teams to embed lineage, ownership, and metadata into datasets Monitor scheduled jobs and resolve ETL failures to ensure SLA adherence for reporting and operational dashboards Support production deployment, UAT sign off, and issue resolution for data products across business units 3 to 6 years in banking-focused data engineering roles with hands on SQL, ETL, and DQ rule implementation Bachelors or Master's Degree in Computer Science, Information Systems, Data Engineering, or related fields Banking domain experience is mandatory, especially in areas related to regulatory reporting, compliance, and enterprise data governance

Posted 2 weeks ago

Apply

8.0 - 10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Role Expectations: Design & develop data pipelines for real-time and batch data ingestion and processing using Confluent Kafka, ksqlDB, Kafka Connect, and Apache Flink. Build and configure Kafka Connectors to ingest data from various sources (databases, APIs, message queues, etc.) into Kafka. Develop Flink applications for complex event processing, stream enrichment, and real-time analytics. Develop and optimize ksqlDB queries for real-time data transformations, aggregations, and filtering. Implement data quality checks and monitoring to ensure data accuracy and reliability throughout the pipeline. Monitor and troubleshoot data pipeline performance, identify bottlenecks, and implement optimizations. Automate data pipeline deployment, monitoring, and maintenance tasks. Stay up-to-date with the latest advancements in data streaming technologies and best practices. Contribute to the development of data engineering standards and best practices within the organization. Participate in code reviews and contribute to a collaborative and supportive team environment. Work closely with other architects and tech leads in India & US and create POCs and MVPs Provide regular updates on the tasks, status and risks to project manager The experience we are looking to add to our team Qualifications: Bachelor's degree or higher from a reputed university 8 to 10 years total experience with majority of that experience related to ETL/ELT, big data, Kafka etc. Proficiency in developing Flink applications for stream processing and real-time analytics. Strong understanding of data streaming concepts and architectures. Extensive experience with Confluent Kafka, including Kafka Brokers, Producers, Consumers, and Schema Registry. Hands-on experience with ksqlDB for real-time data transformations and stream processing. Experience with Kafka Connect and building custom connectors. Extensive experience in implementing large scale data ingestion and curation solutions Good hands on experience in big data technology stack with any cloud platform - Excellent problem-solving, analytical, and communication skills. Ability to work independently and as part of a team

Posted 2 weeks ago

Apply

2.0 - 4.0 years

0 Lacs

India

On-site

Project Description: The client is a new-age digital bank that uses the latest technologies and best-of-breed vendor applications. They use Axiom for regulatory reporting in South East Asia. Skills required: • 2-4 years of overall experience in the finance industry, of which a minimum of 2 years is in Axiom Controller View • Good understanding of Axiom objects/functionalities - Data Sources, Data Models, Shorthands, Portfolios, Aggregations, Free Form, Tabular Reports, workflow, sign-off, freezing, etc. • Strong knowledge of SQL and an understanding of relational data modelling • Experience with any major relational database (Oracle, MSSQL, MySQL, Sybase) • Familiarity with Linux and shell scripting • Good understanding of and experience in client-server application development • Good understanding of OOP and design patterns • Familiarity with the Agile process Responsibilities: • Work on the technical aspects of the project, performing coding, UT, SIT, UAT, OAT, SAT, etc. • Coding and unit testing in the Axiom application. • Working with business and technology stakeholders, supporting SIT, UAT, and production implementation. • Support the production rollout and help the support team during warranty. • Develop applications (source code) based on specifications. • Debug/modify the source code based on specifications. • Provide inputs to the documentation team and review the changes in the user manuals for accuracy. • Perform thorough and comprehensive peer reviews on the output of other team members, identifying issues/errors in the output to the maximum extent possible. • Provide support during high-severity incidents and the production DR process. • Ensure SDLC process compliance.

Posted 2 weeks ago

Apply

1.0 years

1 - 2 Lacs

Jaipur

On-site

About the Role We are seeking a proactive and detail-oriented Apache Superset & SQL Expert with 1+ years of experience in the healthcare domain. You’ll be responsible for building insightful BI dashboards and maintaining complex data pipelines to support mission-critical analytics for healthcare operations and compliance reporting. Key Responsibilities Develop and maintain advanced Apache Superset dashboards tailored for healthcare KPIs and operational metrics Write, optimise, and maintain complex SQL queries to extract and transform data from multiple healthcare systems Collaborate with data engineering and clinical teams to define and model datasets for visualisation Ensure dashboards comply with healthcare data governance, privacy (e.g., HIPAA), and audit requirements Monitor performance, implement row-level security, and maintain a robust Superset configuration Translate clinical and operational requirements into meaningful visual stories Required Skills & Experience 1+ years of domain experience in healthcare analytics or working with healthcare datasets (EHR, claims, patient outcomes, etc.) 3+ years of experience working with Apache Superset in a production environment Strong command over SQL, including query tuning, joins, aggregations, and complex transformations Hands-on experience with data modelling and relational database design Solid understanding of clinical terminology, healthcare KPIs, and reporting workflows Experience in working with PostgreSQL, MySQL, or other SQL-based databases Strong documentation, communication, and stakeholder-facing skills Nice-to-Have Familiarity with HIPAA, HL7/FHIR data structures, or other regulatory standards Experience with Python, Flask, or Superset plugin development Exposure to modern healthcare data platforms, dbt, or Airflow Experience integrating Superset with EMR, clinical data lakes, or warehouse systems like Redshift or BigQuery Job Type: Full-time Pay: ₹10,000.00 - ₹20,000.00 per month Schedule: Day shift Work Location: In person Expected Start Date: 19/07/2025

Posted 2 weeks ago

Apply

1.0 - 31.0 years

1 - 2 Lacs

Jaipur

On-site

🧠 About the Role We are seeking a proactive and detail-oriented Apache Superset & SQL Expert with 1+ years of experience in the healthcare domain. You’ll be responsible for building insightful BI dashboards and maintaining complex data pipelines to support mission-critical analytics for healthcare operations and compliance reporting. ✅ Key Responsibilities Develop and maintain advanced Apache Superset dashboards tailored for healthcare KPIs and operational metrics Write, optimise, and maintain complex SQL queries to extract and transform data from multiple healthcare systems Collaborate with data engineering and clinical teams to define and model datasets for visualisation Ensure dashboards comply with healthcare data governance, privacy (e.g., HIPAA), and audit requirements Monitor performance, implement row-level security, and maintain a robust Superset configuration Translate clinical and operational requirements into meaningful visual stories 🧰 Required Skills & Experience 1+ years of domain experience in healthcare analytics or working with healthcare datasets (EHR, claims, patient outcomes, etc.) 3+ years of experience working with Apache Superset in a production environment Strong command over SQL, including query tuning, joins, aggregations, and complex transformations Hands-on experience with data modelling and relational database design Solid understanding of clinical terminology, healthcare KPIs, and reporting workflows Experience in working with PostgreSQL, MySQL, or other SQL-based databases Strong documentation, communication, and stakeholder-facing skills 🌟 Nice-to-Have Familiarity with HIPAA, HL7/FHIR data structures, or other regulatory standards Experience with Python, Flask, or Superset plugin development Exposure to modern healthcare data platforms, dbt, or Airflow Experience integrating Superset with EMR, clinical data lakes, or warehouse systems like Redshift or BigQuery

Posted 3 weeks ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Overview: TekWissen is a global workforce management provider that offers strategic talent solutions to our clients throughout India and world-wide. Our client is a company operating a marketplace for consumers, sellers, and content creators. It offers merchandise and content purchased for resale from vendors and those offered by thirdparty sellers. Job Title: Data Analyst Location: Bangalore Job Type: Contract Work Type: Onsite Job Description: Roles and Responsibilities: Define Problem & Solution Framework Develop & apply domain/process expertise Translate basic business problem statements into analysis requirements Influence and implement specified analytical approach Work with clients to define best output based on expressed stakeholder needs Write queries and output efficiently Data Acquisition Have in-depth knowledge of the data available in area of expertise Work with structured data in a traditional data storage environment Pull the data needed with standard query syntax; periodically identify more advanced methods of query optimization Cross-check pulled data against other published sources to determine fidelity Analysis/Insight Solve well-defined tasks with clear requirements and limited ambiguity Utilize basic data-manipulation tools Derive actionable recommendations from analysis that impact a process or team Convert data to make it analysis- ready through basic descriptive, aggregations, and pivots Communication/Influence Implement/deploy data visualization or communication tools (e.g., metrics dashboards, decks, flashes) Communicate clearly to stakeholders on project requirements and status Communicate analysis and work with business stakeholders to understand its value Project Management Manage expectations; prioritize own workload and communicate status Provide visibility and updates to manager regarding project timeline and deliverables Escalate problems and roadblocks as needed Technical Skill Requirements: Data Manipulation (Excel, SQL) Data Visualization (Tableau, Quicksight) TekWissen® Group is an equal opportunity employer supporting workforce diversity.
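A small hedged example of the "descriptives, aggregations, and pivots" step in this Data Analyst description, using pandas with made-up marketplace order data (the listing itself names Excel and SQL as the tools).

```python
import pandas as pd

# Illustrative data only: a tiny marketplace order extract.
orders = pd.DataFrame({
    "category": ["Books", "Books", "Electronics", "Electronics", "Books"],
    "seller":   ["1P", "3P", "1P", "3P", "3P"],
    "revenue":  [120.0, 80.0, 450.0, 300.0, 60.0],
})

print(orders["revenue"].describe())          # basic descriptive statistics

# Pivot: revenue by category (rows) and seller type (columns), with totals.
pivot = pd.pivot_table(
    orders, index="category", columns="seller",
    values="revenue", aggfunc="sum", margins=True, margins_name="Total",
)
print(pivot)
```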

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

Remote

🚀 We're Hiring: Senior ETL Tester (QA) – 5+ Years Experience 📍 Location: [GURGAON / Remote / Hybrid] 🕒 Employment Type: Full-Time 💼 Experience: 5+ Years 💰 Salary: [ Based on Experience] 📅 Joining: Immediate --- 🔍 About the Role: We’re looking for a Senior ETL Tester (QA) with 5+ years of strong experience in testing data integration workflows, validating data pipelines, and ensuring data integrity across complex systems. You will play a critical role in guaranteeing that data transformation processes meet performance, accuracy, and compliance requirements. --- ✅ Key Responsibilities: Design, develop, and execute ETL test plans, test scenarios, and SQL queries to validate data quality. Perform source-to-target data validation, transformation logic testing, and reconciliation. Collaborate with data engineers, business analysts, and developers to review requirements and ensure complete test coverage. Identify, document, and manage defects using tools like JIRA, Azure DevOps, or similar. Ensure data quality, completeness, and consistency across large-scale data platforms. Participate in performance testing and optimize data testing frameworks. Maintain and enhance automation scripts for recurring ETL validations (if applicable). --- 💡 Required Skills: 5+ years of hands-on experience in ETL testing and data validation. Strong SQL skills for writing complex queries, joins, aggregations, and data comparisons. Experience working with ETL tools (e.g., Informatica, Talend, DataStage, SSIS). Knowledge of Data Warehousing concepts and Data Modeling. Familiarity with data visualization/reporting tools (e.g., Tableau, Power BI – optional). Experience with Agile/Scrum methodologies. Strong analytical and problem-solving skills. --- ⭐ Nice to Have: Exposure to big data platforms (e.g., Hadoop, Spark). Experience with test automation tools for ETL processes. Cloud data testing experience (AWS, Azure, or GCP). Basic scripting (Python, Shell) for test automation. --- 🙌 Why Join Us? Work with a fast-paced, dynamic team that values innovation and data excellence. Competitive salary, flexible work hours, and growth opportunities. Engage in large-scale, cutting-edge data projects. --- 📩 To Apply: Send your resume to ABHISHEK.RAJ@APPZLOGIC.COM .
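For the source-to-target reconciliation this ETL testing role centres on, the sketch below shows the idea with pandas and invented balances; in practice both sides would come from SQL queries against the source system and the warehouse.

```python
import pandas as pd

# Hedged sketch of a source-to-target data validation.
source = pd.DataFrame({"customer_id": [1, 2, 3], "balance": [100.0, 250.0, 75.0]})
target = pd.DataFrame({"customer_id": [1, 2, 3], "balance": [100.0, 249.0, 75.0]})

comparison = source.merge(target, on="customer_id", suffixes=("_src", "_tgt"))
comparison["match"] = comparison["balance_src"] == comparison["balance_tgt"]

mismatches = comparison[~comparison["match"]]
print(f"{len(mismatches)} mismatched rows out of {len(comparison)}")
print(mismatches)          # mismatches would be logged as defects in JIRA/ADO
```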

Posted 3 weeks ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote

Your Role: 4+ years of experience in developing reports in AxiomSL Controller View 10. Good understanding of Data Sources, Data Models, Shorthands, Portfolios, Aggregations, Free Form, Tabular Reports, and Workflow using the Axiom Controller View tool, based on regulatory reporting requirements. Solid understanding of SQL and ETL technologies. 3+ years of experience in a financial institution, ideally within regulatory reporting. Excellent verbal/written communication and collaborative skills required. Your Profile: Experience in implementing Axiom solutions (US/UK regulatory framework). Experience in Python and shell scripting. Database programming experience, ideally Sybase, and Axiom's ASL language. Understanding of the regulatory landscape across various jurisdictions, e.g. BOE/PRA/CCAR/Prime Reports. Knowledge of software development best practices, including coding standards, code reviews, source control management, the build process, continuous integration, and continuous delivery. Experience with Agile methodologies and development tools like Jira, Git, Jenkins, etc. What You'll Love About Working Here: We recognize the significance of flexible work arrangements, be it remote work or flexible work hours, and you will get an environment that supports a healthy work-life balance. At the heart of our mission is your career growth: our array of career growth programs and diverse professions is crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Unix and SQL.

Posted 3 weeks ago

Apply

0.0 - 7.0 years

0 Lacs

Bengaluru, Karnataka

On-site

About Us At Thoucentric, we offer end-to-end consulting solutions designed to address the most pressing business challenges across industries. Leveraging deep domain expertise, cutting-edge technology, and a results-driven approach, we help organizations streamline operations, enhance decision-making, and accelerate growth. We are headquartered in Bangalore with presence across multiple locations in India, US, UK, Singapore & Australia Globally. We help clients with Business Consulting, Program & Project Management, Digital Transformation, Product Management, Process & Technology Solutioning and Execution including Analytics & Emerging Tech areas cutting across functional areas such as Supply Chain, Finance & HR, Sales & Distribution across US, UK, Singapore and Australia. Our unique consulting framework allows us to focus on execution rather than pure advisory. We are working closely with marquee names in the global consumer & packaged goods (CPG) industry, new age tech and start-up ecosystem. We have been certified as "Great Place to Work" by AIM and have been ranked as "50 Best Firms for Data Scientists to Work For". We have an experienced consulting team of over 500+ world-class business and technology consultants based across six global locations, supporting clients through their expert insights, entrepreneurial approach and focus on delivery excellence. We have also built point solutions and products through Thoucentric labs using AI/ML in the supply chain space. Job Description Key Responsibilities Design, build and maintain Python plugins that encapsulate business rules and transformation logic such as custom aggregation/disaggregation, rule-based data enrichment, and time-grain manipulations. Implement data-cleaning, validation, mapping and merging routines using Pandas, NumPy and other Python libraries to prepare inputs for downstream analytics. Define clear interfaces and configuration schemes for plugins (for example, picklists for output grains and rule codes) and package them for easy consumption by implementation teams. Profile and optimize Python code paths—leveraging vectorized operations, efficient merges and in-memory transformations—to handle large datasets with low latency. Establish and enforce coding standards, write comprehensive unit and integration tests with pytest (or similar), and ensure high coverage for all new components. Triage and resolve issues in legacy scripts, refactor complex routines for readability and extensibility, and manage versioning across releases. Collaborate with data engineers and analysts to capture requirements, document plugin behaviors, configuration parameters, and usage examples in code repositories or internal wikis. Requirements Must-Have Skills 5–7 years of hands-on experience writing well-structured Python code, with deep familiarity in both object-oriented and functional programming paradigms. Expert mastery of Pandas and NumPy for transformations (group-by aggregations, merges, column operations), plus strong comfort with Python’s datetime and copy modules. Proven experience designing modular Python packages, exposing configuration through parameters or picklists, and managing versioned releases. Ability to translate complex, rule-driven requirements (such as disaggregation rules, external-table merges, and priority ranking) into clean Python functions and classes. Proficiency with pytest (or equivalent), mocking, and integrating with CI pipelines (e.g., GitHub Actions, Jenkins) for automated testing. 
Skilled use of Python’s logging module to instrument scripts, manage log levels, and capture diagnostic information. Strong Git workflow experience, including branching strategies, pull requests, code reviews, and merge management. Benefits: What will a consulting role at Thoucentric offer you? The opportunity to define your career path rather than have it enforced by a manager. A great consulting environment with a chance to work with Fortune 500 companies and startups alike. A dynamic but relaxed and supportive working environment that encourages personal development. Be part of one extended family; we bond beyond work - sports, get-togethers, common interests, etc. Work in a very enriching environment with an open culture, flat organization, and excellent peer group. Be part of the exciting growth story of Thoucentric! Required Skills: Python, Pandas, NumPy (+6 more). Practice Name: Data Science. Date Opened: 07/10/2025. Work Mode: Hybrid. Job Type: Full time. Industry: Consulting. Corporate Office: Thoucentric, The Hive, Mahadevapura. Zip/Postal Code: 560048. City: Bengaluru. Country: India. State/Province: Karnataka.
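As a concrete (and entirely illustrative) example of the configurable aggregation plugins this role describes, the function below rolls a daily fact table up to a coarser time grain chosen from a picklist-style parameter. The column names and grain codes are assumptions, not part of the posting.

```python
import pandas as pd

def aggregate_to_grain(df: pd.DataFrame, grain: str = "M") -> pd.DataFrame:
    """Aggregate a daily fact table to a coarser time grain ('W', 'M', 'Q')."""
    if grain not in {"W", "M", "Q"}:             # picklist-style validation
        raise ValueError(f"Unsupported grain: {grain}")
    return (
        df.assign(period=df["date"].dt.to_period(grain).dt.start_time)
          .groupby(["item", "period"], as_index=False)["qty"].sum()
    )

# Tiny made-up input: four daily records for one item.
daily = pd.DataFrame({
    "item": ["A"] * 4,
    "date": pd.to_datetime(["2025-01-05", "2025-01-20", "2025-02-02", "2025-02-10"]),
    "qty":  [10, 5, 7, 3],
})
print(aggregate_to_grain(daily, grain="M"))      # two monthly rows: 15 and 10
```

A disaggregation plugin would typically be the inverse: splitting the coarse figures back to a finer grain according to configured rule codes, which is where the rule-driven design the posting mentions comes in.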

Posted 3 weeks ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Frequence is the only end-to-end platform for media companies and agencies to grow and automate their advertising sales and operations while integrating owned and operated media. Through its full-stack proposal, workflow, and campaign-management software, Frequence drives revenue with best-in-class tools to sell, optimize, and report omnichannel advertising campaigns. Frequence is a Madhive Company. Madhive is the leading independent and fully customizable operating system built to help local media organizations build profitable, differentiated, and efficient businesses. Learn more about how Madhive and Frequence work together here. We are looking for an experienced Data Engineer to lead the end-to-end migration of our data analytics and reporting environment to Looker. This role will play a key part in designing scalable data models, translating business logic into LookML, and enabling teams across the organization with self-service analytics and actionable insights. You will work closely with stakeholders across data, engineering, and business teams to ensure a smooth, efficient transition to Looker, while establishing best practices for data modeling, governance, and dashboard development. What you’ll do: Lead the migration of existing BI tools, dashboards, and reporting infrastructure to Looker Design, develop, and maintain scalable and efficient LookML data models, including dimensions, measures, and explores Build and refine Looker dashboards and reports that are intuitive, actionable, and visually compelling Collaborate with data engineers and analysts to define semantic layers and ensure consistency across data sources Translate business requirements into technical specifications and LookML implementations Optimize SQL queries and LookML models for performance and scalability Implement and manage Looker’s security settings, permissions, and user roles in alignment with data governance standards Troubleshoot issues and support end users in their Looker adoption Maintain version control of LookML projects using Git Advocate for best practices in BI development, testing, and documentation Who you are: Proven experience with Looker and deep expertise in LookML syntax and functionality Hands-on experience building and maintaining LookML data models, explores, dimensions, and measures Strong SQL skills, including complex joins, aggregations, and performance tuning Experience working with semantic layers and data modeling for analytics Solid understanding of data analysis and visualization best practices Ability to create clear, concise, and impactful dashboards and visualizations Strong problem-solving skills and attention to detail in debugging Looker models and queries Familiarity with Looker’s security features and data governance principles Experience using version control systems, preferably Git Excellent communication skills and the ability to work cross-functionally Familiarity with modern data warehousing platforms (e.g., Snowflake, BigQuery, Redshift) Experience working in cloud environments such as AWS, GCP, or Azure (nice to have) Experience migrating from legacy BI tools (e.g., Tableau, Power BI, etc.) to Looker Experience working in agile data teams and managing BI projects Familiarity with dbt or other data transformation frameworks Why Frequence? Frequence is a dynamic, diverse, innovative, and friendly place to work. We embrace our differences and believe they fuel our creativity. We come from varied backgrounds and think that’s important. 
Whether it’s taking ideas from previous lives and applying them in different ways or creating something completely new, we are all trail-blazing team players who think big and want to make an impact. We are committed to cultivating a culture of inclusion and collaboration. We welcome diversity in education, culture, opinions, race, ethnicity, gender identity, veteran status, religion, disability, sexual orientation, and beliefs. Please be advised that we will NOT be using third-party recruiting agencies for this search.

Posted 3 weeks ago

Apply

2.0 - 3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Hi, We are hiring for Data Analyst Job roles and responsibilities : Define Problem & Solution Framework Develop & apply domain/process expertise Translate basic business problem statements into analysis requirements Influence and implement specified analytical approach Work with clients to define best output based on expressed stakeholder needs Write queries and output efficiently Data Acquisition Have in-depth knowledge of the data available in area of expertise Work with structured data in a traditional data storage environment Pull the data needed with standard query syntax; periodically identify more advanced methods of query optimization Cross-check pulled data against other published sources to determine fidelity Analysis/Insight Solve well-defined tasks with clear requirements and limited ambiguity Utilize basic data-manipulation tools Derive actionable recommendations from analysis that impact a process or team Convert data to make it analysis- ready through basic descriptive, aggregations, and pivots Communication/Influence Implement/deploy data visualization or communication tools (e.g., metrics dashboards, decks, flashes) Communicate clearly to stakeholders on project requirements and status Communicate analysis and work with business stakeholders to understand its value Project Management Manage expectations; prioritize own workload and communicate status Provide visibility and updates to manager regarding project timeline and deliverables Escalate problems and roadblocks as needed Technical Skill Requirements : Data Manipulation (Excel, SQL) Data Visualization (Tableau, Quicksight) Exp-2-3 years

Posted 3 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

FULL STACK WEB DEVELOPER INTERN (MERN)

We are seeking enthusiastic and skilled Full Stack Web Developer Interns (MERN) who are passionate about building web applications from scratch. The ideal candidate should have worked on at least 1–2 real-time projects (personal or academic) using the MERN stack, showcasing hands-on development knowledge. This is a great opportunity to work closely with an experienced development team, understand client requirements, and build scalable, production-grade applications in a startup environment.

Responsibilities:
Develop and maintain web applications using MongoDB, Express.js, React.js, and Node.js
Build responsive and dynamic frontends using React and Tailwind CSS/Bootstrap
Design REST APIs and integrate the frontend with the backend efficiently
Collaborate with team members using Git and project management tools (such as Trello or Jira)
Work with authentication (JWT, OAuth) and role-based access controls
Debug issues, write clean code, and maintain proper documentation

Eligibility / Qualifications:
Must be based in Chennai and available for a full-time, on-site internship
Hands-on experience with:
React.js and Next.js (building components, routing, etc.)
MongoDB: performing CRUD operations, aggregations, and basic schema design
REST API integration (using tools like Postman or Axios)
Bonus if you’ve worked with:
Redux for state management
Backend routing with Node.js and Express.js
Should have built at least one real-time project (personal, academic, or freelance) using the MERN/Next.js stack
Strong interest in learning, building, and debugging full-stack applications
Good understanding of Git, basic terminal commands, and general development workflows
Self-motivated, detail-oriented, and a good team player

Stipend: 5000/month

We are currently not considering applications from:
❌ Students or candidates currently pursuing a degree
❌ Non-Tamil-speaking candidates
❌ Candidates from non-technical backgrounds or unrelated domains
❌ Applicants with only theoretical knowledge and no real project experience

✅ Candidates are requested to apply only if you have completed a course or self-learning program (Udemy, YouTube, Coursera, or a reputed institute), have built at least 3 full-stack projects on your own, and your preferred mode of communication is Tamil.
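
For candidates unsure what "aggregations" means in the MongoDB requirement above, a small hedged sketch: an aggregation pipeline expressed as plain Python data. With pymongo and a running MongoDB instance it would be passed to a collection's aggregate() method; here the script only builds and prints it, and the orders collection and its fields are invented.

```python
# Sketch of a MongoDB aggregation pipeline, expressed as plain Python data.
# With pymongo and a live database it would be executed as
# db.orders.aggregate(pipeline); here we only build and print it.
# The "orders" collection and its fields are hypothetical.
import json

pipeline = [
    {"$match": {"status": "delivered"}},          # filter documents
    {"$group": {                                  # group and aggregate
        "_id": "$customerId",
        "orderCount": {"$sum": 1},
        "totalSpent": {"$sum": "$amount"},
    }},
    {"$sort": {"totalSpent": -1}},                # biggest spenders first
    {"$limit": 5},
]

print(json.dumps(pipeline, indent=2))
```

In a MERN project the same pipeline would usually live in an Express.js route handler and run through the Node.js MongoDB driver or Mongoose rather than Python; the stage structure is identical.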

Posted 3 weeks ago

Apply

7.0 years

0 Lacs

Ahmedabad, Gujarat, India

Remote

About Company: PairSoft, a leading provider of spend management and accounts payable automation solutions, has announced the acquisition of APRO, a specialized software for purchase requisition and procurement management. This strategic move is designed to enhance PairSoft's capabilities and broaden its offerings to better serve its customer base. PairSoft is expanding operations of APRO in India through this new hire.
❖ https://pairsoft.com/
❖ https://aprosoftwaresolutions.com/

Job Title: Senior Oracle PL/SQL Developer
Location: India (Remote)
Type: Full-time
Experience Level: Senior (6 years and above)

About the Role:
We are looking for a highly skilled and experienced Senior Oracle PL/SQL Developer to join our growing team. This role is ideal for someone who thrives on building scalable and efficient database solutions, writing high-quality SQL queries, and maintaining complex database logic in a collaborative, fast-paced environment. The ideal candidate will have a strong command of PL/SQL, Oracle database systems (12c or higher), and a solid understanding of data modelling and query optimization. You will play a critical role in designing and maintaining database procedures, triggers, and packages, while working closely with application developers and stakeholders. This position requires a proactive approach to troubleshooting, performance tuning, and delivering robust data solutions that support our enterprise-level systems. Experience in the finance domain or handling transactional systems is a plus.

Key Responsibilities:
Design, develop, and maintain robust PL/SQL stored procedures, triggers, functions, and packages
Optimize and refactor existing database logic to improve performance and scalability
Write complex and efficient SQL queries for data retrieval, transformation, and reporting
Work with Oracle databases (12c or higher) to ensure data integrity, security, and efficiency
Collaborate with cross-functional teams, including application developers, QA engineers, and analysts
Monitor and resolve database performance issues, including query tuning and indexing strategies
Support application deployment and troubleshoot production issues related to data processing
Document technical designs, procedures, and maintenance plans for database components
Stay current with Oracle features and PL/SQL enhancements to leverage modern capabilities

Requirements:
7+ years of hands-on experience in Oracle PL/SQL development
Deep understanding of Oracle EBS and Oracle Financials Cloud
Strong proficiency in SQL, including complex joins, aggregations, and subqueries
Deep understanding of relational database design and Oracle RDBMS architecture
Experience with query performance tuning and database optimization techniques
Solid understanding of transaction management, data consistency, and indexing
Excellent problem-solving skills and attention to detail
Strong communication skills with the ability to work across time zones and functions

Nice to Have:
Experience with Oracle partitioning, materialized views, and advanced analytics
Exposure to ETL pipelines, data warehousing, or finance-related data processing
Experience in Agile or Scrum environments
Basic knowledge of shell scripting or job scheduling tools (e.g., Control-M, cron)
Experience with Oracle 23ai

What We Offer:
Opportunity to grow your career with a rapidly growing organisation
Exposure to working with a Microsoft Gold Partner organization with the latest technologies
People-first organization culture
Company-paid Group Mediclaim Insurance for employees, spouse, and up to 2 kids of INR 400,000 per annum
Company-paid Group Personal Accident Insurance for employees of INR 1,000,000 per annum
Company-paid and manager-approved career advancement opportunities
Best-in-the-industry referral policy
29 paid leaves throughout the year

About The Company:
We are a global team of innovators and advocates transforming how financial data is captured, stored, and manipulated with our comprehensive suite of automation technology. Our platform seamlessly integrates with your existing ERP for an unrivaled end-user experience. We do the heavy lifting so accounting, procurement, and fundraising teams can do their best work. PairSoft aspires to be the strongest procure-to-pay platform for the mid-market and enterprise, with close integration to Microsoft Dynamics, Blackbaud, Oracle, SAP, Acumatica, and Sage ERPs.

At PairSoft, we are passionate about innovation, transparency, diversity, and advocating on behalf of our customers and the communities we support. We offer exciting career opportunities and a collaborative culture that allows individuals to learn, grow, and create meaningful impact. We are expanding and seeking team players who are eager to jump in and contribute to our rapid growth!

PairSoft is proud to be an equal opportunity workplace. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status or any other protected status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. If you have a disability or special need that requires accommodation, please email us at: careers@pairsoft.com.

To read our Candidate Data Privacy Notice, including GDPR, click here.
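
As a hedged, scaled-down illustration of the query-tuning loop this listing describes: Oracle's own tooling (EXPLAIN PLAN FOR, DBMS_XPLAN, AWR reports) is different, but the read-the-plan, add-an-index, re-read-the-plan habit can be sketched with Python's built-in sqlite3 and an invented invoices table.

```python
# Scaled-down sketch of tune-by-plan-inspection: read the plan, add an index,
# read the plan again. Uses Python's built-in sqlite3; the "invoices" table is
# invented, and Oracle's plan tooling (EXPLAIN PLAN FOR, DBMS_XPLAN) differs.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (invoice_id INTEGER, vendor_id INTEGER, amount REAL)")
conn.executemany("INSERT INTO invoices VALUES (?, ?, ?)",
                 [(i, i % 50, float(i)) for i in range(1000)])

query = ("SELECT vendor_id, SUM(amount) FROM invoices "
         "WHERE vendor_id = ? GROUP BY vendor_id")

def show_plan(label):
    print(label)
    for row in conn.execute("EXPLAIN QUERY PLAN " + query, (7,)):
        print("  ", row)

show_plan("Before index (expect a full table scan):")
conn.execute("CREATE INDEX idx_invoices_vendor ON invoices(vendor_id)")
show_plan("After index (expect an index search):")
```

The same discipline carries over to PL/SQL work at production scale, where the plan, statistics, and indexing choices come from Oracle's optimizer rather than SQLite's.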

Posted 3 weeks ago

Apply

0 years

0 Lacs

India

On-site

Job Description: Solution Architect - Analytics (Snowflake)

Role Summary:
We are seeking a Solution Architect with a strong background in application modernization and enterprise data platform migrations. This role will lead the design and architecture of solutions for migrating from SAP HANA to Snowflake, ensuring scalability, performance, and alignment with business goals.

Key Responsibilities:
Provide solution architecture leadership and strategy development
Engage with BAs to translate functional and non-functional requirements into solution designs
Lead the overall design, including detailed configuration
Collaborate with Enterprise Architecture
Conduct thorough reviews of code and BRDs to ensure alignment with architectural standards, business needs, and technical feasibility
Evaluate system designs, ensuring scalability, security, and performance while adhering to best practices and organizational guidelines
Troubleshoot and resolve technical challenges encountered during coding, integration, and testing phases to maintain project timelines and quality
Strong expertise in data warehousing and data modelling
Excellent communication, collaboration, and presentation skills
Experience with ETL/ELT tools and processes, building complex pipelines and data ingestion

SQL skills needed:
Ability to write advanced SQL and complex joins
Subqueries (correlated and non-correlated), CTEs
Window functions
Aggregations: GROUP BY, ROLLUP, CUBE, PIVOT

Snowflake skills needed:
Ability to understand and write UDFs and stored procedures in Snowflake
Good understanding of Snowflake architecture: clustering, micro-partitions, caching, virtual warehouses, stages, storage, and security (row- and column-level security)
Knowledge of Snowflake features (Streams, Time Travel, zero-copy cloning, Snowpark, and Tasks)
Provide expert recommendations on frameworks, tools, and methodologies to optimize development efficiency and system robustness
Performance tuning within Snowflake (performance bottlenecks, materialized views, search optimization)
Solution design: ability to architect scalable, cost-effective Snowflake solutions
Cost management: monitor and optimize Snowflake credit usage and storage costs
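
To make the SQL checklist above concrete, a minimal sketch of a CTE, a window function, and GROUP BY ROLLUP. It uses DuckDB purely as a convenient local stand-in; these constructs are standard enough that essentially the same statements run on Snowflake, while Snowflake-specific features such as Streams, Time Travel, Snowpark, and zero-copy cloning are not shown here. The sales table is invented.

```python
# Sketch of the SQL constructs named above: a CTE, a window function, and
# GROUP BY ROLLUP. DuckDB is used as a local stand-in for Snowflake; the
# "sales" table and its rows are invented.
import duckdb

con = duckdb.connect()  # in-memory database
con.execute("""
    CREATE TABLE sales AS
    SELECT * FROM (VALUES
        ('EMEA', 'Q1', 100.0),
        ('EMEA', 'Q2', 150.0),
        ('APAC', 'Q1',  90.0),
        ('APAC', 'Q2', 120.0)
    ) AS t(region, quarter, amount)
""")

# CTE + window function: running total of amount per region, ordered by quarter.
print(con.execute("""
    WITH ordered AS (
        SELECT region, quarter, amount,
               SUM(amount) OVER (PARTITION BY region ORDER BY quarter) AS running_total
        FROM sales
    )
    SELECT * FROM ordered ORDER BY region, quarter
""").fetchall())

# ROLLUP: quarter subtotals per region, region totals, and a grand total.
print(con.execute("""
    SELECT region, quarter, SUM(amount) AS total
    FROM sales
    GROUP BY ROLLUP (region, quarter)
    ORDER BY region, quarter
""").fetchall())
```

On Snowflake the cost and performance side of the same work would additionally involve warehouse sizing, clustering, and credit monitoring, which have no local equivalent in this sketch.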

Posted 3 weeks ago

Apply