0 years
0 Lacs
Pune, Maharashtra, India
On-site
www.infobeans.com
1. Good experience with Azure technologies; SQL is a must.
2. Strong implementation knowledge of ADF, Azure Databricks, Azure Synapse, and SQL.
3. Strong implementation knowledge of visualization tools such as Power BI or Tableau.
4. Good understanding of data modeling, scaling, and transformations.
5. Well-versed in Data Warehousing implementation and concepts.
6. Strong knowledge of database and ETL methodologies.
7. Strong communication skills.
Posted 2 weeks ago
5.0 - 7.0 years
0 Lacs
Thiruvananthapuram
On-site
5 - 7 Years | 1 Opening | Trivandrum
Role Description
Senior Data Streaming Engineer: build and maintain a real-time, file-based streaming data platform leveraging open-source technologies. The ideal candidate will have experience with Kubernetes (K8s), Apache Kafka, and Java multithreading, and will be responsible for:
• Developing a highly performant, scalable streaming architecture optimized for high throughput and low memory overhead
• Implementing auto-scaling solutions to support variable data loads efficiently
• Integrating reference data enrichment workflows using Snowflake
• Ensuring system reliability and real-time processing across distributed environments
• Collaborating with cross-functional teams to deliver robust, cloud-native data solutions
• Building scalable and optimized ETL/ELT workflows leveraging Azure Data Factory (ADF) and Apache Spark within Databricks
Skills: Azure, Kafka, Java, Kubernetes
About UST
UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world's best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with its clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into its clients' organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact, touching billions of lives in the process.
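The posting asks for Java with Kafka; purely as an illustrative sketch of the consume-process loop such a streaming platform is built around, here is a minimal Python consumer using the confluent-kafka client. The broker address, topic, and group id are placeholders, and process() is a hypothetical stand-in for the downstream Snowflake enrichment step.

```python
# Illustrative consume-process loop with confluent-kafka.
# Broker, topic, and group id are placeholders, not real endpoints.
from confluent_kafka import Consumer, KafkaError

def process(payload: bytes) -> None:
    """Stand-in for downstream enrichment (e.g., a Snowflake lookup)."""
    print(payload[:80])

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",   # placeholder broker
    "group.id": "file-stream-enrichment",    # placeholder consumer group
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["file-events"])           # placeholder topic

try:
    while True:
        msg = consumer.poll(timeout=1.0)      # wait up to 1s for a record
        if msg is None:
            continue
        if msg.error():
            # End-of-partition events are informational, not failures
            if msg.error().code() != KafkaError._PARTITION_EOF:
                raise RuntimeError(msg.error())
            continue
        process(msg.value())
finally:
    consumer.close()                          # commit offsets and leave group
```

In a real deployment this loop would run across multiple worker threads or pods, which is where the Java multithreading and Kubernetes auto-scaling requirements come in.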
Posted 2 weeks ago
0 years
1 - 2 Lacs
Raurkela
On-site
1-2 years of fault diagnosis and parts replacement for all types of printers. Qualification: Graduate/Diploma. Required knowledge:
• Types of printers: DMP and laser printers; laser and MFP printers; knowledge of 32- vs 80-column (DMP) models
• Experience with laser scanners and ADF scanners
• Diagnosing paper jams in printers
• Resolving blurred-print issues
• Resolving blank-print issues
• Resolving paper pick-up failures
• Troubleshooting a printer that does not print even after a print command is sent from the system
Posted 2 weeks ago
0 years
0 Lacs
India
On-site
Position Overview
We are seeking a detail-oriented and highly skilled SQL Tester to join our Data & Analytics team. In this role, you will design and develop automated SQL test cases to validate data transformations and data quality, and validate data integrity, accuracy, and performance across our ETL pipelines and data platforms. You will work closely with data engineers, QA analysts, and DevOps teams to ensure high-quality data delivery in a CI/CD environment.
Key Responsibilities
• Design and develop automated SQL test cases to validate data transformations and data quality.
• Perform end-to-end testing of ETL processes, primarily using SSIS and other Microsoft BI tools.
• Collaborate with developers and analysts to understand data requirements and ensure test coverage.
• Integrate testing into CI/CD pipelines to support continuous delivery and deployment.
• Identify, document, and track defects and inconsistencies in data.
• Contribute to test automation strategies and reusable SQL test frameworks.
Required Skills & Qualifications
• Strong experience in automated SQL testing with a focus on data validation and transformation logic.
• Proficiency in ETL testing, particularly with SSIS.
• Experience working in CI/CD environments and integrating tests into automated pipelines.
• Solid understanding of relational databases and data warehousing concepts.
• Excellent analytical and problem-solving skills.
Nice to Have
• Experience with tSQLt for unit testing SQL Server code.
• Familiarity with Azure Data Factory (ADF) and cloud-based data integration.
• Knowledge of SQL linting tools and best practices for SQL code quality.
• Exposure to Agile/Scrum methodologies.
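As a concrete, self-contained sketch of what an "automated SQL test case" can look like, the following uses pytest with Python's built-in sqlite3 so it runs anywhere; a real suite for this role would target SQL Server (for example via tSQLt or pyodbc), and the staging/target tables and the cents-to-dollars rule here are invented for illustration.

```python
# Self-contained sketch of automated SQL test cases (pytest + sqlite3).
# Tables and the transformation rule are illustrative; a real suite
# would run against SQL Server (tSQLt, pyodbc, etc.).
import sqlite3
import pytest

@pytest.fixture
def db():
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE staging_orders (id INTEGER, amount_cents INTEGER);
        CREATE TABLE fact_orders   (id INTEGER, amount_dollars REAL);
        INSERT INTO staging_orders VALUES (1, 1050), (2, 200);
        -- Simulate the ETL transformation under test: cents -> dollars
        INSERT INTO fact_orders
            SELECT id, amount_cents / 100.0 FROM staging_orders;
    """)
    yield conn
    conn.close()

def test_row_counts_match(db):
    # Completeness check: no rows dropped by the transformation
    src = db.execute("SELECT COUNT(*) FROM staging_orders").fetchone()[0]
    tgt = db.execute("SELECT COUNT(*) FROM fact_orders").fetchone()[0]
    assert src == tgt

def test_amount_conversion(db):
    # Accuracy check: target values follow the cents-to-dollars rule
    bad = db.execute("""
        SELECT COUNT(*) FROM staging_orders s
        JOIN fact_orders f ON f.id = s.id
        WHERE f.amount_dollars <> s.amount_cents / 100.0
    """).fetchone()[0]
    assert bad == 0
```

Tests in this shape drop straight into a CI/CD pipeline (e.g., a `pytest` step), which is the integration the posting emphasizes.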
Posted 2 weeks ago
3.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate
Job Description & Summary: At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage.
Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.
At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
" Responsibilities: Job Accountabilities - Hands on Experience in Azure Data Components like ADF / Databricks / Azure SQL - Good Programming Logic Sense in SQL - Good PySpark knowledge for Azure Data Bricks - Data Lake and Data Warehouse Concept Understanding - Unit and Integration testing understanding - Good communication skill to express thoghts and interact with business users - Understanding of Data Security and Data Compliance - Agile Model Understanding - Project Documentation Understanding - Certification (Good to have) - Domain Knowledge Mandatory skill sets: Azure DE, ADB, ADF, ADL Preferred skill sets: Azure DE, ADB, ADF, ADL Years of experience required: 3 to 9 years Education qualification: Graduate Engineer or Management Graduate Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Master of Business Administration, Bachelor of Engineering Degrees/Field of Study preferred: Certifications (if blank, certifications not specified) Required Skills Microsoft Azure Optional Skills Accepting Feedback, Accepting Feedback, Active Listening, Analytical Thinking, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Creativity, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis {+ 16 more} Desired Languages (If blank, desired languages not specified) Travel Requirements Not Specified Available for Work Visa Sponsorship? No Government Clearance Required? No Job Posting End Date
Posted 2 weeks ago
0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Role Description
Senior Data Streaming Engineer: build and maintain a real-time, file-based streaming data platform leveraging open-source technologies. The ideal candidate will have experience with Kubernetes (K8s), Apache Kafka, and Java multithreading, and will be responsible for:
• Developing a highly performant, scalable streaming architecture optimized for high throughput and low memory overhead
• Implementing auto-scaling solutions to support variable data loads efficiently
• Integrating reference data enrichment workflows using Snowflake
• Ensuring system reliability and real-time processing across distributed environments
• Collaborating with cross-functional teams to deliver robust, cloud-native data solutions
• Building scalable and optimized ETL/ELT workflows leveraging Azure Data Factory (ADF) and Apache Spark within Databricks
Skills: Azure, Kafka, Java, Kubernetes
Posted 2 weeks ago
6.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Role Description
Role Proficiency: This role requires proficiency in data pipeline development, including coding and testing data pipelines for ingesting, wrangling, transforming, and joining data from various sources. Must be skilled in ETL tools such as Informatica, Glue, Databricks, and DataProc, with coding expertise in Python, PySpark, and SQL. Works independently and has a deep understanding of data warehousing solutions including Snowflake, BigQuery, Lakehouse, and Delta Lake. Capable of calculating costs and understanding performance issues related to data solutions.
Outcomes
• Act creatively to develop pipelines and applications by selecting appropriate technical options, optimizing application development, maintenance, and performance using design patterns and reusing proven solutions.
• Interpret requirements to create optimal architecture and design, developing solutions in accordance with specifications.
• Document and communicate milestones/stages for end-to-end delivery.
• Code adhering to best coding standards; debug and test solutions to deliver best-in-class quality.
• Perform performance tuning of code and align it with the appropriate infrastructure to optimize efficiency.
• Validate results with user representatives, integrating the overall solution seamlessly.
• Develop and manage data storage solutions, including relational databases, NoSQL databases, and data lakes.
• Stay updated on the latest trends and best practices in data engineering, cloud technologies, and big data tools.
• Influence and improve customer satisfaction through effective data solutions.
Measures of Outcomes
• Adherence to engineering processes and standards
• Adherence to schedule/timelines
• Adherence to SLAs where applicable
• Number of defects post delivery
• Number of non-compliance issues
• Reduction of recurrence of known defects
• Quick turnaround of production bugs
• Completion of applicable technical/domain certifications
• Completion of all mandatory training requirements
• Efficiency improvements in data pipelines (e.g., reduced resource consumption, faster run times)
• Average time to detect, respond to, and resolve pipeline failures or data issues
• Number of data security incidents or compliance breaches
Outputs Expected
Code Development: Develop data processing code independently, ensuring it meets performance and scalability requirements. Define coding standards, templates, and checklists. Review code for team members and peers.
Documentation: Create and review templates, checklists, guidelines, and standards for design, processes, and development. Create and review deliverable documents, including design documents, architecture documents, infrastructure costing, business requirements, source-target mappings, test cases, and results.
Configuration: Define and govern the configuration management plan. Ensure compliance within the team.
Testing: Review and create unit test cases, scenarios, and execution plans. Review the test plan and test strategy developed by the testing team. Provide clarifications and support to the testing team as needed.
Domain Relevance: Advise data engineers on the design and development of features and components, demonstrating a deeper understanding of business needs. Learn about customer domains to identify opportunities for value addition. Complete relevant domain certifications to enhance expertise.
Project Management: Manage the delivery of modules effectively.
Defect Management: Perform root cause analysis (RCA) and mitigation of defects. Identify defect trends and take proactive measures to improve quality.
Estimation: Create and provide input for effort and size estimation for projects.
Knowledge Management: Consume and contribute to project-related documents, SharePoint libraries, and client universities. Review reusable documents created by the team.
Release Management: Execute and monitor the release process to ensure smooth transitions.
Design Contribution: Contribute to the creation of high-level design (HLD), low-level design (LLD), and system architecture for applications, business components, and data models.
Customer Interface: Clarify requirements and provide guidance to the development team. Present design options to customers and conduct product demonstrations.
Team Management: Set FAST goals and provide constructive feedback. Understand team members' aspirations and provide guidance and opportunities for growth. Ensure team engagement in projects and initiatives.
Certifications: Obtain relevant domain and technology certifications to stay competitive and informed.
Skill Examples
• Proficiency in SQL, Python, or other programming languages used for data manipulation.
• Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc, and Azure ADF.
• Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud, particularly with data-related services (e.g., AWS Glue, BigQuery).
• Ability to conduct tests on data pipelines and evaluate results against data quality and performance specifications.
• Experience in performance tuning of data processes.
• Expertise in designing and optimizing data warehouses for cost efficiency.
• Ability to apply and optimize data models for efficient storage, retrieval, and processing of large datasets.
• Capacity to clearly explain and communicate design and development aspects to customers.
• Ability to estimate time and resource requirements for developing and debugging features or components.
Knowledge Examples
• Knowledge of various ETL services offered by cloud providers, including Apache PySpark, AWS Glue, GCP DataProc/DataFlow, and Azure ADF and ADLF.
• Proficiency in SQL for analytics, including windowing functions.
• Understanding of data schemas and models relevant to various business contexts.
• Familiarity with domain-related data and its implications.
• Expertise in data warehousing optimization techniques.
• Knowledge of data security concepts and best practices.
• Familiarity with design patterns and frameworks in data engineering.
Additional Comments
Data Engineering Role Summary: Skilled Data Engineer with strong Python programming skills and experience in building scalable data pipelines across cloud environments. The candidate should have a good understanding of ML pipelines and basic exposure to GenAI solutioning. This role will support large-scale AI/ML and GenAI initiatives by ensuring high-quality, contextual, and real-time data availability.
Key Responsibilities:
• Design, build, and maintain robust, scalable ETL/ELT data pipelines in AWS/Azure environments.
• Develop and optimize data workflows using PySpark, SQL, and Airflow.
• Work closely with AI/ML teams to support training pipelines and GenAI solution deployments.
• Integrate data with vector databases like ChromaDB or Pinecone for RAG-based pipelines.
• Collaborate with solution architects and GenAI leads to ensure reliable, real-time data availability for agentic AI and automation solutions.
• Support data quality, validation, and profiling processes.
Key Skills & Technology Areas:
• Programming & Data Processing: Python (4–6 years), PySpark, Pandas, NumPy
• Data Engineering & Pipelines: Apache Airflow, AWS Glue, Azure Data Factory, Databricks
• Cloud Platforms: AWS (S3, Lambda, Glue), Azure (ADF, Synapse), GCP (optional)
• Databases: SQL/NoSQL, Postgres, DynamoDB; vector databases (ChromaDB, Pinecone) preferred
• ML/GenAI Exposure (basic): Hands-on with Pandas and scikit-learn; knowledge of RAG pipelines and GenAI concepts
• Data Modeling: Star/Snowflake schema, data normalization, dimensional modeling
• Version Control & CI/CD: Git, Jenkins, or similar tools for pipeline deployment
Other Requirements:
• Strong problem-solving and analytical skills
• Flexible to work on fast-paced and cross-functional priorities
• Experience collaborating with AI/ML or GenAI teams is a plus
• Good communication and a collaborative, team-first mindset
• Experience in Telecom, E-Commerce, or Enterprise IT Operations is a plus
Skills: ETL, Big Data, PySpark, SQL
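Since the role centers on orchestrating ETL/ELT workflows with Apache Airflow, here is a minimal sketch of a daily extract-transform-load DAG, assuming Airflow 2.4+; the dag_id, task callables, and schedule are illustrative placeholders, not this team's actual pipeline.

```python
# Minimal Airflow 2.4+ DAG sketch for a daily ETL pipeline.
# Function bodies and names are illustrative placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull source files / tables")        # e.g., S3 or ADLS listing

def transform():
    print("run PySpark / dbt transformations")

def load():
    print("load curated tables into the warehouse")

with DAG(
    dag_id="daily_etl_sketch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",            # 'schedule' argument (Airflow 2.4+)
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load  # linear dependency chain
```

The `>>` operator declares task ordering; in practice the transform step would hand off to Databricks or Glue operators rather than plain Python callables.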
Posted 2 weeks ago
12.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Orion Innovation is a premier, award-winning, global business and technology services firm. Orion delivers game-changing business transformation and product development rooted in digital strategy, experience design, and engineering, with a unique combination of agility, scale, and maturity. We work with a wide range of clients across many industries including financial services, professional services, telecommunications and media, consumer products, automotive, industrial automation, professional sports and entertainment, life sciences, ecommerce, and education.
Experience: 12+ years
Job Summary: We are seeking a highly skilled and experienced Database Architect to design, implement, and maintain our database systems, with a focus on DB2, SQL Server, Azure SQL, Cosmos DB, and other cloud databases. The ideal candidate will have a deep understanding of database architecture, data modelling, and performance optimization. You will work closely with our development and IT teams to ensure our databases are scalable, secure, and efficient.
Key Responsibilities
• Design and implement database solutions based on business requirements.
• Develop and maintain data models, database schemas, and data dictionaries.
• Optimize database performance through indexing, query optimization, and other techniques.
• Ensure data integrity, security, and availability.
• Collaborate with development teams to integrate databases with applications.
• Monitor and troubleshoot database issues, providing timely resolutions.
• Stay updated with the latest database technologies and best practices.
• Mentor and guide junior database administrators and developers.
Specific Responsibilities
• Design and manage databases using DB2, SQL Server, Azure SQL, and other cloud database platforms.
• Implement and manage cloud-based database solutions, ensuring high availability and disaster recovery.
• Develop and maintain ETL processes using ADF for data integration and migration.
• Implement database security measures, including encryption and access controls.
• Perform database tuning and optimization for cloud environments.
Qualifications
• Bachelor's degree in Computer Science, Information Technology, or a related field.
• Proven experience as a Database Architect or in a similar role.
• Strong knowledge of DB2, SQL Server, Azure SQL, and other cloud database platforms.
• Proficiency in data modeling, database design, and performance tuning.
• Experience with cloud database solutions and services.
• Excellent problem-solving and analytical skills.
• Strong communication and collaboration abilities.
• Certification in database management (e.g., Microsoft Certified: Azure Database Administrator Associate) is a plus.
Orion is an equal opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, color, creed, religion, sex, sexual orientation, gender identity or expression, pregnancy, age, national origin, citizenship status, disability status, genetic information, protected veteran status, or any other characteristic protected by law.
Candidate Privacy Policy
Orion Systems Integrators, LLC and its subsidiaries and affiliates (collectively, "Orion," "we" or "us") are committed to protecting your privacy. This Candidate Privacy Policy (orioninc.com) ("Notice") explains what information we collect during our application and recruitment process and why we collect it, how we handle that information, and how to access and update that information.
Your use of Orion services is governed by any applicable terms in this notice and our general Privacy Policy.
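One concrete instance of the "indexing and query optimization" responsibility above, sketched with Python's built-in sqlite3 so it stays self-contained; on SQL Server or DB2 the same before/after check would use the engine's own execution-plan tooling, and the table and column names here are invented.

```python
# Sketch: verify that adding an index changes the query plan.
# sqlite3 stands in for SQL Server / DB2 plan analysis; names are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER)")

def plan(sql: str) -> list[str]:
    # EXPLAIN QUERY PLAN rows are (id, parent, notused, detail)
    return [row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql)]

query = "SELECT * FROM orders WHERE customer_id = 42"
print(plan(query))   # before: full table scan ('SCAN orders')

conn.execute("CREATE INDEX ix_orders_customer ON orders(customer_id)")
print(plan(query))   # after: 'SEARCH orders USING INDEX ix_orders_customer'
```

The same discipline (measure the plan, add the index, measure again) is what "optimize database performance through indexing" amounts to on any engine.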
Posted 2 weeks ago
7.0 - 12.0 years
15 - 20 Lacs
Visakhapatnam
Work from Office
Dear Aspirant,
Greetings from Miracle Software Systems Inc. (https://miraclesoft.com/). Please review the job description below and, if interested, share your updated resume along with the details requested.
Requirement Details:
Position: Data Engineer
Location: Visakhapatnam
Experience: 6+ years
Mode of opportunity: Permanent
Job Description: Looking for a highly skilled Data Engineer with extensive experience in Microsoft Azure, particularly with ADF and Fabric pipeline development, and a strong understanding of the Medallion Architecture (Bronze, Silver, Gold layers). The ideal candidate will be responsible for designing and optimizing end-to-end data pipelines across Lakehouses and Warehouses in Microsoft Fabric, and will work closely with business and engineering teams to define scalable, governed data models.
Responsibilities:
• Develop and manage complex data pipelines using Azure Data Factory (ADF) and Microsoft Fabric.
• Implement and maintain Medallion Architecture layers (Bronze, Silver, Gold).
• Design governed, scalable data models tailored to business requirements.
• Develop and optimize PySpark-based data processing for large-scale data transformations.
• Integrate with reporting tools such as Power BI for seamless data visualization.
• Ensure robust data governance, security, and performance in large-scale Fabric deployments.
Requirements:
• Strong expertise in Azure Data Factory (ADF) and Microsoft Fabric
• Hands-on experience with OneLake, Lakehouse Explorer, and Power BI integration
• Solid understanding of data governance, security, and performance tuning
• SAP knowledge is required
• Proficiency in PySpark is mandatory
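For readers unfamiliar with the Medallion Architecture the posting names, here is a minimal PySpark sketch of a Bronze-to-Silver step; the lakehouse paths, columns, and cleansing rules are illustrative placeholders, and the Delta Lake format assumes the Delta libraries are on the cluster (as they are in Fabric and Databricks).

```python
# Sketch of a Medallion Bronze -> Silver step in PySpark.
# Paths, schema, and cleansing rules are illustrative placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: raw ingested records, kept as-is for replay and audit
bronze = spark.read.format("delta").load("/lakehouse/bronze/orders")

# Silver: deduplicated, typed, and cleansed
silver = (
    bronze
    .dropDuplicates(["order_id"])                         # remove replays
    .withColumn("order_ts", F.to_timestamp("order_ts"))   # enforce types
    .filter(F.col("amount") >= 0)                         # basic quality rule
)

silver.write.format("delta").mode("overwrite").save("/lakehouse/silver/orders")
```

A Gold step would then aggregate Silver tables into business-ready models (e.g., a star schema) for Power BI.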
Posted 2 weeks ago
0 years
0 Lacs
India
Remote
6-Month Contract Opportunity
Position is 100% remote; working hours must overlap 3-5 hours with the CST time zone.
Skill Priority: SQL testing, ETL, SSIS, and CI/CD experience
Nice to Haves: tSQLt, ADF, and SQL linting experience
Role: Seeking candidates available to start immediately; not seeking candidates with lengthy notice periods. Candidate location and availability/start date should be listed at the top of the resume.
Position Overview: We are seeking a detail-oriented and highly skilled SQL Tester to join our Data & Analytics team. In this role, you will design and develop automated SQL test cases to validate data transformations and data quality, and validate data integrity, accuracy, and performance across our ETL pipelines and data platforms. You will work closely with data engineers, QA analysts, and DevOps teams to ensure high-quality data delivery in a CI/CD environment.
Key Responsibilities:
• Design and develop automated SQL test cases to validate data transformations and data quality.
• Perform end-to-end testing of ETL processes, primarily using SSIS and other Microsoft BI tools.
• Collaborate with developers and analysts to understand data requirements and ensure test coverage.
• Integrate testing into CI/CD pipelines to support continuous delivery and deployment.
• Identify, document, and track defects and inconsistencies in data.
• Contribute to test automation strategies and reusable SQL test frameworks.
Required Skills & Qualifications:
• Strong experience in automated SQL testing with a focus on data validation and transformation logic.
• Proficiency in ETL testing, particularly with SSIS.
• Experience working in CI/CD environments and integrating tests into automated pipelines.
• Solid understanding of relational databases and data warehousing concepts.
• Excellent analytical and problem-solving skills.
Nice to Have:
• Experience with tSQLt for unit testing SQL Server code.
• Familiarity with Azure Data Factory (ADF) and cloud-based data integration.
• Knowledge of SQL linting tools and best practices for SQL code quality.
• Exposure to Agile/Scrum methodologies.
Posted 2 weeks ago
5.0 years
0 Lacs
Kochi, Kerala, India
On-site
Valorem Reply is looking for a Data Engineer with experience building and contributing to the design of database systems, both normalized transactional systems and dimensional reporting systems. Strong experience with SQL Server as a database engine is required, as well as Microsoft and Databricks technology experience implementing Big Data and Advanced Analytics solutions. Successful candidates will have experience and skill in providing solutions for storing, retrieving, transforming, and aggregating data to support line-of-business applications as well as reporting systems. The Data Engineer will work with customers to deliver solutions utilizing strong business, technical, and data modelling skills. This position will represent Valorem's approach to data visualization and information delivery solutions, and as such must demonstrate proficiency with Power BI and Azure Data Services.
Key Responsibilities
• Design and implement general architecture for complex data systems
• Translate business requirements into functional and technical specifications
• Design and implement lakehouse architecture
• Develop and manage cloud-based data architecture and reporting solutions
• Apply data modelling principles for relational and dimensional data structures
• Design Data Warehouses following established principles (e.g., Kimball, Inmon)
• Create and manage source-to-target mappings for ETL/ELT processes
• Mentor junior engineers and contribute to architectural decisions and code reviews
Minimum Qualifications
• Bachelor's degree in Computer Science, Computer Engineering, MIS, or a related field
• 5+ years of experience with Microsoft SQL Server and strong proficiency in T-SQL and SQL performance tuning (indexing, structure, query optimization)
• 5+ years of experience in Microsoft data platform development and implementation
• 5+ years of experience with Power BI or other competitive technologies
• 3+ years of experience in consulting, with a focus on analytics and data solutions
• 2+ years of experience with Databricks, including Unity Catalog, Databricks SQL, Workflows, and Delta Sharing
• Proficiency in Python and Apache Spark; ability to develop and manage Databricks notebooks for data transformation, exploration, and model deployment
• Expertise in Microsoft Azure services, including Azure SQL, Azure Data Factory (ADF), Azure Synapse Analytics, Azure Data Lake, and Stream Analytics
• Experience with Microsoft Fabric
• Familiarity with CI/CD pipelines and infrastructure-as-code tools like Terraform or Azure Resource Manager (ARM)
• Knowledge of taxonomies, metadata management, and master data management
• Familiarity with data stewardship, ownership, and data quality management
• Expertise in Big Data technologies and tools: Big Data platforms (HDFS, MapReduce, Pig, Hive); general DBMS experience with Oracle, DB2, MySQL, etc.; NoSQL databases such as HBase, Cassandra, DataStax, MongoDB, CouchDB, etc.
• Experience with non-Microsoft reporting and BI tools, such as Qlik, Cognos, MicroStrategy, Tableau, etc.
About Reply
Reply specializes in the design and implementation of solutions based on new communication channels and digital media. Reply is a network of highly specialized companies supporting global industrial groups operating in the telecom and media, industry and services, banking, insurance and public administration sectors in the definition and development of business models enabled for the new paradigms of AI, cloud computing, digital media and the Internet of Things. Reply services include Consulting, System Integration and Digital Services. www.reply.com
Posted 2 weeks ago
0 years
0 Lacs
India
Remote
Location: India-based, remote work.
Time: 10:30 AM – 6:30 PM IST (aligned to US Central Time)
Rate: $20 - $25/h
About Data Meaning
Data Meaning is a front-runner in Business Intelligence and Data Analytics consulting, renowned for our high-quality consulting services throughout the US and LATAM. Our expertise lies in delivering tailored solutions in Business Intelligence, Data Warehousing, and Project Management. We are dedicated to bringing premier services to our diverse clientele. We have a global team of 95+ consultants, all working remotely, embodying a collaborative, inclusive, and innovation-driven work culture.
Position Summary
We are seeking a proactive Data Pipeline Support Engineer based in India to work the 12:00–8:00 AM Central Time (US) shift. The ideal candidate must have strong hands-on experience with Azure Data Factory, Alteryx (including reruns and macros), and dbt. This role requires someone detail-oriented, capable of independently managing early-morning support for critical workflows, and comfortable collaborating across time zones in a fast-paced data operations environment.
Responsibilities:
• Monitor and support data jobs in Alteryx, dbt, Snowflake, and ADF during 12:00–8:00 AM CT
• Perform first-pass remediation (e.g., reruns, credential resets, basic troubleshooting)
• Escalate unresolved or complex issues to nearshore Tier 2/3 support teams
• Log all incidents and resolutions in ticketing and audit systems (ServiceNow)
• Collaborate with CT-based teams for smooth cross-timezone handoffs
• Contribute to automation improvements (e.g., Alteryx macros, retry logic)
Required Skills & Qualifications:
• Strong, hands-on experience in Alteryx (monitoring, reruns, macros)
• Working knowledge of dbt (Data Build Tool) and Snowflake (basic SQL, Snowpark, data validation)
• Experience with Azure Data Factory (ADF) pipeline executions
• Familiarity with SAP BW, workflow chaining, and cron-based job scheduling
• Familiarity with data formats, languages, protocols, and architecture styles required to provide Azure-based integration solutions (for example, .NET, JSON, REST, and SOAP), including Azure Functions
• Excellent communication skills in English (written and verbal)
• Ability to work independently and handle incident resolution with limited documentation
Required Certifications:
• Alteryx Designer Core Certification
• SnowPro Core Certification
• dbt Fundamentals Course Certificate
• Microsoft Certified: Azure Data Engineer Associate
• ITIL v4 Foundation (or equivalent)
Preferred Certifications:
• Alteryx Designer Advanced Certification
• Alteryx Server Administration
• SnowPro Advanced: Data Engineer
• dbt Analytics Engineering Certification
• Microsoft Certified: Azure Fundamentals (AZ-900)
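As an illustration of the "first-pass remediation" and retry-logic responsibilities, here is a small Python sketch of a rerun-with-backoff loop; rerun_pipeline() is a hypothetical stand-in for the real ADF or Alteryx rerun call, and in production each attempt would also be logged to ServiceNow per the runbook above.

```python
# Generic first-pass remediation sketch: rerun a failed job with backoff.
# rerun_pipeline() is a hypothetical stand-in for the real ADF/Alteryx
# rerun API; real incidents would also be logged to ServiceNow.
import time
import random

def rerun_pipeline(name: str) -> bool:
    """Hypothetical rerun call; returns True on success."""
    return random.random() > 0.5  # simulate a flaky job for the demo

def remediate(name: str, max_attempts: int = 3, base_delay: float = 1.0) -> bool:
    # base_delay is kept short for the demo; real runbooks use minutes
    for attempt in range(1, max_attempts + 1):
        if rerun_pipeline(name):
            print(f"{name}: recovered on attempt {attempt}")
            return True
        delay = base_delay * 2 ** (attempt - 1)   # exponential backoff
        print(f"{name}: attempt {attempt} failed, sleeping {delay:.0f}s")
        time.sleep(delay)
    print(f"{name}: escalating to Tier 2/3 support")
    return False

remediate("nightly_sales_refresh")
```

The escalation branch corresponds to the Tier 2/3 handoff in the responsibilities list: automation handles the cheap retries, humans handle what survives them.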
Posted 2 weeks ago
0 years
0 Lacs
India
On-site
Position Overview
We are seeking a detail-oriented and highly skilled SQL Tester to join our Data & Analytics team. In this role, you will design and develop automated SQL test cases to validate data transformations and data quality, and validate data integrity, accuracy, and performance across our ETL pipelines and data platforms. You will work closely with data engineers, QA analysts, and DevOps teams to ensure high-quality data delivery in a CI/CD environment.
Key Responsibilities
• Design and develop automated SQL test cases to validate data transformations and data quality.
• Perform end-to-end testing of ETL processes, primarily using SSIS and other Microsoft BI tools.
• Collaborate with developers and analysts to understand data requirements and ensure test coverage.
• Integrate testing into CI/CD pipelines to support continuous delivery and deployment.
• Identify, document, and track defects and inconsistencies in data.
• Contribute to test automation strategies and reusable SQL test frameworks.
Required Skills & Qualifications
• Strong experience in automated SQL testing with a focus on data validation and transformation logic.
• Proficiency in ETL testing, particularly with SSIS.
• Experience working in CI/CD environments and integrating tests into automated pipelines.
• Solid understanding of relational databases and data warehousing concepts.
• Excellent analytical and problem-solving skills.
Nice to Have
• Experience with tSQLt for unit testing SQL Server code.
• Familiarity with Azure Data Factory (ADF) and cloud-based data integration.
• Knowledge of SQL linting tools and best practices for SQL code quality.
• Exposure to Agile/Scrum methodologies.
Skill Priority: SQL testing, ETL, SSIS, and CI/CD experience
Nice to Haves: tSQLt, ADF, and SQL linting experience
Posted 2 weeks ago
8.0 years
0 Lacs
India
Remote
Title: SQL Data Tester (ETL/SSIS/ADF), 8+ Years Experience
Location: India, remote/offshore (CST hours)
Duration: 6 months
Start: within 2 weeks
Skill Priority: SQL testing, ETL, SSIS, and CI/CD experience
Nice to Haves: tSQLt, ADF, and SQL linting experience
Seeking candidates available to start immediately, not candidates with lengthy notice periods.
We are seeking a detail-oriented and highly skilled SQL Tester to join our Data & Analytics team. In this role, you will design and develop automated SQL test cases to validate data transformations and data quality, and validate data integrity, accuracy, and performance across our ETL pipelines and data platforms. You will work closely with data engineers, QA analysts, and DevOps teams to ensure high-quality data delivery in a CI/CD environment.
Key Responsibilities
• Design and develop automated SQL test cases to validate data transformations and data quality.
• Perform end-to-end testing of ETL processes, primarily using SSIS and other Microsoft BI tools.
• Collaborate with developers and analysts to understand data requirements and ensure test coverage.
• Integrate testing into CI/CD pipelines to support continuous delivery and deployment.
• Identify, document, and track defects and inconsistencies in data.
• Contribute to test automation strategies and reusable SQL test frameworks.
Required Skills & Qualifications
• Strong experience in automated SQL testing with a focus on data validation and transformation logic.
• Proficiency in ETL testing, particularly with SSIS.
• Experience working in CI/CD environments and integrating tests into automated pipelines.
• Solid understanding of relational databases and data warehousing concepts.
• Excellent analytical and problem-solving skills.
Nice to Have
• Experience with tSQLt for unit testing SQL Server code.
• Familiarity with Azure Data Factory (ADF) and cloud-based data integration.
• Knowledge of SQL linting tools and best practices for SQL code quality.
• Exposure to Agile/Scrum methodologies.
Posted 2 weeks ago
0 years
0 Lacs
India
Remote
Hi, I hope you are doing well. Please review the job description below; if it matches your skills, share your updated resume and mention your work authorization and current location.
Title: SQL Data Tester
Location: Remote
Job type: Long-term contract
Position is 100% remote. Consultants' working hours must overlap 3-5 hours with the CST time zone.
Job Description:
Skill Priority: SQL testing, ETL, SSIS, and CI/CD experience
Nice to Haves: tSQLt, ADF, and SQL linting experience
Seeking candidates available to start immediately, not candidates with lengthy notice periods. Candidate location and availability/start date should be listed at the top of the resume.
We are seeking a detail-oriented and highly skilled SQL Tester to join our Data & Analytics team. In this role, you will design and develop automated SQL test cases to validate data transformations and data quality, and validate data integrity, accuracy, and performance across our ETL pipelines and data platforms. You will work closely with data engineers, QA analysts, and DevOps teams to ensure high-quality data delivery in a CI/CD environment.
Key Responsibilities
• Design and develop automated SQL test cases to validate data transformations and data quality.
• Perform end-to-end testing of ETL processes, primarily using SSIS and other Microsoft BI tools.
• Collaborate with developers and analysts to understand data requirements and ensure test coverage.
• Integrate testing into CI/CD pipelines to support continuous delivery and deployment.
• Identify, document, and track defects and inconsistencies in data.
• Contribute to test automation strategies and reusable SQL test frameworks.
Required Skills & Qualifications
• Strong experience in automated SQL testing with a focus on data validation and transformation logic.
• Proficiency in ETL testing, particularly with SSIS.
• Experience working in CI/CD environments and integrating tests into automated pipelines.
• Solid understanding of relational databases and data warehousing concepts.
• Excellent analytical and problem-solving skills.
Nice to Have
• Experience with tSQLt for unit testing SQL Server code.
• Familiarity with Azure Data Factory (ADF) and cloud-based data integration.
• Knowledge of SQL linting tools and best practices for SQL code quality.
• Exposure to Agile/Scrum methodologies.
Thanks & Regards,
Pradeep Pathak
Senior Technical Recruiter, KPG99 Inc. | http://www.kpg99.com/
Direct: 609-325-2290
Email: pradeepk@kpgtech.com / pradeep1@kpg.in
LinkedIn: https://www.linkedin.com/in/pradeep-pathak-5026a016a/
Address: 3240 E State St EXT, Hamilton, NJ 08619
Posted 2 weeks ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.
Primary Responsibilities
• Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.
Required Qualifications
• Good experience in building data pipelines using ADF
• Good experience with programming languages such as Python and PySpark
• Solid proficiency in SQL and complex queries
• Demonstrated ability to learn and adapt to new data technologies
• Proven good skills in Azure data processing tools such as Azure Data Factory and Azure Databricks
• Proven good problem-solving skills
• Proven good communication skills
• Proven technical skills: Python, Azure data processing tools
Other Requirements
• Collaborate with the team, architects, and product stakeholders to understand the scope and design of a deliverable
• Participate in product support activities as needed by the team
• Understand product architecture and features being built, and come up with product improvement ideas and POCs
• Individual contributor for Data Engineering: data pipelines, data modelling and data warehousing
Preferred Qualifications
• Knowledge of or experience with containerization: Docker, Kubernetes
• Knowledge of or experience with the Big Data/Hadoop ecosystem: Spark, Hive, HBase, Sqoop, etc.
• Experience with APIs and integrating external data sources
• Experience in build or deployment automation: Jenkins
• Knowledge of or experience using Microsoft Visio and PowerPoint
• Knowledge of Agile or Scrum
At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.
Posted 2 weeks ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.
Azure Cloud – Technology Assurance
As a Risk Assurance Senior, you'll contribute technically to Risk Assurance client engagements and internal projects. An important part of your role will be to assist fellow Seniors and Managers while actively participating within the client engagement. Similarly, you'll anticipate and identify risks within engagements and share any issues with senior members of the team. In line with EY's commitment to quality, you'll confirm that work is of high quality and is reviewed by the next-level reviewer. As a member of the team, you'll help to create a positive learning culture and assist fellow team members while delivering an assignment.
The opportunity
We're looking for professionals with at least 3 years of experience. You'll be part of a cross-functional team that's responsible for the full software development life cycle, from conception to deployment. This is a fantastic opportunity to be part of a leading firm whilst being instrumental in the growth of a new service offering.
Skills and Summary of Accountabilities:
• Designing, architecting, and developing solutions leveraging Azure cloud to ingest, process and analyse large, disparate data sets to exceed business requirements.
• Proficiency in Azure Data Factory, Azure Databricks, Azure SQL Database, Azure Synapse Analytics and Azure Data Lake Storage for data storage and processing; experience designing data pipelines using these technologies.
• Working knowledge of data warehousing/modelling, ETL/ELT pipelines, and data democratization using cloud services.
• Design, build and maintain efficient, reusable, and reliable code, ensuring the best possible performance, quality, and responsiveness of applications using reliable Python code.
• Automate tasks through Python scripting, databases, and other advanced technologies like Databricks, Synapse Studio, ADF, etc.
• Exposure working in client-facing roles; collaborate with cross-functional teams including internal audit, IT security and business stakeholders to assess control effectiveness and facilitate remediation activities.
• Preferred knowledge/understanding of IT controls, risk and compliance; designing IT risk control frameworks such as IT SOX; testing of internal controls such as IT general controls, IT application controls, IPE-related controls, interface controls, etc.
To qualify for the role, you must have:
• 3 years of experience in building end-to-end business solutions using big data and data engineering.
• Expertise in core Microsoft Azure Data Services (e.g., Azure Data Factory, Azure Databricks, Azure Synapse, Azure SQL, Data Lake services, etc.); familiarity with integration services such as Azure Logic Apps, Function Apps, Stream Analytics, Triggers, and Event Hubs.
• Expertise in cloud-related big data integration and the infrastructure tech stack using Azure Databricks and the Apache Spark framework.
• Must have: Python, SQL; preferred: R, Scala. Experience developing software tools using utilities, pandas, NumPy and other libraries/components. Hands-on expertise in using Python frameworks (like Django, Pyramid, Flask).
Preferred: a substantial background in data extraction and transformation, developing data pipelines using MS SSIS, Informatica, Talend or other on-premises tools, and knowledge of Power BI or other BI tools. Good understanding of version control with Git, JIRA, change/release management, build/deploy, and CI/CD with Azure DevOps.
Ideally, you'll also have:
• A Bachelor's degree or above in mathematics, information systems, statistics, computer science, data analytics or related disciplines.
• Experience with AI/ML (a plus).
• Certification in DP-203 Azure Data Engineer or equivalent (preferred).
• Ability to communicate clearly and concisely, using strong writing and verbal skills to communicate facts, figures, and ideas to others.
EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
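As a small illustration of the pandas-based control testing this role touches (reconciling a source extract against its loaded target), here is a self-contained sketch; the frames and column names are invented for the example.

```python
# Sketch of a simple automated control test with pandas: reconcile a
# source extract against the loaded target. Data and names are invented.
import pandas as pd

source = pd.DataFrame({"id": [1, 2, 3], "amount": [100.0, 250.0, 75.0]})
target = pd.DataFrame({"id": [1, 2, 3], "amount": [100.0, 250.0, 75.0]})

# Completeness control: every source key must appear in the target
missing = set(source["id"]) - set(target["id"])
assert not missing, f"keys missing from target: {missing}"

# Accuracy control: totals must reconcile to the cent
diff = abs(source["amount"].sum() - target["amount"].sum())
assert diff < 0.01, f"control failed: totals differ by {diff}"

print("control passed: source and target reconcile")
```

The same pattern, parameterized over tables and tolerance, is how scripted IPE and interface-control tests are typically automated.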
Posted 2 weeks ago
6.0 - 8.0 years
1 - 6 Lacs
Noida
Work from Office
Urgent hiring: Microsoft Fabric Cloud Architect | 6-8 years | Noida | Immediate to 30 days notice.
Skills: Azure Cloud, Microsoft Fabric, PySpark, DAX, Python, Azure Synapse, ADF, Databricks, ETL pipelines.
Posted 2 weeks ago
6.0 - 9.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Hi all,
This is an exciting career opportunity for an MSBI Developer position. Please find the JD below:
• Experience with MSBI, SSIS, SSRS, SQL Server
• Experience with ADF or Azure
Location: Pune, Bangalore, Hyderabad, Chennai
Experience: 6-9 years
Notice period: Immediate or max 10 days
If you are interested, please share your profile with jeyaramya.rajendran@zensar.com
Posted 2 weeks ago
3.0 - 8.0 years
20 - 25 Lacs
Bengaluru
Hybrid
About Quantzig: Quantzig is a global analytics and advisory firm with offices in the US, UK, Canada, China, and India. We have assisted our clients across the globe with end-to-end advanced analytics, visual storyboarding, machine learning, and data engineering solutions implementation for prudent decision-making. We are a rapidly growing organization built and operated by high-performance champions. If you have what it takes to be a champion, with the business and functional skills to take ownership of an entire project end-to-end and help build a team with a great work ethic and a drive to learn, you are the one we're looking for. Our clients love us for our solutioning capability and our enthusiasm, and we expect you to be a part of our growth story.
Company Website: https://www.quantzig.com/
Looking for immediate joiners.
Job Title: Hybrid BI + SAP
Purpose of the Role: We are looking for a technically strong and business-savvy Data Engineering & Analytics Specialist to support critical analytics workflows built on Azure and Databricks. The ideal candidate will bring hands-on experience in SQL, SAP data extraction, PySpark scripting, and pipeline orchestration via ADF, while also collaborating with business stakeholders to drive value from data. This is a fast-paced role that demands ownership, agility, and an end-to-end delivery mindset.
Key Responsibilities
• Design, develop, and maintain robust data pipelines using Azure Data Factory and Databricks (PySpark).
• Extract, transform, and load data from SAP systems and other enterprise sources, ensuring reliability and performance.
• Write complex SQL queries and optimize data processing workflows for analytical consumption.
• Work with business stakeholders to translate functional requirements into technical implementations.
• Support ad-hoc analyses and dashboard development in Excel and Power BI, enabling self-service and visualization.
• Ensure data governance, validation, and documentation across workflows.
• Collaborate with cross-functional teams across Data Engineering, BI, and Analytics.
• Contribute to process automation, pipeline scalability, and performance improvements.
• Own project delivery timelines and drive issue resolution with minimal supervision.
Required Qualifications
• 3-5 years of relevant experience in data engineering or advanced analytics roles.
• Strong expertise in SQL for data manipulation and business rule logic.
• Hands-on experience working with SAP data structures, extractions, and integration.
• Strong command of Databricks (PySpark) and ADF for data orchestration and transformation.
• Proficiency in Python for scripting and automation.
• Good understanding of Azure platform services, especially those related to data storage and compute.
• Intermediate proficiency in Excel and Power BI for reporting and stakeholder engagement.
• Strong verbal and written communication skills; ability to gather requirements and explain solutions to non-technical stakeholders.
• Demonstrated experience in managing or leading project deliverables independently.
Preferred Qualifications
• Experience working in notebook-based environments like Azure Databricks or Jupyter.
• Familiarity with DevOps, Git-based version control, and CI/CD pipelines for data workflows.
• Exposure to data warehousing concepts and building scalable ETL frameworks.
• Previous experience in a client-facing or delivery management role.
Posted 2 weeks ago
2.0 - 7.0 years
0 Lacs
India
On-site
WHO WE ARE
Sapaad is a global leader in all-in-one unified commerce platforms, dedicated to delivering world-class software solutions. Its flagship product, Sapaad, has seen tremendous success in the last decade, with thousands of customers worldwide, and many more signing on. Driven by a team of passionate developers and designers, Sapaad is constantly innovating, introducing cutting-edge features that reshape the industry. Headquartered in Singapore, with offices across five countries, Sapaad is backed by technology veterans with deep expertise in web, mobility, and e-commerce, making it a key player in the tech landscape.
THE OPPORTUNITY
Sapaad PTE LTD is seeking a Data Engineer who will take charge of constructing our distributed processing and big data infrastructure, as well as the accompanying applications and tools. We're looking for someone with a fervent drive to tackle intricate data challenges and collaborate closely with the data team, all while staying abreast of the latest features and tools in Big Data and Data Science. The successful candidate will play a pivotal role in supporting software developers, data architects, data analysts, and data scientists in various data initiatives, and will ensure the smooth and efficient delivery of data across ongoing projects. We require an individual who is self-directed and capable of adeptly managing the data requirements of multiple teams, systems, and products.
ROLES AND RESPONSIBILITIES
• Create and maintain optimal data pipeline architecture.
• Assemble large, complex data sets that meet functional and non-functional business requirements.
• Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
• Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS 'big data' technologies.
• Build analytics tools that utilize the data pipeline to provide actionable insights into key business performance metrics.
• Work with stakeholders including the Executive, Product, Data Architecture, and Design teams to assist with data-related technical issues and support their data infrastructure needs.
• Keep data separated and secure.
• Create data tools for analytics and data science team members that assist them in building and optimizing our product into an innovative industry leader.
• Work with data and analytics experts to strive for greater functionality in our data systems.
ROLE REQUIREMENTS
• 2 to 7 years of experience in a Data Engineer role, with a graduate degree in Computer Science, IT, Statistics, or another quantitative field.
• Advanced working SQL knowledge and experience working with relational databases and query authoring (SQL), as well as working familiarity with a variety of databases.
• Experience building and optimizing 'big data' data pipelines, architectures, and data sets.
• Strong analytic skills related to working with unstructured datasets.
• Ability to build processes supporting data transformation, data structures, metadata, dependency, and workload management.
• Working knowledge of message queuing, stream processing, and highly scalable 'big data' data stores.
• Good communication skills and a team-player attitude.
• Experience supporting and working with cross-functional teams in a dynamic environment.
Preferred experience with the following software/tools:
• Big data tools: Hadoop, Spark, Kafka, etc.
Experience with relational SQL and NoSQL databases. Experience with data pipeline / ETL tools like Informatica, DataStage, and with any cloud tool like Azure Data Factory (ADF), etc. Experience with data engineering on cloud services like Azure, AWS, or GCP. Experience with stream-processing systems: Storm, Spark-Streaming, etc. Experience with object-oriented/object function scripting languages: Python, Java, Scala, etc.
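As a rough illustration of the batch pipeline and big-data tooling named in this listing, here is a minimal PySpark ETL sketch; the paths, column names, and app name are hypothetical placeholders, not details from the role.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily_orders_batch").getOrCreate()

# Extract: read raw JSON events from a hypothetical landing zone.
raw = spark.read.json("s3://example-landing/orders/")

# Transform: keep completed orders and derive a calendar-date column.
orders = (
    raw.filter(F.col("status") == "COMPLETED")
       .withColumn("order_date", F.to_date("created_at"))
)

# Aggregate a key business metric for downstream analytics tools.
daily_revenue = orders.groupBy("order_date").agg(
    F.sum("total_amount").alias("revenue"),
    F.countDistinct("order_id").alias("order_count"),
)

# Load: write date-partitioned Parquet so BI queries can prune by date.
daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-warehouse/daily_revenue/"
)

spark.stop()

The same shape scales from a single machine to a cluster; typically only the session configuration and storage endpoints change.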
Posted 2 weeks ago
7.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.
EY GDS – Data and Analytics (D&A) – Manager – Azure Data Architect
As part of our EY-GDS D&A (Data and Analytics) team, we help our clients solve complex business challenges with the help of data and technology. We dive deep into data to extract the greatest value and discover opportunities in key businesses and functions like Banking, Insurance, Manufacturing and Auto, Healthcare, Retail, Supply Chain, and Finance.
The Opportunity
We’re looking for Managers (Big Data Architects) with strong technology and data understanding and proven delivery capability. This is a fantastic opportunity to be part of a leading firm as well as a growing Data and Analytics team.
Your Key Responsibilities
• Develop standardized practices for delivering new products and capabilities using Big Data and cloud technologies, covering data acquisition, transformation, analysis, modelling, governance, and data management.
• Interact with senior client technology leaders to understand their business goals, create and propose solutions, estimate effort, build architectures, and develop and deliver technology solutions.
• Define and develop client-specific best practices around data management within a cloud environment.
• Recommend design alternatives for data ingestion, processing, and provisioning layers.
• Design and develop data ingestion programs to process large data sets in batch mode using ADB, ADF, PySpark, Python, and Synapse.
• Develop data ingestion programs to ingest real-time data from live sources using Apache Kafka, Spark Streaming, and related technologies (a brief sketch follows this listing).
• Manage teams, with experience in end-to-end delivery.
• Build technical capability and teams to deliver.
Skills And Attributes For Success
• Strong understanding of, and familiarity with, all cloud ecosystem components.
• Strong understanding of underlying cloud architectural concepts and distributed computing paradigms.
• Experience in the development of large-scale data processing.
• Hands-on programming experience in ADB, ADF, Synapse, Python, PySpark, and SQL.
• Hands-on expertise in cloud services like AWS and/or the Microsoft Azure ecosystem.
• Solid understanding of ETL methodologies in a multi-tiered stack with data modelling and data governance.
• Experience with BI and data analytics databases.
• Experience converting business problems and challenges into technical solutions, considering security, performance, scalability, etc.
• Experience in enterprise-grade solution implementations.
• Experience in performance benchmarking of enterprise applications.
• Strong stakeholder, client, team, process, and delivery management skills.
To qualify for the role, you must have
• A flexible and proactive/self-motivated working style with strong personal ownership of problem resolution.
• Excellent written and verbal communication skills, both formal and informal.
• The ability to multi-task under pressure and work independently with minimal supervision.
• A collaborative, team-player mindset and enjoyment of a cooperative team environment.
• Adaptability to new technologies and standards.
• Participation in all aspects of the Big Data solution delivery life cycle, including analysis, design, development, testing, production deployment, and support.
• A minimum of 7 years of hands-on experience in one or more of the above areas.
• A minimum of 10 years of industry experience.
Ideally, you’ll also have
• Project management skills
• Client management skills
• Solutioning skills
What We Look For
People with technical experience and enthusiasm to learn new things in this fast-moving environment.
What Working At EY Offers
At EY, we’re dedicated to helping our clients, from start-ups to Fortune 500 companies, and the work we do with them is as varied as they are. You get to work on inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer:
• Support, coaching and feedback from some of the most engaging colleagues around
• Opportunities to develop new skills and progress your career
• The freedom and flexibility to handle your role in a way that’s right for you
EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
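As a rough sketch of the real-time ingestion responsibility above, a Spark Structured Streaming job reading from Kafka might look like the following; the broker address, topic, schema, and storage paths are hypothetical placeholders.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("live_events_ingest").getOrCreate()

# Expected shape of each event's JSON payload (illustrative).
event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

# Source: subscribe to a Kafka topic; values arrive as raw bytes.
stream = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "live-events")
         .load()
)

# Parse the JSON payload into typed columns.
events = stream.select(
    F.from_json(F.col("value").cast("string"), event_schema).alias("e")
).select("e.*")

# Sink: append micro-batches to a Delta table; the checkpoint location
# is what makes restarts fault-tolerant.
query = (
    events.writeStream.format("delta")
          .option("checkpointLocation", "/mnt/checkpoints/live_events")
          .outputMode("append")
          .start("/mnt/bronze/live_events")
)
query.awaitTermination()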
Posted 2 weeks ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Functional Responsibility
• Sound knowledge of the banking domain (wholesale, retail, core banking, trade finance).
• In-depth understanding of RBI regulatory reporting and guidelines, including the RBI ADF (Automated Data Flow) approach document.
• Experience in handling various important regulatory returns such as Form A, Form VIII (SLR), Form X, BSR, SFR (Maintenance of CRR), DSB returns, Forex, and priority-sector lending related returns to RBI.
• Understanding of the balance sheet and P&L.
• Supporting clients by providing user manuals and training, conducting workshops, and preparing case studies.
Process Adherence
• Review the initial and ongoing development of the product.
• Responsible for documenting, validating, communicating, and coordinating requirements.
• Provide support to business development by preparing proposals, concept presentations, and outreach activities.
• Maintain and update trackers, review test cases, and provide training to internal as well as external stakeholders.
Client Management / Stakeholder Management
• Interact with clients in relation to assignment execution and manage operational relationships effectively.
• Interact with clients for requirement gathering, issue tracking, change request discussions, FRD writing, and preparing project status reports.
People Development
• Coordinate with the assignment-specific team of consultants, developers, and QA, and monitor performance to ensure timely and effective delivery.
Posted 2 weeks ago
7.0 years
0 Lacs
Kochi, Kerala, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.
EY GDS – Data and Analytics (D&A) – Manager – Azure Data Architect
As part of our EY-GDS D&A (Data and Analytics) team, we help our clients solve complex business challenges with the help of data and technology. We dive deep into data to extract the greatest value and discover opportunities in key businesses and functions like Banking, Insurance, Manufacturing and Auto, Healthcare, Retail, Supply Chain, and Finance.
The Opportunity
We’re looking for Managers (Big Data Architects) with strong technology and data understanding and proven delivery capability. This is a fantastic opportunity to be part of a leading firm as well as a growing Data and Analytics team.
Your Key Responsibilities
• Develop standardized practices for delivering new products and capabilities using Big Data and cloud technologies, covering data acquisition, transformation, analysis, modelling, governance, and data management.
• Interact with senior client technology leaders to understand their business goals, create and propose solutions, estimate effort, build architectures, and develop and deliver technology solutions.
• Define and develop client-specific best practices around data management within a cloud environment.
• Recommend design alternatives for data ingestion, processing, and provisioning layers.
• Design and develop data ingestion programs to process large data sets in batch mode using ADB, ADF, PySpark, Python, and Synapse (a brief sketch follows this listing).
• Develop data ingestion programs to ingest real-time data from live sources using Apache Kafka, Spark Streaming, and related technologies.
• Manage teams, with experience in end-to-end delivery.
• Build technical capability and teams to deliver.
Skills And Attributes For Success
• Strong understanding of, and familiarity with, all cloud ecosystem components.
• Strong understanding of underlying cloud architectural concepts and distributed computing paradigms.
• Experience in the development of large-scale data processing.
• Hands-on programming experience in ADB, ADF, Synapse, Python, PySpark, and SQL.
• Hands-on expertise in cloud services like AWS and/or the Microsoft Azure ecosystem.
• Solid understanding of ETL methodologies in a multi-tiered stack with data modelling and data governance.
• Experience with BI and data analytics databases.
• Experience converting business problems and challenges into technical solutions, considering security, performance, scalability, etc.
• Experience in enterprise-grade solution implementations.
• Experience in performance benchmarking of enterprise applications.
• Strong stakeholder, client, team, process, and delivery management skills.
To qualify for the role, you must have
• A flexible and proactive/self-motivated working style with strong personal ownership of problem resolution.
• Excellent written and verbal communication skills, both formal and informal.
• The ability to multi-task under pressure and work independently with minimal supervision.
• A collaborative, team-player mindset and enjoyment of a cooperative team environment.
• Adaptability to new technologies and standards.
• Participation in all aspects of the Big Data solution delivery life cycle, including analysis, design, development, testing, production deployment, and support.
• A minimum of 7 years of hands-on experience in one or more of the above areas.
• A minimum of 10 years of industry experience.
Ideally, you’ll also have
• Project management skills
• Client management skills
• Solutioning skills
What We Look For
People with technical experience and enthusiasm to learn new things in this fast-moving environment.
What Working At EY Offers
At EY, we’re dedicated to helping our clients, from start-ups to Fortune 500 companies, and the work we do with them is as varied as they are. You get to work on inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer:
• Support, coaching and feedback from some of the most engaging colleagues around
• Opportunities to develop new skills and progress your career
• The freedom and flexibility to handle your role in a way that’s right for you
EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
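As a rough sketch of the batch ingestion responsibility above, a Databricks-style PySpark job landing a file extract (for example, one delivered by an ADF copy activity) into a Delta table might look like the following; the storage account, container, and table names are hypothetical placeholders.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("adls_batch_ingest").getOrCreate()

# Read a day's extract landed in ADLS Gen2 (illustrative path).
src = "abfss://landing@examplestore.dfs.core.windows.net/sales/2024-01-01/"
sales = spark.read.option("header", "true").csv(src, inferSchema=True)

# Light conformance: dedup, type casting, and a load-date audit column.
conformed = (
    sales.dropDuplicates(["sale_id"])
         .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
         .withColumn("load_date", F.current_date())
)

# Persist to a governed Delta table for Synapse/BI consumption.
conformed.write.format("delta").mode("append").saveAsTable("bronze.sales")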
Posted 2 weeks ago