
1032 ADF Jobs - Page 6

JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

20.0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site


We are seeking an experienced Enterprise Architect with expertise in SAP ECC and SuccessFactors to lead the development and maintenance of our enterprise architecture strategy. This strategic role involves collaborating with stakeholders, aligning technology with business needs, and ensuring scalable, secure, and efficient enterprise-level implementations.

About RWS Technology Services – India
RWS Technology Services provides end-to-end business technology solutions. Our team of experts offers a wide portfolio of services around digital technologies and technology operations to help organizations stay ahead of the curve, lower their total cost of ownership, and improve efficiencies.

How we help: RWS Technology Services offers state-of-the-art technology solutions across the product lifecycle management process – all the way from consulting, concept, design, and development to maintenance and optimization. We specialize in helping companies excel in the global, fast-paced technology landscape by supporting them in every aspect of customer interaction: globalization, digitization, customer experience management, business process automation, and technology infrastructure modernization.

Why choose RWS?
- Innovative: RWS understands the needs of our customers to use the best talent, latest technologies, and solutions to help create connected customer experiences. We help our clients differentiate themselves by making their product engineering capabilities more data-driven, powered by AI, and supported by cloud services and intelligent edge devices.
- Tailored: RWS Technology Services has been delivering technology services and solutions to start-ups, mid-sized companies, and Fortune 500 corporations for over 20 years. Our technology experience across all key industries ensures tailored application development to meet the unique business needs of our clients. Our group is led by dedicated on-shore and off-shore project management teams of highly experienced professionals specializing in both agile and waterfall methodologies. We understand complex technology deployments and have a proven record of managing business-critical, time-sensitive, and highly secure deployments that scale with your business growth.

Key Responsibilities
- Define and maintain the enterprise architecture strategy and roadmap.
- Collaborate with stakeholders to translate business requirements into scalable technical solutions.
- Ensure alignment with industry standards, IT best practices, and security frameworks.
- Design and implement secure, scalable, and high-performing enterprise solutions.
- Evaluate emerging technologies and recommend adoption where beneficial.
- Establish and enforce technical standards, policies, and best practices.
- Provide architectural guidance to development teams for optimal solution design.
- Ensure solutions align with business continuity and disaster recovery plans.

Skills & Experience
RWS is looking for candidates with 15+ years of relevant experience who can join us on a part-time, freelance, or contract basis.
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 15+ years of experience in technology architecture, including 5+ years in an enterprise architect role.
- Strong expertise in SAP ECC and SuccessFactors architecture, data models, and integrations.
- Familiarity with Azure, ADF, or AppFabric for data integration.
- Experience with Power BI for data visualization.
- Proficiency in cloud computing, microservices architecture, and containerization.
- Experience with enterprise integration technologies such as ESBs and API gateways.
- Strong understanding of IT security and experience designing secure solutions.
- Experience in agile environments and DevOps methodologies.
- Excellent communication, stakeholder management, and problem-solving skills.
- Ability to work effectively in cross-functional, fast-paced environments.

Life at RWS
RWS is a content solutions company, powered by technology and human expertise. We grow the value of ideas, data, and content by making sure organizations are understood. Everywhere. Our proprietary technology, 45+ AI patents, and human experts help organizations bring ideas to market faster, build deeper relationships across borders and cultures, and enter new markets with confidence – growing their business and connecting them to a world of opportunities. It's why over 80 of the world's top 100 brands trust RWS to drive innovation, inform decisions, and shape brand experiences. With 60+ global locations across five continents, our teams work with businesses across almost all industries. Innovating since 1958, RWS is headquartered in the UK and publicly listed on AIM, the London Stock Exchange regulated market (RWS.L).

RWS Values: We Partner, We Pioneer, We Progress – and we'll Deliver together. For further information, please visit: RWS.

RWS embraces DEI and promotes equal opportunity. We are an Equal Opportunity Employer and prohibit discrimination and harassment of any kind. RWS is committed to the principle of equal employment opportunity for all employees and to providing employees with a work environment free of discrimination and harassment. All employment decisions at RWS are based on business needs, job requirements, and individual qualifications, without regard to race, religion, nationality, ethnicity, sex, age, disability, or sexual orientation. RWS will not tolerate discrimination based on any of these characteristics.

Recruitment Agencies: RWS Holdings PLC does not accept agency resumes. Please do not forward any unsolicited resumes to any RWS employees. Any unsolicited resume received will be treated as the property of RWS, and the Terms & Conditions associated with the use of such resume will be considered null and void.

RWS. Smarter content starts here. www.rws.com

Posted 4 days ago

10.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Description

Key Responsibilities
- Develop and maintain supply chain analytics to monitor operational performance and trends.
- Lead and participate in Six Sigma and supply chain improvement initiatives.
- Ensure data integrity and consistency across all analytics and reporting platforms.
- Design and implement reporting solutions for key supply chain KPIs.
- Analyze KPIs to identify improvement opportunities and develop actionable insights.
- Build and maintain repeatable, scalable analytics using business systems and BI tools.
- Conduct scenario modeling and internal/external benchmarking.
- Provide financial analysis to support supply chain decisions.
- Collaborate with global stakeholders to understand requirements and deliver impactful solutions.

Qualifications
- Bachelor's degree in Engineering, Computer Science, Supply Chain, or a related field.
- Relevant certifications in BI tools, Agile methodologies, or cloud platforms are a plus.
- This position may require licensing for compliance with export controls or sanctions regulations.

Experience
- 8–10 years of total experience, with at least 6 years in a relevant analytics or supply chain role.
- Proven experience in leading small teams and managing cross-functional projects.

Technical Skills
- Expertise in SQL, SQL Server, SSIS, SSAS, and Power BI.
- Advanced DAX development for complex reporting needs.
- Performance optimization for SQL and SSAS environments.
- Cloud and data engineering: Azure Synapse, Azure Data Factory (ADF), Python, Snowflake.
- Agile methodology: experience working in Agile teams and sprints.

Job: Supply Chain Planning
Organization: Cummins Inc.
Role Category: Hybrid
Job Type: Exempt - Experienced
ReqID: 2415717
Relocation Package: No
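As a small illustration of the KPI reporting this role describes, here is a minimal sketch computing an on-time delivery rate in SQL from Python. The shipments table and its columns are hypothetical, and SQLite stands in for the SQL Server stack named above.

```python
# Minimal sketch: computing a supply chain KPI (on-time delivery rate)
# with plain SQL from Python. Table and column names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE shipments (id INTEGER, promised_date TEXT, delivered_date TEXT);
    INSERT INTO shipments VALUES
        (1, '2024-01-10', '2024-01-09'),
        (2, '2024-01-12', '2024-01-15'),
        (3, '2024-01-20', '2024-01-20');
""")

row = conn.execute("""
    SELECT ROUND(100.0 * SUM(delivered_date <= promised_date) / COUNT(*), 1)
    FROM shipments
""").fetchone()
print(f"On-time delivery rate: {row[0]}%")  # -> 66.7% (2 of 3 shipments on time)
```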

Posted 4 days ago

5.0 years

0 Lacs

Bhopal, Madhya Pradesh, India

On-site


At Iron Mountain we know that work, when done well, makes a positive impact for our customers, our employees, and our planet. That's why we need smart, committed people to join us. Whether you're looking to start your career or make a change, talk to us and see how you can elevate the power of your work at Iron Mountain.

We provide expert, sustainable solutions in records and information management, digital transformation services, data centers, asset lifecycle management, and fine art storage, handling, and logistics. We proudly partner every day with our 225,000 customers around the world to preserve their invaluable artifacts, extract more from their inventory, and protect their data privacy in innovative and socially responsible ways.

Are you curious about being part of our growth story while evolving your skills in a culture that will welcome your unique contributions? If so, let's start the conversation.

About the role:
As a Senior Executive – Digital Solutions at Iron Mountain, you will be primarily responsible for managing scanning and digitization projects at both customer sites and IMI facilities. This includes supervising and coordinating in-house teams as well as vendor resources, ensuring seamless, high-quality, and on-time project delivery aligned with the defined scope of work. You will also handle key project milestones such as Proof of Concept (POC), User Acceptance Testing (UAT), and Work Completion Certifications (WCC). Additionally, you will support vertical leads in achieving monthly, quarterly, and annual revenue targets. You should be collaborative, open to automation opportunities, and comfortable working with advanced scanning and production imaging equipment.

Qualifications and Skills:
- Target-driven and self-motivated team player with a strong understanding of scanning, digitization, metadata handling, Document Management Systems (DMS), workflow processes, and automation of repetitive tasks.
- Prior experience managing scanning and digitization projects involving both in-house and outsourced/vendor teams.
- Minimum 2–5 years of relevant industry experience, preferably having led teams of 50+ members.
- Proficient in Google Sheets and skilled in MIS reporting.
- Education: graduation is mandatory; an MBA in Operations is preferred.
- Familiarity with production scanners such as ADF, overhead, flatbed, BookEye, etc.
- Customer-focused mindset with a willingness to relocate based on project requirements.
- A proven track record in digitization projects will be an added advantage.

Category: Operations Group

Iron Mountain is a global leader in storage and information management services trusted by more than 225,000 organizations in 60 countries. We safeguard billions of our customers' assets, including critical business information, highly sensitive data, and invaluable cultural and historic artifacts. Take a look at our history here.

Iron Mountain helps lower cost and risk, comply with regulations, recover from disaster, and enable digital and sustainable solutions, whether in information management, digital transformation, secure storage and destruction, data center operations, cloud services, or art storage and logistics. Please see our Values and Code of Ethics for a look at our principles and aspirations in elevating the power of our work together.

If you have a physical or mental disability that requires special accommodations, please let us know by sending an email to accommodationrequest@ironmountain.com. See the Supplement to learn more about Equal Employment Opportunity.

Iron Mountain is committed to a policy of equal employment opportunity. We recruit and hire applicants without regard to race, color, religion, sex (including pregnancy), national origin, disability, age, sexual orientation, veteran status, genetic information, gender identity, gender expression, or any other factor prohibited by law. To view the Equal Employment Opportunity is the Law posters and the supplement, as well as the Pay Transparency Policy Statement, CLICK HERE.

Requisition: J0088899

Posted 4 days ago

10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Job Summary:
We are hiring an experienced Application Security Engineer specializing in Java ADF and Jasper Reports, with a strong track record of resolving Vulnerability Assessment and Penetration Testing (VAPT) findings. The ideal candidate must have secured complex enterprise applications, including online payments and eCommerce systems, particularly on legacy stacks such as Java 1.7, MySQL 5.5, and JBoss 7.1. This role is hands-on and remediation-focused, requiring a deep understanding of secure development and hardening in deprecated environments.

Key Responsibilities:
- Lead remediation of high-priority VAPT findings in large-scale enterprise systems.
- Secure passwords and PII data at all stages:
  - At view/input: masking, form validation, secure front-end patterns
  - In transit: TLS, secure headers, HTTPS enforcement
  - At rest: encryption, proper salting and hashing (e.g., bcrypt, SHA-256)
- Fix injection attacks (SQLi, XSS, LDAPi, command injection), CSRF, clickjacking, IDOR, and other OWASP Top 10 issues.
- Apply secure API integration practices: auth tokens, rate limiting, input validation.
- Harden session and cookie management (HttpOnly, Secure, and SameSite attributes; session fixation prevention).
- Review and fix insecure code in ADF Faces, Task Flows, Bindings, BC4J, and Jasper Reports.
- Secure Jasper Reports generation and access (parameter validation, report-level authorization, export sanitization).
- Work hands-on with legacy platforms (Java 1.7, MySQL 5.5, JBoss 7.1), applying secure remediation without disrupting production.
- Strengthen security of online payment/eCommerce systems with proven compliance (e.g., PCI-DSS).
- Maintain detailed remediation logs, documentation, and evidence for audits and compliance (GDPR, DPDPA, STQC, etc.).

Technical Skills:
- Java EE, Oracle ADF (ADF Faces, Task Flows, BC4J), Jasper Reports Studio/XML
- Strong debugging skills in Java 1.7, MySQL 5.5, JBoss 7.1
- Secure development lifecycle practices with a focus on legacy modernization
- Strong grounding in OWASP Top 10, SANS 25, CVSS, and secure coding principles
- Experience in PII handling, data masking, salting, and hashing
- Proficiency in OAuth2, SAML, JWT, and RBAC security models
- Performance improvement and application profiling
- Expertise in analyzing application, system, and security logs to identify and fix issues
- Ability to ensure application stability and high availability
- Acting as the champion/lead who guides the team in fixing issues
- PHP experience is a plus, especially in legacy web app environments

Required Experience:
- 5–10+ years in application development and security
- Demonstrated experience remediating security vulnerabilities in eCommerce and payment platforms
- Ability to work independently in production environments with deprecated technologies

Preferred Qualifications:
- B.E./B.Tech/MCA in Computer Science, IT, or Cybersecurity
- Use of AI tools for identifying and fixing issues is a real plus
- Any VAPT or application security certification is a plus (e.g., CEH, OSCP, CSSLP, GWAPT, Oracle Certified Expert)
- Familiarity with compliance standards: PCI-DSS, GDPR, DPDPA, STQC
- Proficiency with security tools: Fortify, ZAP, SonarQube, Checkmarx, Burp Suite

Soft Skills:
- Strong problem-solving and diagnostic capabilities, especially in large monolithic codebases
- Good documentation and communication skills for cross-functional collaboration
- Able to work under pressure, troubleshoot complex issues, and deliver secure code fixes rapidly
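Two of the remediations named above lend themselves to a compact illustration: salted password hashing at rest and a parameterized query that closes an SQL injection hole. The sketch below is written in Python for brevity (the posting's stack is Java), with a hypothetical users table.

```python
# Minimal sketch of two common VAPT remediations. Shown in Python for
# brevity; the same ideas apply via jBCrypt and PreparedStatement in Java.
import sqlite3
import bcrypt  # pip install bcrypt

# At rest: bcrypt embeds a per-password salt in the hash it returns,
# so no separate salt column is needed.
password = b"s3cret!"
hashed = bcrypt.hashpw(password, bcrypt.gensalt())
assert bcrypt.checkpw(password, hashed)

# SQLi fix: bind user input as a parameter instead of building SQL strings.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, pw_hash BLOB)")
conn.execute("INSERT INTO users VALUES (?, ?)", ("alice", hashed))
user = conn.execute(
    "SELECT name FROM users WHERE name = ?", ("alice",)  # never concatenate here
).fetchone()
print(user)  # -> ('alice',)
```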

Posted 4 days ago

3.0 years

0 Lacs

Greater Chennai Area

On-site


Who You'll Work With
Driving lasting impact and building long-term capabilities with our clients is not easy work. You are the kind of person who thrives in a high-performance/high-reward culture – doing hard things, picking yourself up when you stumble, and having the resilience to try another way forward. In return for your drive, determination, and curiosity, we'll provide the resources, mentorship, and opportunities you need to become a stronger leader faster than you ever thought possible. Your colleagues – at all levels – will invest deeply in your development, just as much as they invest in delivering exceptional results for clients. Every day, you'll receive apprenticeship, coaching, and exposure that will accelerate your growth in ways you won't find anywhere else.

When you join us, you will have:
- Continuous learning: Our learning and apprenticeship culture, backed by structured programs, is all about helping you grow while creating an environment where feedback is clear, actionable, and focused on your development. The real magic happens when you take the input from others to heart and embrace the fast-paced learning experience, owning your journey.
- A voice that matters: From day one, we value your ideas and contributions. You'll make a tangible impact by offering innovative ideas and practical solutions. We not only encourage diverse perspectives, but they are critical in driving us toward the best possible outcomes.
- Global community: With colleagues across 65+ countries and over 100 different nationalities, our firm's diversity fuels creativity and helps us come up with the best solutions for our clients. Plus, you'll have the opportunity to learn from exceptional colleagues with diverse backgrounds and experiences.
- World-class benefits: On top of a competitive salary (based on your location, experience, and skills), we provide a comprehensive benefits package, which includes medical, dental, mental health, and vision coverage for you, your spouse/partner, and children.

Your Impact
As a Data Engineer I at McKinsey & Company, you will play a key role in designing, building, and deploying scalable data pipelines and infrastructure that enable our analytics and AI solutions. You will work closely with product managers, developers, asset owners, and client stakeholders to turn raw data into trusted, structured, and high-quality datasets used in decision-making and advanced analytics.

Your core responsibilities will include:
- Developing robust, scalable data pipelines for ingesting, transforming, and storing data from multiple structured and unstructured sources using Python/SQL.
- Creating and optimizing data models and data warehouses to support reporting, analytics, and application integration.
- Working with cloud-based data platforms (AWS, Azure, or GCP) to build modern, efficient, and secure data solutions.
- Contributing to R&D projects and internal asset development.
- Contributing to infrastructure automation and deployment pipelines using containerization and CI/CD tools.
- Collaborating across disciplines to integrate data engineering best practices into broader analytical and generative AI (gen AI) workflows.
- Supporting and maintaining data assets deployed in client environments with a focus on reliability, scalability, and performance.

Furthermore, you will have the opportunity to explore and contribute to solutions involving generative AI, such as vector embeddings, retrieval-augmented generation (RAG), semantic search, and LLM-based prompting, especially as we integrate gen AI capabilities into our broader data ecosystem.

Your Qualifications and Skills
- Bachelor's degree in computer science, engineering, mathematics, or a related technical field (or equivalent practical experience).
- 3+ years of experience in data engineering, analytics engineering, or a related technical role.
- Strong Python programming skills with demonstrated experience building scalable data workflows and ETL/ELT pipelines.
- Proficient in SQL, with experience designing normalized and denormalized data models.
- Hands-on experience with orchestration tools such as Airflow, Kedro, or Azure Data Factory (ADF).
- Familiarity with cloud platforms (AWS, Azure, or GCP) for building and managing data infrastructure.
- Strong communication skills, especially around breaking down complex structures into digestible and relevant points for a diverse set of clients and colleagues at all levels.
- High-value personal qualities, including critical thinking and creative problem-solving skills, and an ability to influence and work in teams.
- An entrepreneurial mindset and ownership mentality are a must, along with a desire to learn and develop within a dynamic, self-led organization.
- Hands-on experience with containerization technologies (Docker, Docker Compose).
- Hands-on experience with automation frameworks (GitHub Actions, CircleCI, Jenkins, etc.).
- Exposure to generative AI tools or concepts (e.g., OpenAI, Cohere, embeddings, vector databases).
- Experience working in Agile teams and contributing to design and architecture discussions.
- Contributions to open-source projects or active participation in data engineering communities.
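Since the role centers on Python-based pipelines and names Airflow among the orchestration tools, here is a minimal sketch of an orchestrated ETL DAG. The DAG id, task bodies, and schedule are hypothetical, not taken from the posting.

```python
# Minimal sketch of an orchestrated ETL pipeline with Airflow 2.4+
# (earlier 2.x versions use schedule_interval instead of schedule).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():   # pull raw data from a source system
    ...

def transform():  # clean and model it
    ...

def load():      # write the result to the warehouse
    ...

with DAG(
    "daily_sales_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)
    # Linear dependency: extract, then transform, then load.
    t_extract >> t_transform >> t_load
```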

Posted 4 days ago

5.0 years

0 Lacs

India

Remote


Job Title: Senior Data Engineer
Experience: 5+ years
Location: Remote
Contract Duration: Short term
Work Time: IST shift

Job Description
We are seeking a skilled and experienced Senior Data Engineer to develop scalable and optimized data pipelines using the Databricks Lakehouse platform. The role requires proficiency in Apache Spark, PySpark, cloud data services (AWS, Azure, GCP), and solid programming knowledge in Python and Java. The engineer will collaborate with cross-functional teams to design and deliver high-performing data solutions.

Responsibilities

Data Pipeline Development
- Build efficient ETL/ELT workflows using Databricks and Spark for batch and streaming data
- Utilize Delta Lake and Unity Catalog for structured data management
- Optimize Spark jobs using tuning techniques such as caching, partitioning, and serialization

Cloud-Based Implementation
- Develop and deploy data workflows on AWS (S3, EMR, Glue), Azure (ADLS, ADF, Synapse), and/or GCP (GCS, Dataflow, BigQuery)
- Manage and optimize data storage, access control, and orchestration using native cloud tools
- Implement data ingestion and querying with Databricks Auto Loader and SQL Warehousing

Programming and Automation
- Write clean, reusable, and production-grade code in Python and Java
- Automate workflows using orchestration tools like Airflow, ADF, or Cloud Composer
- Implement testing, logging, and monitoring mechanisms

Collaboration and Support
- Work closely with data analysts, scientists, and business teams to meet data requirements
- Support and troubleshoot production workflows
- Document solutions, maintain version control, and follow Agile/Scrum methodologies

Required Skills

Technical Skills
- Databricks: experience with notebooks, cluster management, Delta Lake, Unity Catalog, and job orchestration
- Spark: proficient in transformations, joins, window functions, and tuning
- Programming: strong in PySpark and Java, with data validation and error handling expertise
- Cloud: experience with AWS, Azure, or GCP data services and security frameworks
- Tools: familiarity with Git, CI/CD, Docker (preferred), and data monitoring tools

Experience
- 5–8 years in data engineering or backend development
- Minimum 1–2 years of hands-on experience with Databricks and Spark
- Experience with large-scale data migration, processing, or analytics projects

Certifications (Optional but Preferred)
- Databricks Certified Data Engineer Associate

Working Conditions
- Full-time remote work with availability during IST hours
- Occasional on-site presence may be required during client visits
- No regular travel required
- On-call support expected during deployment phases
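As a flavor of the Spark tuning techniques the posting lists (caching, partitioning, and broadcast joins), here is a minimal PySpark sketch; the input paths, join key, and partition count are hypothetical.

```python
# Minimal sketch of common Spark job tuning: repartition on the join key,
# cache a reused DataFrame, and broadcast the small side of a join.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("tuning-sketch").getOrCreate()

orders = spark.read.parquet("/data/orders")     # large fact table
dims = spark.read.parquet("/data/dim_product")  # small dimension table

# Repartition on the join key to reduce shuffle skew, then cache the
# result because several downstream aggregations reuse it.
orders = orders.repartition(200, "product_id").cache()

# Broadcast the small side so the join avoids shuffling the large table.
joined = orders.join(broadcast(dims), "product_id")
joined.groupBy("category").count().show()
```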

Posted 4 days ago

0 years

0 Lacs

India

Remote


Years of experience: 8+
Mode of work: Remote

Design, develop, modify, and test software applications for the healthcare industry in an agile environment. Duties include:
- Develop, support/maintain, and deploy software to support a variety of business needs
- Provide technical leadership in the design, development, testing, deployment, and maintenance of software solutions
- Design and implement platform and application security for applications
- Perform advanced query analysis and performance troubleshooting
- Coordinate with senior-level stakeholders to ensure the development of innovative software solutions to complex technical and creative issues
- Re-design software applications to improve maintenance cost, testing functionality, platform independence, and performance
- Manage user stories and project commitments in an agile framework to rapidly deliver value to customers
- Deploy and operate software solutions using a DevOps model

Required skills: Azure Delta Lake, ADF, Databricks, PySpark, Oozie, Airflow, big data technologies (HBase, Hive), CI/CD (GitHub/Jenkins)
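Given the PySpark/Hive stack listed above, here is a minimal sketch of the kind of job it implies: reading a Hive-backed table and writing an aggregated result. The database and table names are hypothetical.

```python
# Minimal sketch, assuming a Hive metastore is configured: aggregate a
# raw claims table into a daily rollup. Table names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("claims-rollup")
    .enableHiveSupport()  # lets Spark resolve tables in the Hive metastore
    .getOrCreate()
)

claims = spark.table("healthcare.claims_raw")
daily = (
    claims.groupBy("claim_date", "provider_id")
    .agg(F.sum("amount").alias("total_amount"),
         F.count("*").alias("n_claims"))
)
daily.write.mode("overwrite").saveAsTable("healthcare.claims_daily")
```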

Posted 4 days ago

2.0 - 3.0 years

0 Lacs

Gurgaon, Haryana, India

On-site


Job Title: Middleware Administrator – L1

Roles and Responsibilities
We are looking for a passionate candidate who can perform Middleware L1-level tasks.

Eligibility
- Education: B.E. / B.Tech / BCA (on-site); relevant certification preferred.
- Experience: 2–3 years of relevant experience; ITIL trained; OEM certified on at least one technology.
- Technology: Oracle Forms, Oracle Fusion Middleware

Desired Skills & Experience
✓ Should be a team player
✓ Communication and problem-solving – should have good communication skills and the ability to solve problems
✓ Process knowledge – working knowledge of an ITSM tool and knowledge of ITIL processes, i.e., SR, Incident, Change, Release & Problem Management, etc.
✓ Should have a collaborative approach and adaptability

Technical Skills
✓ Oracle applications: Oracle Forms 10g, Oracle SSO 10g, OID 10g, Oracle Portal 10g, Oracle Reports 10g, Internet Application Server (OAS) 10.1.2.2.0, Oracle Web Server (OWS) 10.1.2.2.0, Oracle WebCenter Portal 12.2.1.3, Oracle Access Manager 12.2.1.3, Oracle Internet Directory 12.2.1.3, Oracle WebLogic Server 12.2.1.3, Oracle HTTP Server 12.2.1.3, Oracle ADF 12.2.1.3 (Fusion Middleware), Oracle Forms 12.2.1.3, Oracle Reports 12.2.1.3, mobile apps, Tomcat, etc.
✓ Microsoft applications: Windows IIS, portal, web cache, BizTalk application, and DNS applications
✓ Operating systems: RHEL 7, 8, 9
✓ Tools & utilities: ITSM tools (ServiceNow, Symphony SUMMIT), JIRA

Key Responsibilities

Application Monitoring Services
✓ Monitor application response times from the end-user perspective in real time and alert organizations when performance is unacceptable. By alerting the user to problems and intelligently segmenting response times, monitoring should quickly expose problem sources and minimize the time necessary for resolution.
✓ Allow specific application transactions to be captured and monitored separately, so administrators can select the most important operations within business-critical applications to be measured and tracked individually.
✓ Use baseline-oriented thresholds to raise alerts when application response times deviate from acceptable levels, allowing IT administrators to respond quickly to problems and minimize the impact on service delivery.
✓ Automatically segment response-time information into network, server, and local workstation components to easily identify the source of bottlenecks.
✓ Monitor applications, including Oracle Forms 10g, Oracle SSO 10g, OID 10g, Oracle Portal 10g, Oracle Reports 10g, Internet Application Server (OAS) 10.1.2.2.0, Oracle Web Server (OWS) 10.1.2.2.0, Oracle WebCenter Portal 12.2.1.3, Oracle Access Manager 12.2.1.3, Oracle Internet Directory 12.2.1.3, Oracle WebLogic Server 12.2.1.3, Oracle HTTP Server 12.2.1.3, Oracle ADF 12.2.1.3 (Fusion Middleware), Oracle Forms 12.2.1.3, Oracle Reports 12.2.1.3, mobile apps, Windows IIS, portal, web cache, BizTalk application and DNS applications, Tomcat, etc.
✓ Shut down and start up applications, generate MIS reports, monitor application load, execute user account management scripts, analyse system events, monitor error logs, etc.
✓ Comply with the daily health checklist and keep the portal updated
✓ Log system events and incidents
✓ Update SR and incident tickets in the Symphony iServe tool

Application Release Management
✓ Schedule, coordinate, and manage releases for applications
✓ Take application code backups, place new code, and restart the services
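As an illustration of the baseline-threshold response-time monitoring described above, here is a minimal Python sketch. The URL and threshold are hypothetical, and a real deployment would rely on the ITSM and monitoring tooling the posting lists rather than a hand-rolled script.

```python
# Minimal sketch: check an application's response time against a baseline
# threshold and print an alert on breach or failure. Standard library only.
import time
import urllib.request

URL = "http://example-portal.internal/health"  # hypothetical endpoint
BASELINE_MS = 800  # acceptable response time, established from history

start = time.monotonic()
try:
    with urllib.request.urlopen(URL, timeout=10) as resp:
        elapsed_ms = (time.monotonic() - start) * 1000
        if resp.status != 200 or elapsed_ms > BASELINE_MS:
            print(f"ALERT: {URL} returned {resp.status} in {elapsed_ms:.0f} ms")
        else:
            print(f"OK: {elapsed_ms:.0f} ms")
except OSError as exc:
    print(f"ALERT: {URL} unreachable: {exc}")
```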

Posted 4 days ago

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture, and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY GDS – Data and Analytics (D&A) – Azure Data Engineer – Senior

As part of our EY-GDS D&A (Data and Analytics) team, we help our clients solve complex business challenges with the help of data and technology. We dive deep into data to extract the greatest value and discover opportunities in key businesses and functions like Banking, Insurance, Manufacturing and Auto, Healthcare, Retail, Supply Chain, and Finance.

The opportunity
We're looking for candidates with strong technology and data understanding of the big data engineering space and proven delivery capability. This is a fantastic opportunity to be part of a leading firm as well as a growing Data and Analytics team.

Your Key Responsibilities
- Develop and deploy big data pipelines in a cloud environment using Azure Cloud services
- Handle ETL design and development, and migrate existing on-prem ETL routines to cloud services
- Interact with senior leaders, understand their business goals, and contribute to the delivery of the workstreams
- Design and optimize model code for faster execution

Skills and Attributes for Success
- Overall 3+ years of IT experience, with 2+ years of relevant experience in Azure Data Factory (ADF) and good hands-on exposure to the latest ADF version
- Hands-on experience with Azure Functions and Azure Synapse (formerly SQL Data Warehouse)
- Project experience with Azure Data Lake / Blob (for storage)
- Basic understanding of Batch account configuration and the various control options
- Sound knowledge of Databricks and Logic Apps
- Ability to coordinate independently with business stakeholders, understand the business requirements, and implement them using ADF

To qualify for the role, you must
- Be a computer science graduate or equivalent with 3–7 years of industry experience
- Have working experience in an Agile-based delivery methodology (preferable)
- Have a flexible and proactive/self-motivated working style with strong personal ownership of problem resolution
- Be an excellent communicator (written and verbal, formal and informal)
- Participate in all aspects of the Big Data solution delivery life cycle, including analysis, design, development, testing, production deployment, and support

Ideally, you'll also have
- Client management skills

What We Look For
People with technical experience and enthusiasm to learn new things in this fast-moving environment.

What Working at EY Offers
At EY, we're dedicated to helping our clients, from start-ups to Fortune 500 companies – and the work we do with them is as varied as they are. You get to work on inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange.

Plus, we offer:
- Support, coaching, and feedback from some of the most engaging colleagues around
- Opportunities to develop new skills and progress your career
- The freedom and flexibility to handle your role in a way that's right for you

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people, and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform, and operate. Working across assurance, consulting, law, strategy, tax, and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
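The common thread across these requirements is building and operating ADF pipelines. As a flavor of what that looks like programmatically, here is a minimal sketch of triggering and polling an ADF pipeline run with the Azure Python SDK (azure-mgmt-datafactory); the subscription, resource group, factory, and pipeline names are hypothetical placeholders.

```python
# Minimal sketch: start an ADF pipeline run and poll it to completion.
# Requires azure-identity and azure-mgmt-datafactory; all names below
# are hypothetical.
import time

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

adf = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

run = adf.pipelines.create_run(
    resource_group_name="rg-analytics",
    factory_name="adf-gds",
    pipeline_name="pl_ingest_onprem_to_lake",
    parameters={"load_date": "2024-01-01"},
)

# Poll until ADF reports a terminal status for the run.
while True:
    status = adf.pipeline_runs.get("rg-analytics", "adf-gds", run.run_id).status
    if status in ("Succeeded", "Failed", "Cancelled"):
        break
    time.sleep(30)
print(f"Pipeline finished with status: {status}")
```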

Posted 4 days ago

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Project Description:
Our client is an EU subsidiary of a global financial bank working in multiple markets and asset classes. The bank's data store has been transformed into a data warehouse (DWH), which is the central source for regulatory reporting. It is also intended to be the core data integration platform, providing data not only for regulatory reporting but also for risk modelling, portfolio analysis, ad hoc analysis and reporting (Finance, Risk, other), MI reporting, data quality management, etc. Due to the high volume of regulatory requirements, many regulatory projects are in progress to reflect those requirements in existing regulatory reports and to develop new regulatory reports on MDS. Examples are IFRS 9, AnaCredit, IRRBB, the new Deposit Guarantee Schemes Directive (DGSD), the Bank Data Retrieval Portal (BDRP), and the Fundamental Review of the Trading Book (FRTB).

The DWH/ETL Tester will work closely with the development team to design and build interfaces and integrate data from a variety of internal and external data sources into the new enterprise data warehouse environment. The ETL Tester will be primarily responsible for testing the enterprise data warehouse using automation, within industry-recognized ETL standards, architecture, and best practices.

Responsibilities:
Testing the bank's data warehouse system changes (user stories), supporting IT integration testing in TST, and supporting business stakeholders with User Acceptance Testing. This is a hands-on position: you will be required to write and execute test cases and build test automation where applicable.

Overall Purpose of Job
- Test the MDS data warehouse system
- Validate regulatory reports
- Support IT and business stakeholders during the UAT phase
- Contribute to improvement of testing and development processes
- Work as part of a cross-functional team and take ownership of tasks
- Contribute to testing deliverables
- Ensure the implementation of test standards and best practices for the agile model and contribute to their development
- Engage with internal stakeholders in various areas of the organization to seek alignment and collaboration
- Deal with external stakeholders/vendors
- Identify risks/issues and present associated mitigating actions, taking into account the criticality of the domain of the underlying business
- Contribute to continuous improvement of standard testing processes

Additional responsibilities include working closely with the systems analysts and the application developers, utilizing functional design documentation and technical specifications to facilitate the creation and execution of manual and automated test scripts, performing data analysis and creating test data, tracking and helping resolve defects, and ensuring that all testing is conducted and documented in adherence with the bank's standards.

Mandatory Skills: Data Warehouse (DWH), ETL, Test Management

Mandatory Skills Description:
Must have experience/expertise as a tester in test automation, data warehousing, and banking.

Technical:
- At least 5 years of testing experience, of which at least 2 years in the finance industry, with good knowledge of data warehouse and RDBMS concepts.
- Strong SQL scripting knowledge and hands-on experience with ETL and databases.
- Expertise in new-age cloud-based data warehouse solutions: ADF, Snowflake, GCP, etc.
- Hands-on expertise in writing complex SQL using multiple JOINs and highly complex functions to test various transformations and ETL requirements.
- Knowledge of and experience in creating test automation for database and ETL testing regression suites.
- Automation using Selenium with Python (or JavaScript), Python scripts, and shell scripts.
- Knowledge of framework design and REST API testing of databases using Python.
- Experience using the Atlassian tool set, Azure DevOps, and code and version management tools: Git, Bitbucket, Azure Repos, etc.
- Help and provide inputs for the creation of a test plan that addresses the needs of cloud-based ETL pipelines.

Non-Technical:
- Able to work in an agile environment
- Experience working on high-priority projects (high pressure on delivery)
- Some flexibility outside 9–5 working hours (Netherlands time zone)
- Able to work in a demanding environment with a pragmatic, "can do" attitude
- Able to work independently and also to collaborate across the organization
- Highly developed problem-solving skills, applied with minimal supervision
- Able to easily adapt to new circumstances/technologies/procedures
- Stress-resistant and constructive, whatever the context
- Able to align with existing standards and act with attention to detail

Nice-to-Have Skills:
- Experience with financial regulatory reports
- Experience in test automation for data warehouses (using Bamboo)

Software skills:
- Bitbucket
- Bamboo
- Azure tech stack: Azure Data Factory
- WKFS OneSumX reporting generator
- Analytics tools such as Power BI / Excel / SSRS / SSAS, WinSCP
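As a flavor of the test automation this role describes, here is a minimal pytest sketch of a DWH reconciliation check comparing source and target row counts after an ETL load. The connections and table names are hypothetical, and SQLite stands in for the bank's source systems and MDS warehouse.

```python
# Minimal sketch of an automated ETL reconciliation test with pytest.
# Table names and in-memory databases are hypothetical stand-ins.
import sqlite3

import pytest

@pytest.fixture
def dbs():
    src, tgt = sqlite3.connect(":memory:"), sqlite3.connect(":memory:")
    src.execute("CREATE TABLE loans (id INTEGER)")
    tgt.execute("CREATE TABLE dwh_loans (id INTEGER)")
    # Simulate a completed ETL load: 100 rows in both source and target.
    src.executemany("INSERT INTO loans VALUES (?)", [(i,) for i in range(100)])
    tgt.executemany("INSERT INTO dwh_loans VALUES (?)", [(i,) for i in range(100)])
    return src, tgt

def test_row_counts_match(dbs):
    src, tgt = dbs
    n_src = src.execute("SELECT COUNT(*) FROM loans").fetchone()[0]
    n_tgt = tgt.execute("SELECT COUNT(*) FROM dwh_loans").fetchone()[0]
    assert n_src == n_tgt, f"ETL dropped rows: {n_src} source vs {n_tgt} target"
```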

Posted 4 days ago

3.0 years

0 Lacs

Kolkata, West Bengal, India

On-site


At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture, and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY GDS – Data and Analytics (D&A) – Azure Data Engineer – Senior

As part of our EY-GDS D&A (Data and Analytics) team, we help our clients solve complex business challenges with the help of data and technology. We dive deep into data to extract the greatest value and discover opportunities in key businesses and functions like Banking, Insurance, Manufacturing and Auto, Healthcare, Retail, Supply Chain, and Finance.

The opportunity
We're looking for candidates with strong technology and data understanding of the big data engineering space and proven delivery capability. This is a fantastic opportunity to be part of a leading firm as well as a growing Data and Analytics team.

Your Key Responsibilities
- Develop and deploy big data pipelines in a cloud environment using Azure Cloud services
- Handle ETL design and development, and migrate existing on-prem ETL routines to cloud services
- Interact with senior leaders, understand their business goals, and contribute to the delivery of the workstreams
- Design and optimize model code for faster execution

Skills and Attributes for Success
- Overall 3+ years of IT experience, with 2+ years of relevant experience in Azure Data Factory (ADF) and good hands-on exposure to the latest ADF version
- Hands-on experience with Azure Functions and Azure Synapse (formerly SQL Data Warehouse)
- Project experience with Azure Data Lake / Blob (for storage)
- Basic understanding of Batch account configuration and the various control options
- Sound knowledge of Databricks and Logic Apps
- Ability to coordinate independently with business stakeholders, understand the business requirements, and implement them using ADF

To qualify for the role, you must
- Be a computer science graduate or equivalent with 3–7 years of industry experience
- Have working experience in an Agile-based delivery methodology (preferable)
- Have a flexible and proactive/self-motivated working style with strong personal ownership of problem resolution
- Be an excellent communicator (written and verbal, formal and informal)
- Participate in all aspects of the Big Data solution delivery life cycle, including analysis, design, development, testing, production deployment, and support

Ideally, you'll also have
- Client management skills

What We Look For
People with technical experience and enthusiasm to learn new things in this fast-moving environment.

What Working at EY Offers
At EY, we're dedicated to helping our clients, from start-ups to Fortune 500 companies – and the work we do with them is as varied as they are. You get to work on inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange.

Plus, we offer:
- Support, coaching, and feedback from some of the most engaging colleagues around
- Opportunities to develop new skills and progress your career
- The freedom and flexibility to handle your role in a way that's right for you

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people, and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform, and operate. Working across assurance, consulting, law, strategy, tax, and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 4 days ago

3.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site


At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture, and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY GDS – Data and Analytics (D&A) – Azure Data Engineer – Senior

As part of our EY-GDS D&A (Data and Analytics) team, we help our clients solve complex business challenges with the help of data and technology. We dive deep into data to extract the greatest value and discover opportunities in key businesses and functions like Banking, Insurance, Manufacturing and Auto, Healthcare, Retail, Supply Chain, and Finance.

The opportunity
We're looking for candidates with strong technology and data understanding of the big data engineering space and proven delivery capability. This is a fantastic opportunity to be part of a leading firm as well as a growing Data and Analytics team.

Your Key Responsibilities
- Develop and deploy big data pipelines in a cloud environment using Azure Cloud services
- Handle ETL design and development, and migrate existing on-prem ETL routines to cloud services
- Interact with senior leaders, understand their business goals, and contribute to the delivery of the workstreams
- Design and optimize model code for faster execution

Skills and Attributes for Success
- Overall 3+ years of IT experience, with 2+ years of relevant experience in Azure Data Factory (ADF) and good hands-on exposure to the latest ADF version
- Hands-on experience with Azure Functions and Azure Synapse (formerly SQL Data Warehouse)
- Project experience with Azure Data Lake / Blob (for storage)
- Basic understanding of Batch account configuration and the various control options
- Sound knowledge of Databricks and Logic Apps
- Ability to coordinate independently with business stakeholders, understand the business requirements, and implement them using ADF

To qualify for the role, you must
- Be a computer science graduate or equivalent with 3–7 years of industry experience
- Have working experience in an Agile-based delivery methodology (preferable)
- Have a flexible and proactive/self-motivated working style with strong personal ownership of problem resolution
- Be an excellent communicator (written and verbal, formal and informal)
- Participate in all aspects of the Big Data solution delivery life cycle, including analysis, design, development, testing, production deployment, and support

Ideally, you'll also have
- Client management skills

What We Look For
People with technical experience and enthusiasm to learn new things in this fast-moving environment.

What Working at EY Offers
At EY, we're dedicated to helping our clients, from start-ups to Fortune 500 companies – and the work we do with them is as varied as they are. You get to work on inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange.

Plus, we offer:
- Support, coaching, and feedback from some of the most engaging colleagues around
- Opportunities to develop new skills and progress your career
- The freedom and flexibility to handle your role in a way that's right for you

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people, and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform, and operate. Working across assurance, consulting, law, strategy, tax, and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 4 days ago

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture, and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY GDS – Data and Analytics (D&A) – Azure Data Engineer – Senior

As part of our EY-GDS D&A (Data and Analytics) team, we help our clients solve complex business challenges with the help of data and technology. We dive deep into data to extract the greatest value and discover opportunities in key businesses and functions like Banking, Insurance, Manufacturing and Auto, Healthcare, Retail, Supply Chain, and Finance.

The opportunity
We're looking for candidates with strong technology and data understanding of the big data engineering space and proven delivery capability. This is a fantastic opportunity to be part of a leading firm as well as a growing Data and Analytics team.

Your Key Responsibilities
- Develop and deploy big data pipelines in a cloud environment using Azure Cloud services
- Handle ETL design and development, and migrate existing on-prem ETL routines to cloud services
- Interact with senior leaders, understand their business goals, and contribute to the delivery of the workstreams
- Design and optimize model code for faster execution

Skills and Attributes for Success
- Overall 3+ years of IT experience, with 2+ years of relevant experience in Azure Data Factory (ADF) and good hands-on exposure to the latest ADF version
- Hands-on experience with Azure Functions and Azure Synapse (formerly SQL Data Warehouse)
- Project experience with Azure Data Lake / Blob (for storage)
- Basic understanding of Batch account configuration and the various control options
- Sound knowledge of Databricks and Logic Apps
- Ability to coordinate independently with business stakeholders, understand the business requirements, and implement them using ADF

To qualify for the role, you must
- Be a computer science graduate or equivalent with 3–7 years of industry experience
- Have working experience in an Agile-based delivery methodology (preferable)
- Have a flexible and proactive/self-motivated working style with strong personal ownership of problem resolution
- Be an excellent communicator (written and verbal, formal and informal)
- Participate in all aspects of the Big Data solution delivery life cycle, including analysis, design, development, testing, production deployment, and support

Ideally, you'll also have
- Client management skills

What We Look For
People with technical experience and enthusiasm to learn new things in this fast-moving environment.

What Working at EY Offers
At EY, we're dedicated to helping our clients, from start-ups to Fortune 500 companies – and the work we do with them is as varied as they are. You get to work on inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange.

Plus, we offer:
- Support, coaching, and feedback from some of the most engaging colleagues around
- Opportunities to develop new skills and progress your career
- The freedom and flexibility to handle your role in a way that's right for you

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people, and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform, and operate. Working across assurance, consulting, law, strategy, tax, and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 4 days ago

5.0 years

0 Lacs

Trivandrum, Kerala, India

Remote


Role: Senior Data Engineer with Databricks
Experience: 5+ years
Job Type: Contract
Contract Duration: 6 months
Budget: 1.0 lakh per month
Location: Remote

JOB DESCRIPTION:
We are looking for a dynamic and experienced Senior Data Engineer – Databricks to design, build, and optimize robust data pipelines using the Databricks Lakehouse platform. The ideal candidate should have strong hands-on skills in Apache Spark, PySpark, cloud data services, and a good grasp of Python and Java. This role involves close collaboration with architects, analysts, and developers to deliver scalable and high-performing data solutions across AWS, Azure, and GCP.

ESSENTIAL JOB FUNCTIONS

1. Data Pipeline Development
• Build scalable and efficient ETL/ELT workflows using Databricks and Spark for both batch and streaming data.
• Leverage Delta Lake and Unity Catalog for structured data management and governance.
• Optimize Spark jobs by tuning configurations, caching, partitioning, and serialization techniques.

2. Cloud-Based Implementation
• Develop and deploy data workflows on AWS (S3, EMR, Glue), Azure (ADLS, ADF, Synapse), and/or GCP (GCS, Dataflow, BigQuery).
• Manage and optimize data storage, access control, and pipeline orchestration using native cloud tools.
• Use tools like Databricks Auto Loader and SQL Warehousing for efficient data ingestion and querying.

3. Programming & Automation
• Write clean, reusable, and production-grade code in Python and Java.
• Automate workflows using orchestration tools (e.g., Airflow, ADF, or Cloud Composer).
• Implement robust testing, logging, and monitoring mechanisms for data pipelines.

4. Collaboration & Support
• Collaborate with data analysts, data scientists, and business users to meet evolving data needs.
• Support production workflows, troubleshoot failures, and resolve performance bottlenecks.
• Document solutions, maintain version control, and follow Agile/Scrum processes.

Required Skills

Technical Skills:
• Databricks: Hands-on experience with notebooks, cluster management, Delta Lake, Unity Catalog, and job orchestration.
• Spark: Expertise in Spark transformations, joins, window functions, and performance tuning.
• Programming: Strong in PySpark and Java, with experience in data validation and error handling.
• Cloud Services: Good understanding of AWS, Azure, or GCP data services and security models.
• DevOps/Tools: Familiarity with Git, CI/CD, Docker (preferred), and data monitoring tools.

Experience:
• 5–8 years of data engineering or backend development experience.
• Minimum 1–2 years of hands-on work in Databricks with Spark.
• Exposure to large-scale data migration, processing, or analytics projects.

Certifications (nice to have): Databricks Certified Data Engineer Associate

Working Conditions
• Hours of work: full-time hours, with flexibility for remote work while ensuring availability during US timings.
• Overtime expectations: overtime may not be required as long as the commitment is accomplished.
• Work environment: primarily remote; occasional on-site work may be needed only during client visits.
• Travel requirements: no travel required.
• On-call responsibilities: on-call duties during deployment phases.
• Special conditions or requirements: not applicable.

Workplace Policies and Agreements
• Confidentiality Agreement: required to safeguard sensitive client data.
• Non-Compete Agreement: must be signed to ensure proprietary model security.
• Non-Disclosure Agreement: must be signed to ensure client confidentiality and security.
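Since the posting highlights Databricks Auto Loader for ingestion, here is a minimal sketch of an Auto Loader stream writing to a Delta table. The paths and table name are hypothetical, and the cloudFiles source is Databricks-specific, so this runs only on a Databricks cluster.

```python
# Minimal sketch of Databricks Auto Loader: incrementally ingest new JSON
# files into a Delta table. Paths and the table name are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

stream = (
    spark.readStream.format("cloudFiles")          # Auto Loader source
    .option("cloudFiles.format", "json")           # incoming file format
    .option("cloudFiles.schemaLocation", "/mnt/lake/_schemas/events")
    .load("/mnt/lake/raw/events")
)

(
    stream.writeStream
    .option("checkpointLocation", "/mnt/lake/_checkpoints/events")
    .trigger(availableNow=True)                    # process new files, then stop
    .toTable("bronze.events")                      # write to a Delta table
)
```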

Posted 4 days ago

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site


At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY GDS – Data and Analytics (D&A) – Azure Data Engineer – Senior
As part of our EY-GDS D&A (Data and Analytics) team, we help our clients solve complex business challenges with the help of data and technology. We dive deep into data to extract the greatest value and discover opportunities in key businesses and functions like Banking, Insurance, Manufacturing and Auto, Healthcare, Retail, Supply Chain, and Finance.

The Opportunity
We're looking for candidates with strong technology and data understanding in the big data engineering space and proven delivery capability. This is a fantastic opportunity to be part of a leading firm and a growing Data and Analytics team.

Your Key Responsibilities
Develop and deploy big data pipelines in a cloud environment using Azure cloud services
Design, develop, and migrate existing on-prem ETL routines to cloud services
Interact with senior leaders, understand their business goals, and contribute to the delivery of the workstreams
Design and optimize model code for faster execution

Skills and Attributes for Success
Overall 3+ years of IT experience, with 2+ years of relevant experience in Azure Data Factory (ADF) and hands-on exposure to the latest ADF version
Hands-on experience with Azure Functions and Azure Synapse (formerly SQL Data Warehouse)
Project experience with Azure Data Lake / Blob (for storage)
Basic understanding of Batch account configuration and its various control options
Sound knowledge of Databricks and Logic Apps
Able to coordinate independently with business stakeholders, understand the business requirements, and implement them using ADF

To qualify for the role, you must
Be a computer science graduate or equivalent with 3-7 years of industry experience
Have working experience in an Agile-based delivery methodology (preferable)
Have a flexible, proactive, self-motivated working style with strong personal ownership of problem resolution
Be an excellent communicator (written and verbal, formal and informal)
Participate in all aspects of the big data solution delivery life cycle, including analysis, design, development, testing, production deployment, and support

Ideally, you'll also have
Client management skills

What We Look For
People with technical experience and enthusiasm to learn new things in this fast-moving environment

What Working at EY Offers
At EY, we're dedicated to helping our clients, from start-ups to Fortune 500 companies, and the work we do with them is as varied as they are. You get to work on inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange.

Plus, we offer:
Support, coaching and feedback from some of the most engaging colleagues around
Opportunities to develop new skills and progress your career
The freedom and flexibility to handle your role in a way that's right for you

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
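As a hedged illustration of the ADF work this listing describes, the sketch below triggers and polls a pipeline run with the Azure Python SDK (azure-mgmt-datafactory). The subscription, resource group, factory, and pipeline names are made-up placeholders.

    # Minimal sketch (assumed names): trigger an ADF pipeline run and poll its status.
    import time

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.datafactory import DataFactoryManagementClient

    credential = DefaultAzureCredential()
    client = DataFactoryManagementClient(credential, "<subscription-id>")

    run = client.pipelines.create_run(
        resource_group_name="rg-data",          # hypothetical
        factory_name="adf-etl",                 # hypothetical
        pipeline_name="pl_copy_onprem_to_lake", # hypothetical
        parameters={"load_date": "2024-01-01"},
    )

    while True:
        status = client.pipeline_runs.get("rg-data", "adf-etl", run.run_id).status
        if status not in ("Queued", "InProgress"):
            break
        time.sleep(30)
    print("Pipeline finished with status:", status)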

Posted 5 days ago

Apply

3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY GDS – Data and Analytics (D&A) – Azure Data Engineer – Senior
As part of our EY-GDS D&A (Data and Analytics) team, we help our clients solve complex business challenges with the help of data and technology. We dive deep into data to extract the greatest value and discover opportunities in key businesses and functions like Banking, Insurance, Manufacturing and Auto, Healthcare, Retail, Supply Chain, and Finance.

The Opportunity
We're looking for candidates with strong technology and data understanding in the big data engineering space and proven delivery capability. This is a fantastic opportunity to be part of a leading firm and a growing Data and Analytics team.

Your Key Responsibilities
Develop and deploy big data pipelines in a cloud environment using Azure cloud services
Design, develop, and migrate existing on-prem ETL routines to cloud services
Interact with senior leaders, understand their business goals, and contribute to the delivery of the workstreams
Design and optimize model code for faster execution

Skills and Attributes for Success
Overall 3+ years of IT experience, with 2+ years of relevant experience in Azure Data Factory (ADF) and hands-on exposure to the latest ADF version
Hands-on experience with Azure Functions and Azure Synapse (formerly SQL Data Warehouse)
Project experience with Azure Data Lake / Blob (for storage)
Basic understanding of Batch account configuration and its various control options
Sound knowledge of Databricks and Logic Apps
Able to coordinate independently with business stakeholders, understand the business requirements, and implement them using ADF

To qualify for the role, you must
Be a computer science graduate or equivalent with 3-7 years of industry experience
Have working experience in an Agile-based delivery methodology (preferable)
Have a flexible, proactive, self-motivated working style with strong personal ownership of problem resolution
Be an excellent communicator (written and verbal, formal and informal)
Participate in all aspects of the big data solution delivery life cycle, including analysis, design, development, testing, production deployment, and support

Ideally, you'll also have
Client management skills

What We Look For
People with technical experience and enthusiasm to learn new things in this fast-moving environment

What Working at EY Offers
At EY, we're dedicated to helping our clients, from start-ups to Fortune 500 companies, and the work we do with them is as varied as they are. You get to work on inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange.

Plus, we offer:
Support, coaching and feedback from some of the most engaging colleagues around
Opportunities to develop new skills and progress your career
The freedom and flexibility to handle your role in a way that's right for you

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 5 days ago

Apply

8.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site


Technical Skills:
8+ years of hands-on experience in SQL development, query optimization, and performance tuning
Expertise in ETL tools (SSIS, Azure Data Factory, Databricks, Snowflake, or similar) and relational databases (SQL Server, PostgreSQL, MySQL, Oracle)
Strong understanding of data warehousing concepts, data modeling, indexing strategies, and query execution plans
Proficiency in writing efficient stored procedures, views, triggers, and functions for large datasets
Experience working with structured and semi-structured data (CSV, JSON, XML, Parquet)
Hands-on experience in data validation, cleansing, and reconciliation to maintain high data quality
Exposure to real-time and batch data processing techniques

Nice to have:
Experience with Azure or other data engineering stacks (ADF, Azure SQL, Synapse, Databricks, Snowflake), Python, Spark, NoSQL databases, and reporting tools like Power BI or Tableau
Strong problem-solving skills and the ability to troubleshoot ETL failures and performance issues
Ability to collaborate with business and analytics teams to understand and implement data requirements
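The reconciliation work this listing mentions can be as simple as comparing row counts and aggregate checksums between a source extract and its target load. Below is a minimal, hedged PySpark sketch; the path, table name, and "amount" column are hypothetical.

    # Minimal sketch (assumed tables): reconcile a source extract against its
    # target load by comparing row counts and a sum over a numeric column.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.getOrCreate()

    source = spark.read.parquet("/mnt/staging/invoices/")   # hypothetical path
    target = spark.table("dw.fact_invoices")                # hypothetical table

    src = source.agg(F.count("*").alias("rows"), F.sum("amount").alias("total")).first()
    tgt = target.agg(F.count("*").alias("rows"), F.sum("amount").alias("total")).first()

    if (src["rows"], src["total"]) != (tgt["rows"], tgt["total"]):
        # In a real pipeline this would raise an alert or fail the job run.
        raise ValueError(f"Reconciliation failed: source={src}, target={tgt}")
    print("Reconciliation passed:", src)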

Posted 5 days ago

Apply

0 years

0 Lacs

Bengaluru East, Karnataka, India

On-site


Technology->Cloud Integration->Azure Data Factory (ADF)
Technology->Cloud Platform->Azure Analytics Services->Azure Data Lake
Technology->Cloud Platform->Power Platform
Technology->IOT Platform->AWS IOT

A day in the life of an Infoscion
As part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance, issue resolution and ensuring high customer satisfaction. You will understand requirements, create and review designs, validate the architecture and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, perform code reviews and unit test plan reviews. You will lead and guide your teams towards developing optimized, high-quality code deliverables, continual knowledge management and adherence to organizational guidelines and processes. You would be a key contributor to building efficient programs and systems. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

Knowledge of more than one technology
Basics of architecture and design fundamentals
Knowledge of testing tools
Knowledge of agile methodologies
Understanding of project life cycle activities on development and maintenance projects
Understanding of one or more estimation methodologies and knowledge of quality processes
Basics of the business domain to understand the business requirements
Analytical abilities, strong technical skills, good communication skills
Good understanding of the technology and domain
Ability to demonstrate a sound understanding of software quality assurance principles, SOLID design principles and modelling methods
Awareness of latest technologies and trends
Excellent problem solving, analytical and debugging skills

Posted 5 days ago

Apply

3.0 years

3 - 7 Lacs

Hyderābād

On-site

Engineer - Oracle Fusion Tech/OIC
Should have at least 3 years of experience in implementing Oracle ERP and OIC projects
Proven experience with Oracle Integration Cloud (OIC) and expertise in Oracle Cloud services
Strong knowledge of integration technologies, protocols, and standards (REST, SOAP, JSON, XML, etc.)
Must work with PL/SQL and web services (SOAP and REST)
Development experience with SOAP and REST services using JSON- or XML-based integrations
Experience working with OIC adapters, including the ERP adapter, SOAP, Oracle DB, FTP, REST, etc.
Being hands-on is a critical requirement
Should have experience in developing BI and OTBI reports
Should be capable of providing optimal solutions and building both inbound and outbound integrations: SOAP and REST web services, FBDI, event-based, and ADF DI
Should be able to develop reports and OIC services based on the requirements/design documents
Should have experience in building integrations with Oracle SaaS applications
Follow the project through to successful adoption of the solution
Good understanding of Oracle PaaS architecture and security concepts, with working experience in ATP and object storage
Understanding of Finance module flows is necessary
Comfortable with XML and XSLT processing
Hands-on experience with web service testing tools such as SoapUI and Postman
Bachelor's/Master's degree in Computer Science or Software Engineering
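To make the REST/JSON integration work concrete, here is a small hedged Python sketch that invokes a hypothetical REST integration exposed through OIC, much as one would when testing with Postman or SoapUI. The URL, credentials, flow name, and payload are illustrative assumptions, not taken from the listing.

    # Minimal sketch (assumed endpoint/payload): call a REST integration exposed
    # by OIC and inspect the JSON response.
    import requests

    BASE_URL = "https://example-oic.integration.ocp.oraclecloud.com"  # hypothetical host
    endpoint = f"{BASE_URL}/ic/api/integration/v1/flows/rest/INVOICE_SYNC/1.0/invoices"

    payload = {"invoiceNumber": "INV-1001", "amount": 2500.00, "currency": "USD"}

    response = requests.post(
        endpoint,
        json=payload,
        auth=("integration.user", "password"),  # placeholder basic-auth credentials
        timeout=30,
    )
    response.raise_for_status()
    print(response.json())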

Posted 5 days ago

Apply

2.0 years

0 Lacs

Hyderābād

On-site

Overview:
The Data Science team develops Machine Learning (ML) and Artificial Intelligence (AI) projects. The specific scope of this role is to develop ML solutions in support of ML/AI projects using big analytics toolsets in a CI/CD environment. Analytics toolsets may include DS tools, Spark, Databricks, and other technologies offered by Microsoft Azure or open-source toolsets. This role will also help automate the end-to-end cycle with Azure Pipelines. You will be part of a collaborative, interdisciplinary team around data, where you will be responsible for our continuous delivery of statistical/ML models. You will work closely with process owners, product owners and final business users. This will give you the right visibility into, and understanding of, the criticality of your developments.

Responsibilities:
Delivery of key Advanced Analytics/Data Science projects within time and budget, particularly around DevOps/MLOps and the Machine Learning models in scope
Active contributor to code and development in projects and services
Partner with data engineers to ensure data access for discovery and that proper data is prepared for model consumption
Partner with ML engineers working on industrialization
Communicate with business stakeholders during service design, training and knowledge transfer
Support large-scale experimentation and build data-driven models
Refine requirements into modelling problems
Influence product teams through data-based recommendations
Research state-of-the-art methodologies
Create documentation for learnings and knowledge transfer
Create reusable packages or libraries
Ensure on-time and on-budget delivery that satisfies project requirements while adhering to enterprise architecture standards
Leverage big data technologies to help process data and build scaled data pipelines (batch to real time)
Implement the end-to-end ML lifecycle with Azure Databricks and Azure Pipelines
Automate ML model deployments

Qualifications:
BE/B.Tech in Computer Science, Maths, or related technical fields
Overall 2-4 years of experience working as a Data Scientist
2+ years' experience building solutions in the commercial or supply chain space
2+ years working in a team to deliver production-level analytic solutions
Fluent in Git (version control); understanding of Jenkins and Docker is a plus
Fluent in SQL syntax
2+ years' experience in statistical/ML techniques to solve supervised (regression, classification) and unsupervised problems
2+ years' experience developing statistical/ML models for business problems with industry tools, with a primary focus on Python or PySpark development
Data Science: hands-on experience and strong knowledge of building supervised and unsupervised machine learning models; knowledge of time series/demand forecast models is a plus
Programming skills: hands-on experience in statistical programming languages like Python and PySpark, and database query languages like SQL
Statistics: good applied statistical skills, including knowledge of statistical tests, distributions, regression, and maximum likelihood estimators
Cloud (Azure): experience in Databricks and ADF is desirable
Familiarity with Spark, Hive, and Pig is an added advantage
Business storytelling and communicating data insights in a business-consumable format; fluent in one visualization tool
Strong communication and organizational skills, with the ability to deal with ambiguity while juggling multiple priorities
Experience with Agile methodology for teamwork and analytics "product" creation
Experience in Reinforcement Learning is a plus
Experience in simulation and optimization problems in any space is a plus
Experience with Bayesian methods is a plus
Experience with causal inference is a plus
Experience with NLP is a plus
Experience with Responsible AI is a plus
Experience with distributed machine learning is a plus
Experience in DevOps, with hands-on experience in one or more cloud service providers: AWS, GCP, Azure (preferred)
Model deployment experience is a plus
Experience with version control systems like GitHub and CI/CD tools
Experience in exploratory data analysis
Knowledge of MLOps/DevOps and deploying ML models is preferred
Experience using MLflow, Kubeflow, etc. is preferred
Experience executing and contributing to MLOps automation infrastructure is good to have
Exceptional analytical and problem-solving skills
Stakeholder engagement: BU, vendors
Experience building statistical models in the retail or supply chain space is a plus
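Since the role highlights the end-to-end ML lifecycle and names MLflow, below is a minimal hedged sketch of logging a scikit-learn model with MLflow tracking; the experiment name and synthetic data are illustrative only.

    # Minimal sketch (assumed experiment/data): train a simple classifier and
    # record parameters, a metric, and the model artifact with MLflow tracking.
    import mlflow
    import mlflow.sklearn
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=1000, n_features=10, random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    mlflow.set_experiment("demand-forecast-poc")  # hypothetical experiment name
    with mlflow.start_run():
        model = LogisticRegression(C=0.5, max_iter=500).fit(X_train, y_train)
        acc = accuracy_score(y_test, model.predict(X_test))
        mlflow.log_param("C", 0.5)
        mlflow.log_metric("accuracy", acc)
        mlflow.sklearn.log_model(model, "model")  # artifact retrievable at deploy time

Logged runs like this are what a CI/CD pipeline (e.g., Azure Pipelines) can later promote to a registry and deploy, which is the automation this role centers on.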

Posted 5 days ago

Apply

5.0 years

0 Lacs

Trivandrum, Kerala, India

On-site


Job Family: Data Science & Analysis (India)
Travel Required: None
Clearance Required: None

What You Will Do
Design, develop, and maintain robust, scalable, and efficient data pipelines and ETL/ELT processes
Lead and execute data engineering projects from inception to completion, ensuring timely delivery and high quality
Build and optimize data architectures for operational and analytical purposes
Collaborate with cross-functional teams to gather and define data requirements
Implement data quality, data governance, and data security practices
Manage and optimize cloud-based data platforms (Azure/AWS)
Develop and maintain Python/PySpark libraries for data ingestion, processing and integration with both internal and external data sources
Design and optimize scalable data pipelines using Azure Data Factory and Spark (Databricks)
Work with stakeholders, including the executive, product, data and design teams, to assist with data-related technical issues and support their data infrastructure needs
Develop frameworks for data ingestion, transformation, and validation
Mentor junior data engineers and guide best practices in data engineering
Evaluate and integrate new technologies and tools to improve data infrastructure
Ensure compliance with data privacy regulations (HIPAA, etc.)
Monitor performance and troubleshoot issues across the data ecosystem
Automate deployment of data pipelines using GitHub Actions / Azure DevOps

What You Will Need
Bachelor's or Master's degree in Computer Science, Information Systems, Statistics, Math, Engineering, or a related discipline
Minimum 5+ years of solid hands-on experience in data engineering and cloud services
Extensive working experience with advanced SQL and a deep understanding of SQL
Good experience in Azure Data Factory (ADF), Databricks, Python and PySpark
Good experience with modern data storage concepts: data lake, lakehouse
Experience with other cloud services (AWS) and data processing technologies is an added advantage
Ability to enhance and develop ETL processes, and resolve defects in them, using cloud services
Experience handling large volumes (multiple terabytes) of incoming data from clients and third-party sources in various formats such as text, CSV, EDI X12 files and Access databases
Experience with software development methodologies (Agile, Waterfall) and version control tools
Highly motivated, strong problem solver, self-starter, and fast learner with demonstrated analytic and quantitative skills
Good communication skills

What Would Be Nice To Have
AWS ETL platform: Glue, S3
One or more programming languages such as Java or .NET
Experience in the US healthcare domain and insurance claim processing

What We Offer
Guidehouse offers a comprehensive, total rewards package that includes competitive compensation and a flexible benefits package that reflects our commitment to creating a diverse and supportive workplace.

About Guidehouse
Guidehouse is an Equal Opportunity Employer: Protected Veterans, Individuals with Disabilities, or any other basis protected by law, ordinance, or regulation. Guidehouse will consider for employment qualified applicants with criminal histories in a manner consistent with the requirements of applicable law or ordinance, including the Fair Chance Ordinance of Los Angeles and San Francisco.
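Because this role pairs PySpark ingestion libraries with privacy regulations such as HIPAA, here is a small hedged sketch of a reusable masking step applied before data lands in the lake; the file path, column names, and choice of SHA-256 hashing are illustrative assumptions.

    # Minimal sketch (assumed columns): hash direct identifiers before persisting,
    # a common pattern when pipelines must respect privacy rules such as HIPAA.
    from typing import List

    from pyspark.sql import DataFrame, SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.getOrCreate()

    def mask_identifiers(df: DataFrame, columns: List[str]) -> DataFrame:
        """Replace each identifier column with a SHA-256 digest of its value."""
        for col in columns:
            df = df.withColumn(col, F.sha2(F.col(col).cast("string"), 256))
        return df

    claims = spark.read.csv("/mnt/landing/claims.csv", header=True)  # hypothetical feed
    masked = mask_identifiers(claims, ["member_id", "ssn"])          # assumed columns
    masked.write.mode("append").parquet("/mnt/lake/claims/")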
If you have visited our website for information about employment opportunities, or to apply for a position, and you require an accommodation, please contact Guidehouse Recruiting at 1-571-633-1711 or via email at RecruitingAccommodation@guidehouse.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodation.

All communication regarding recruitment for a Guidehouse position will be sent from Guidehouse email domains, including @guidehouse.com or guidehouse@myworkday.com. Correspondence received by an applicant from any other domain should be considered unauthorized and will not be honored by Guidehouse. Note that Guidehouse will never charge a fee or require a money transfer at any stage of the recruitment process and does not collect fees from educational institutions for participation in a recruitment event. Never provide your banking information to a third party purporting to need that information to proceed in the hiring process.

If any person or organization demands money related to a job opportunity with Guidehouse, please report the matter to Guidehouse's Ethics Hotline. If you want to check the validity of correspondence you have received, please contact recruiting@guidehouse.com. Guidehouse is not responsible for losses incurred (monetary or otherwise) from an applicant's dealings with unauthorized third parties. Guidehouse does not accept unsolicited resumes through or from search firms or staffing agencies. All unsolicited resumes will be considered the property of Guidehouse, and Guidehouse will not be obligated to pay a placement fee.

Posted 5 days ago

Apply

10.0 - 15.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site


About the company:
With over 2.5 crore customers, over 5,000 distribution points and nearly 2,000 branches, IndusInd Bank is a universal bank with a widespread banking footprint across the country. IndusInd offers a wide array of products and services for individuals and corporates, including microfinance, personal loans, personal and commercial vehicle loans, credit cards and SME loans. Over the years, IndusInd has grown ceaselessly and dynamically, driven by the zeal to offer our customers banking services at par with the highest quality standards in the industry. IndusInd is a pioneer in digital-first solutions that bring together the power of a next-gen digital product stack, customer excellence and the trust of an established bank.

Job Purpose:
Implement data modeling solutions
Design data flows and structures to reduce data redundancy and improve data movement among systems, defining data lineage
Work in the Azure data warehouse
Work with large volumes of data integration

Experience
With overall experience between 10 and 15 years, the applicant must have a minimum of 8 to 11 years of core professional experience in data modeling for a large data warehouse with multiple sources.

Technical Skills
Expertise in core data modeling principles/methods, including conceptual, logical and physical data models
Ability to use BI tools like Power BI, Tableau, etc. to represent insights
Experience in translating/mapping relational data models into XML and schemas
Expert knowledge of metadata management and of relational and data modeling tools like ER/Studio, Erwin or others
Hands-on relational, dimensional and/or analytical experience (using RDBMS, dimensional, NoSQL, ETL and data ingestion protocols)
Very strong SQL query skills, with expertise in performance tuning of SQL queries
Ability to analyse source systems and create source-to-target mappings
Ability to understand the business use case and create data models or joined data in the data warehouse
Preferred experience in the banking domain, including building data models/marts for various banking functions
Good to have knowledge of:
- Azure PowerShell scripting or Python scripting for data transformation in ADF
- SSIS, SSAS, and BI tools like Power BI
- Azure PaaS components like Azure Data Factory, Azure Databricks, Azure Data Lake, Azure Synapse (DWH), PolyBase, ExpressRoute tunneling, etc.
- API integration

Responsibility
Understand the existing data model, existing data warehouse design and functional domain subject areas of data, documenting the as-is architecture and the proposed one
Understand the existing ETL process and its various sources, analyzing and documenting the best approach to designing the logical data model where required
Work with the development team to implement the proposed data model as a physical data model and build data flows
Work with the development team to optimize the database structure using best-practice optimization methods
Analyze, document and implement reuse of the data model for new initiatives
Interact with stakeholders, users and other IT teams to understand the ecosystem and analyze it for solutions
Work on user requirements and create queries for building consumption views for users from the existing DW data
Train and lead a small team of data engineers

Qualifications
Bachelor's in Computer Science or equivalent
Should hold certifications in data modeling and data analysis
Good to have Azure Fundamentals and Azure Engineer certifications (AZ-900 or DP-200/201)

Behavioral Competencies
Excellent problem-solving and time management skills
Strong analytical thinking skills
Excellent communication skills; process-oriented with a flexible execution mindset
Strategic thinking with a research and development mindset
Clear and demonstrative communication
Efficiently identifies and solves issues
Identifies, tracks and escalates risks in a timely manner

Selection Process:
Interested candidates are required to apply through the listing on Jigya. Only applications received through Jigya will be evaluated further. Shortlisted candidates may need to appear for an online assessment and/or a technical screening interview administered by Jigya on behalf of IndusInd Bank. Candidates selected after the screening rounds will be processed further by IndusInd Bank.
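As a hedged illustration of the source-to-target mapping and dimensional-model work this listing describes, the sketch below derives a small fact table from a raw transactions source in PySpark; every table and column name is invented for the example.

    # Minimal sketch (assumed schema): map a raw source onto a fact table by
    # conforming column names and resolving a surrogate key from a dimension.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.getOrCreate()

    txns = spark.table("staging.txns")           # hypothetical source
    dim_account = spark.table("dw.dim_account")  # hypothetical dimension with account_sk

    fact_txns = (
        txns.select(
            F.col("txn_id").alias("transaction_id"),  # source-to-target rename
            F.col("acct_no"),
            F.to_date("txn_ts").alias("txn_date"),
            F.col("amt").alias("amount_inr"),
        )
        .join(dim_account, on="acct_no", how="left")   # surrogate-key lookup
        .select("transaction_id", "account_sk", "txn_date", "amount_inr")
    )
    fact_txns.write.mode("overwrite").saveAsTable("dw.fact_transactions")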

Posted 5 days ago

Apply

10.0 - 14.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Hiring: CRM Lead Consultant – Microsoft Dynamics 365 CE/CRM
Looking for an experienced CRM Lead Consultant to serve as a technical SME and administrator for the Microsoft Dynamics 365 CE/CRM platform. This role is ideal for a highly skilled professional with deep experience in Dynamics customization, integration, reporting, and solution management.

🔧 What You'll Do
Lead development and maintenance of the Dynamics CRM platform
Collaborate with business users to gather requirements and architect CRM solutions
Build forms, views, dashboards, plugins, workflows, and reports
Develop solutions using PowerApps, Azure Data Factory, and automation tools
Perform solution deployments and manage GitHub source control
Troubleshoot issues and support application performance

✅ What We're Looking For
10-14 years of experience in Microsoft Dynamics 365 CE/CRM
Proficiency in JavaScript, C#, .NET, SQL Server, MVC, FetchXML, and REST/OData
Hands-on experience with Azure services (ADF, SSIS, DevOps pipelines)
Strong knowledge of the CRM SDK, security models, and GitHub
Bachelor's degree in Computer Science or a related STEM field

⭐ Bonus Points
Microsoft Dynamics 365 certifications
Familiarity with O365 tools (SharePoint, Mobile), Azure SQL, and Data Export Service
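For a flavor of the REST/OData work this role lists, here is a minimal hedged Python sketch querying the Dynamics 365 Web API; the org URL and bearer token are placeholders (in practice the token would be acquired via Azure AD/MSAL), and the entity attributes shown are standard ones, not verified against any specific org.

    # Minimal sketch (assumed org URL and token): query active accounts via the
    # Dynamics 365 Web API using OData $select/$filter query options.
    import requests

    ORG_URL = "https://yourorg.crm.dynamics.com"  # hypothetical org
    TOKEN = "<bearer-token-from-azure-ad>"        # placeholder credential

    resp = requests.get(
        f"{ORG_URL}/api/data/v9.2/accounts",
        params={"$select": "name,accountnumber", "$filter": "statecode eq 0", "$top": "10"},
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "OData-MaxVersion": "4.0",
            "Accept": "application/json",
        },
        timeout=30,
    )
    resp.raise_for_status()
    for account in resp.json().get("value", []):
        print(account["name"], account.get("accountnumber"))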

Posted 5 days ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


About tsworks:
tsworks is a leading technology innovator, providing transformative products and services designed for the digital-first world. Our mission is to provide domain expertise, innovative solutions and thought leadership to drive exceptional user and customer experiences. Demonstrating this commitment, we have a proven track record of championing digital transformation for industries such as Banking, Travel and Hospitality, and Retail (including e-commerce and omnichannel), as well as Distribution and Supply Chain, delivering impactful solutions that drive efficiency and growth. We take pride in fostering a workplace where your skills, ideas, and attitude shape meaningful customer engagements.

About This Role:
tsworks Technologies India Private Limited is seeking driven and motivated Senior Data Engineers to join its Digital Services Team. You will get hands-on experience with projects employing industry-leading technologies. The work would initially focus on the operational readiness and maintenance of existing applications and would transition into a build-and-maintain role in the long run.

Requirements
Position: Data Engineer II
Experience: 3 to 10+ years
Location: Bangalore, India

Mandatory Required Qualifications
Strong proficiency in Azure services such as Azure Data Factory, Azure Databricks, Azure Synapse Analytics, Azure Storage, etc.
Expertise in DevOps and CI/CD implementation
Good knowledge of SQL
Excellent communication skills

In This Role, You Will
Design, implement, and manage scalable and efficient data architecture on the Azure cloud platform
Develop and maintain data pipelines for efficient data extraction, transformation, and loading (ETL) processes
Perform complex data transformations and processing using Azure Data Factory, Azure Databricks, Snowflake's data processing capabilities, or other relevant tools
Develop and maintain data models within Snowflake and related tools to support reporting, analytics, and business intelligence needs
Collaborate with cross-functional teams to understand data requirements and design appropriate data integration solutions
Integrate data from various sources, both internal and external, ensuring data quality and consistency
Ensure data models are designed for scalability, reusability, and flexibility
Implement data quality checks, validations, and monitoring processes to ensure data accuracy and integrity across Azure and Snowflake environments
Adhere to data governance standards and best practices to maintain data security and compliance
Handle performance optimization on the ADF and Snowflake platforms
Collaborate with data scientists, analysts, and business stakeholders to understand data needs and deliver actionable insights
Provide guidance and mentorship to junior team members to enhance their technical skills
Maintain comprehensive documentation for data pipelines, processes, and architecture within both Azure and Snowflake environments, including best practices, standards, and procedures

Skills & Knowledge
Bachelor's or Master's degree in Computer Science, Engineering, or a related field
3+ years of experience in Information Technology, designing, developing and executing solutions
3+ years of hands-on experience designing and executing data solutions on Azure cloud platforms as a Data Engineer
Strong proficiency in Azure services such as Azure Data Factory, Azure Databricks, Azure Synapse Analytics, Azure Storage, etc.
Familiarity with the Snowflake data platform would be an added advantage
Hands-on experience in data modelling and in batch and real-time pipelines, using Python, Java or JavaScript, and experience working with RESTful APIs
Expertise in DevOps and CI/CD implementation
Hands-on experience with SQL and NoSQL databases
Hands-on experience in data modelling, implementation, and management of OLTP and OLAP systems
Experience with data modelling concepts and practices
Familiarity with data quality, governance, and security best practices
Knowledge of big data technologies such as Hadoop, Spark, or Kafka
Familiarity with machine learning concepts and the integration of ML pipelines into data workflows
Hands-on experience working in an Agile setting
Self-driven, naturally curious, and able to adapt to a fast-paced work environment
Able to articulate, create, and maintain technical and non-technical documentation
Public cloud certifications are desired
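The data-quality checks called out above can be expressed as simple assertions over a DataFrame before publishing. A minimal hedged PySpark sketch follows; the table names, key column, and fail-fast policy are invented for illustration.

    # Minimal sketch (assumed table/columns): basic data-quality gates checking
    # for null keys and duplicate business keys before publishing a table.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.getOrCreate()
    df = spark.table("staging.customers")  # hypothetical input

    null_keys = df.filter(F.col("customer_id").isNull()).count()
    dupes = df.groupBy("customer_id").count().filter(F.col("count") > 1).count()

    # Fail fast so bad data never reaches the curated zone.
    if null_keys > 0 or dupes > 0:
        raise ValueError(f"Quality gate failed: {null_keys} null keys, {dupes} duplicate keys")

    df.write.mode("overwrite").saveAsTable("curated.customers")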

Posted 5 days ago

Apply

6.0 years

0 Lacs

India

Remote


Job Title: Senior Data Engineer
Experience: 6+ Years
Location: Remote
Employment Type: Full Time

Job Summary:
We are seeking a highly skilled and experienced Senior Data Engineer to join our dynamic data engineering team. The ideal candidate will have deep expertise in C#, Azure Data Factory (ADF), Databricks, SQL Server, and Python, along with a strong understanding of modern CI/CD practices. You will be responsible for designing, developing, and maintaining scalable and efficient data pipelines and solutions to support analytics, reporting, and operational systems.

Key Responsibilities:
Design, develop, and optimize complex data pipelines using Azure Data Factory, Databricks, and SQL Server.
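Tying the Databricks stack above to the listing's CI/CD emphasis, here is a small hedged sketch that starts a Databricks job run through the Jobs 2.1 REST API, as a release-pipeline step might; the workspace URL, token, job ID, and parameters are placeholders.

    # Minimal sketch (assumed workspace/job): start a Databricks job via the
    # Jobs 2.1 REST API, a typical step in a CI/CD release pipeline.
    import requests

    HOST = "https://adb-1234567890.12.azuredatabricks.net"  # hypothetical workspace
    TOKEN = "<databricks-pat>"                               # placeholder token
    JOB_ID = 42                                              # hypothetical job id

    resp = requests.post(
        f"{HOST}/api/2.1/jobs/run-now",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"job_id": JOB_ID, "notebook_params": {"env": "prod"}},
        timeout=30,
    )
    resp.raise_for_status()
    print("Started run:", resp.json()["run_id"])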

Posted 5 days ago

Apply