
1129 Data Extraction Jobs - Page 46

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

4 - 6 years

10 - 14 Lacs

Hyderabad

Work from Office

Are you interested in building the next-generation, cloud-based, highly scalable analytics platform that can handle hundreds of billions of dollars in transactions? Our challenge is to deliver software systems that can accurately capture, process, and report on the enormous volume of financial transactions generated every day as millions of customers make purchases, thousands of vendors and partners are paid, inventory moves in and out of warehouses, commissions are calculated, and taxes are collected in hundreds of jurisdictions worldwide. This is a unique opportunity to be part of a team building a cloud-native data platform that will leverage the latest big data technologies to ingest, store, govern, and share data securely.

Key job responsibilities:
We are looking for a candidate to design, develop, and maintain scalable data solutions on AWS. The ideal candidate will have hands-on experience building robust data pipelines and ETL workflows using services such as AWS Glue, Lambda, Step Functions, and EMR. Familiarity with data catalog tools like AWS DataZone, as well as open table formats like Apache Hudi and Iceberg, is highly beneficial. The ideal candidate will have extensive experience in dimensional modeling, excellent problem-solving ability with huge volumes of data, and a strong curiosity to learn and adapt to new technologies. You will collaborate with cross-functional teams to understand business requirements, architect scalable data solutions, and ensure data quality and integrity.

The candidate should have:
- A strong understanding of ETL concepts and experience building them with large-scale, complex datasets using traditional or MapReduce batch mechanisms
- Experience designing and operating data lakes
- Expertise in at least one programming language: Scala, Python, Java, or similar
- Experience with data orchestration frameworks like Apache Airflow (see the sketch after this listing)
- Good to have: experience working on the AWS stack
- Clear thinking and superb problem-solving skills to prioritize and stay focused on big needle movers

About the team:
Foundational Data Services (FDS) designs, builds, and maintains a petabyte-scale data lake and a modern data mesh that form the foundation of data-driven innovation for Finance operations at Amazon. This platform supports hundreds of teams across engineering, science, and business intelligence, enabling them to unlock the full potential of machine learning and advanced analytics. Our mission is to empower FinOps through seamless access to data by delivering a comprehensive suite of solutions focused on efficient data ingestion and storage, strong data governance, and dependable data delivery. If you're passionate about creating scalable, high-impact data solutions that drive financial operations, we'd love to have you on our team!

Basic qualifications:
- 3+ years of data engineering experience
- 4+ years of SQL experience
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
- Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)
- Experience building and operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets
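The orchestration experience above centers on pipelines like the following minimal sketch, assuming Apache Airflow 2.x (2.4+ for the `schedule` argument); the DAG id, schedule, and task bodies are illustrative placeholders, not taken from the posting.

```python
# A minimal extract-transform-load DAG sketch; all names are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Pull raw transaction records from a source system (placeholder).
    print("extracting transactions...")


def transform():
    # Apply dimensional-model conformance and quality rules (placeholder).
    print("transforming into fact/dimension tables...")


def load():
    # Write curated tables to the lake, e.g. S3 in an open table format.
    print("loading to the data lake...")


with DAG(
    dag_id="finance_transactions_etl",  # hypothetical name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Linear dependency chain: extract, then transform, then load.
    extract_task >> transform_task >> load_task
```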

Posted 3 months ago

Apply

3 - 10 years

5 - 6 Lacs

Pune

Work from Office

Key Responsibilities:
- Validate and update master data requests, ensuring adherence to data quality rules and country-specific exceptions.
- Conduct periodic quality checks on master data to maintain accuracy and consistency.
- Update vendor and customer master data in alignment with Service Level Agreements (SLAs) and data guidelines.
- Identify, analyze, and resolve duplicate records within the master data system.
- Collaborate with Data Steward and Governance teams to support data quality initiatives and clean-up activities.
- Maintain and update Standard Operating Procedures (SOPs) for master data processes.
- Participate in discussions with business partners and stakeholders, addressing ad-hoc requirements with urgency and professionalism.
- Use SQL queries to extract and analyze data for quality checks and reporting purposes (see the sketch after this listing).
- Create and maintain Power BI dashboards to monitor and report on data quality.
- Support audit requirements and ensure compliance with master data guidelines.
- Continuously improve data management processes by identifying areas for enhancement.

Requirements:
- Proven experience in Master Data Management (MDM) or a related field.
- Excellent analytical and problem-solving skills.
- Proficiency in SQL for data extraction and analysis.
- Ability to work independently and collaboratively as part of a team.
- Strong communication skills for effective stakeholder interaction.
- Attention to detail with a proactive approach to resolving data issues.
- Ability to manage multiple tasks and prioritize work effectively.

Maersk is committed to a diverse and inclusive workplace, and we embrace different styles of thinking. We are happy to support your need for any adjustments during the application and hiring process. If you need special assistance or an accommodation to use our website, apply for a position, or to perform a job, please contact us by emailing .
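As a rough illustration of the duplicate-record detection and SQL-based quality checks mentioned above, here is a minimal sketch using Python's built-in sqlite3 module; the vendor_master table and its columns are hypothetical.

```python
# Find candidate duplicate vendor records by grouping on a normalized key.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE vendor_master (
        vendor_id INTEGER PRIMARY KEY,
        vendor_name TEXT,
        country TEXT
    );
    INSERT INTO vendor_master (vendor_name, country) VALUES
        ('Acme Logistics', 'DK'),
        ('ACME LOGISTICS ', 'DK'),  -- same vendor, different casing/spacing
        ('Nordic Freight', 'SE');
""")

# Group on a normalized key (trimmed, lower-cased name plus country) and
# flag any key that appears more than once as a candidate duplicate.
rows = conn.execute("""
    SELECT LOWER(TRIM(vendor_name)) AS name_key, country, COUNT(*) AS n
    FROM vendor_master
    GROUP BY name_key, country
    HAVING n > 1
""").fetchall()

for name_key, country, n in rows:
    print(f"candidate duplicate: {name_key!r} in {country} ({n} records)")
```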

Posted 3 months ago

Apply

1 year

3 - 5 Lacs

India

Remote

About the job:
We empower the people who build the world. Taiy.AI is the world's largest infrastructure-construction data-mesh technology and the first AI platform for the global infrastructure construction industry. Our clients include some of the largest construction firms, suppliers, and government agencies.

About the team:
We are looking for a Python Engineer to help support and lead our data engineering operations.

Key Responsibilities:
1. Develop and execute processes for monitoring data sanity, checking for data availability and reliability (see the sketch after this listing).
2. Understand the business drivers and build insights through data.
3. Partner with stakeholders at all levels to establish current and ongoing data support and reporting needs.
4. Ensure continuous data accuracy and recognize data discrepancies in systems that require immediate attention or escalation.
5. Become an expert in the company's data warehouse and other data storage tools, understanding the definition, context, and proper use of all attributes and metrics.
6. Create dashboards based on business requirements.
7. Work across distributed systems, Scala, cloud, caching, CI/CD (continuous integration and deployment), distributed logging, data pipelines, recommendation engines, and data-at-rest encryption.

What to bring:
1. Graduate/postgraduate degree in Computer Science or Engineering.
2. 1-3 years of hands-on experience with AWS OpenSearch v1.0 or Elasticsearch 7.9.
3. 3+ years of work experience with Scala.
4. Ability to drive, design, code, review work, and assist the teams.
5. Good problem-solving skills.
6. Good oral and written communication in English.
7. Openness to, or experience of, working in a fast-paced delivery environment.
8. Strong understanding of object-oriented design, data structures, algorithms, profiling, and optimization.
9. Good to have: experience with Elasticsearch and Spark-Elasticsearch.
10. Knowledge of garbage collection and experience in GC tuning.
11. Knowledge of algorithms like sorting, heap/stack, queue, and search.
12. Experience with Git and build tools like Gradle/Maven/SBT.
13. Ability to write complex queries independently.
14. Strong knowledge of programming languages like Python, Scala, etc.
15. Ability to work independently and take ownership.
16. An analytical mindset and strong attention to detail.
17. Good verbal and written communication skills for coordinating across teams.

Who can apply: Only candidates who have a minimum of 1 year of experience and are Computer Science/Engineering students.

Salary: ₹ 3,00,000 - 5,00,000 /year
Experience: 1 year(s)
Deadline: 2025-06-05 23:59:59
Other perks: 5 days a week
Skills required: Python, Selenium, Machine Learning, REST API, Data Extraction, Data Engineering, and LLMOps

About the company:
Taiyo is a Silicon Valley startup that aggregates, predicts, and visualizes the world's data so customers don't have to. We are a globally distributed team with a focus on the infrastructure vertical. The Taiyo team was founded by an interdisciplinary group of experts from Stanford University's AI Institute, the World Bank, the International Monetary Fund, and UC Berkeley.
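A minimal sketch of the data-sanity monitoring described in responsibility 1, using pandas; the file path, expected columns, and null-fraction threshold are all hypothetical.

```python
# Check a dataset extract for availability, schema, and reliability issues.
import pandas as pd

EXPECTED_COLUMNS = {"project_id", "country", "contract_value", "updated_at"}
MAX_NULL_FRACTION = 0.05  # illustrative tolerance


def check_data_sanity(path: str) -> list[str]:
    """Return a list of human-readable issues found in the dataset."""
    issues = []
    df = pd.read_csv(path)

    # Availability: the extract should not be empty.
    if df.empty:
        issues.append("dataset is empty")
        return issues

    # Schema: all expected columns must be present.
    missing = EXPECTED_COLUMNS - set(df.columns)
    if missing:
        issues.append(f"missing columns: {sorted(missing)}")

    # Reliability: flag columns whose null fraction exceeds the tolerance.
    for col in EXPECTED_COLUMNS & set(df.columns):
        null_frac = df[col].isna().mean()
        if null_frac > MAX_NULL_FRACTION:
            issues.append(f"{col}: {null_frac:.1%} nulls exceeds threshold")

    return issues


if __name__ == "__main__":
    for issue in check_data_sanity("infra_projects.csv"):  # hypothetical file
        print("SANITY CHECK FAILED:", issue)
```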

Posted 3 months ago

Apply

5 - 13 years

11 - 16 Lacs

Gurgaon

Work from Office

Requirements analysis, conception, and implementation/development of solutions as per requirements. Create HLDs and convert them to LLDs. Data extraction from SAP and non-SAP systems, data modelling, and reports delivery. Work closely with project leaders, business teams, and SAP functional counterparts to architect, design, and develop SAP BW/4HANA solutions. Understand the integration and consumption of BW data models/reports with other tools.

Job Description:
- Hands-on experience in SAP BW/4HANA or SAP BW on HANA and a strong understanding of objects like CompositeProviders, Open ODS views, advanced DSOs, and transformations; exposing BW models as HANA views; mixed scenarios; and performance optimization concepts such as data tiering optimization.
- 5-10 years of experience on SAP BW.
- Experience in integrating BW with various SAP and non-SAP backend systems/sources of data, and good knowledge of the different data acquisition techniques in BW/4HANA.
- Knowledge of the available SAP BW/4HANA tools and their usage, such as the BW/4HANA Web Cockpit.
- Full life cycle implementation experience in SAP BW/4HANA or SAP BW on HANA.
- Hands-on experience in data extraction using standard or generic data sources (see the sketch after this listing). Good knowledge of data source enhancement.
- Strong experience in writing ABAP/AMDP code for exits and transformations.
- Strong understanding of CKFs, RKFs, formulas, selections, variables, and other components used for reporting.
- Understanding of LSA/LSA++ architecture and its development standards.
- Good understanding of BW/4 application and database security concepts.
- Functional knowledge of various modules like SD, MM, and FI.

Mandatory skill sets: SAP BW HANA
Preferred skill sets: SAP BW HANA
Years of experience required: 9-13 years
Education qualification: B.Tech/M.Tech/MBA/MCA
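As a rough illustration of generic data extraction from an SAP source, here is a hedged sketch using the open-source pyrfc connector and the classic RFC_READ_TABLE function module; the connection parameters and table/field choices are placeholders, and real BW/4HANA extraction typically runs through DataSources/ODP rather than ad-hoc RFC reads.

```python
# Hedged sketch: read a few rows from an SAP table over RFC with pyrfc.
from pyrfc import Connection

# All connection parameters below are hypothetical placeholders.
conn = Connection(
    ashost="sap-host.example.com",
    sysnr="00",
    client="100",
    user="EXTRACT_USER",
    passwd="********",
)

result = conn.call(
    "RFC_READ_TABLE",
    QUERY_TABLE="MARA",  # material master, chosen only as an example
    DELIMITER="|",
    FIELDS=[{"FIELDNAME": "MATNR"}, {"FIELDNAME": "MTART"}],
    ROWCOUNT=10,
)

# Each DATA row comes back as a single delimited string in the WA field.
for row in result["DATA"]:
    matnr, mtart = row["WA"].split("|")
    print(matnr.strip(), mtart.strip())
```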

Posted 3 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
