
109 Presto Jobs

Set up a Job Alert
JobPe aggregates listings for easy application access, but you apply directly on the original job portal.

2.0 - 6.0 years

0 Lacs

hyderabad, telangana

On-site

As a Software Engineer (A2) at our company, you will be responsible for designing and developing AI-driven data ingestion frameworks and real-time processing solutions to enhance data analysis and machine learning capabilities across the full technology stack. Your key responsibilities will include:

- Deploying, maintaining, and supporting application code and machine learning models in production environments, ensuring seamless integration with front-end and back-end systems.
- Creating and enhancing AI solutions that facilitate the seamless integration and flow of data across the data ecosystem, enabling advanced analytics and insights for end users.
- Conducting business analysis to gather requirements and develop ETL processes, scripts, and machine learning pipelines that meet technical specifications and business needs, utilizing both server-side and client-side technologies.
- Developing real-time data ingestion and stream-analytic solutions utilizing technologies such as Kafka, Apache Spark, Python, and cloud platforms to support AI applications.
- Utilizing multiple programming languages and tools to build prototypes for AI models and evaluate their effectiveness and feasibility.
- Ensuring compliance with data governance policies by implementing and validating data lineage, quality checks, and data classification in AI projects.
- Understanding and following the company's software development lifecycle to effectively develop, deploy, and deliver AI solutions.

Qualifications Required:

- Strong proficiency in Python, Java, and C++, and familiarity with machine learning frameworks such as TensorFlow and PyTorch.
- In-depth knowledge of ML, Deep Learning, and NLP algorithms.
- Hands-on experience in building backend services with frameworks like FastAPI, Flask, Django, etc.
- Proficiency in front-end and back-end technologies, including JavaScript frameworks like React and Angular.
- Ability to develop and maintain data pipelines for AI applications, ensuring efficient data extraction, transformation, and loading processes.

Additionally, you should have a Bachelor's or Master's degree in Computer Science and 2 to 4 years of software engineering experience. Certifications such as Microsoft Certified: Azure Data Engineer Associate or Azure AI Engineer would be a plus. Your collaborative learning attitude, project responsibility skills, and business acumen will be valuable assets in this role.
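For illustration only, the sketch below shows the kind of real-time ingestion this posting describes: a PySpark Structured Streaming job that reads JSON events from Kafka and lands them as Parquet. The broker address, topic name, schema, and S3 paths are hypothetical placeholders rather than details from the posting, and the spark-sql-kafka connector must be on the Spark classpath.

```python
# Hedged sketch (not from the posting): stream JSON events from Kafka into Parquet with PySpark.
# Broker, topic, schema, and paths are placeholders; requires the spark-sql-kafka connector.
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("ai-event-ingestion").getOrCreate()

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("user_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")   # placeholder broker
       .option("subscribe", "events")                       # placeholder topic
       .load())

# Kafka delivers bytes; cast the value to string and parse the JSON payload.
parsed = (raw
          .select(from_json(col("value").cast("string"), event_schema).alias("e"))
          .select("e.*"))

query = (parsed.writeStream
         .format("parquet")
         .option("path", "s3a://example-bucket/events/")                    # placeholder sink
         .option("checkpointLocation", "s3a://example-bucket/checkpoints/events/")
         .outputMode("append")
         .start())
query.awaitTermination()
```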

Posted 1 day ago

Apply

5.0 - 10.0 years

9 - 19 Lacs

chennai

Work from Office

Job Summary: We are seeking a Big Data Administrator with strong expertise in Linux systems, AWS infrastructure, and Big Data technologies. This role is ideal for someone experienced in managing large-scale Hadoop ecosystems in production, with a deep understanding of observability, performance tuning, and automation using tools like Terraform or Ansible.

Key Responsibilities:
- Manage and maintain large-scale Big Data clusters (Cloudera, Hortonworks, or AWS EMR)
- Develop and support infrastructure as code using Terraform or Ansible
- Administer Hadoop ecosystem components: HDFS, YARN, Hive (Tez, LLAP), Presto, Spark
- Implement and monitor observability tools like Prometheus, InfluxDB, Dynatrace, Grafana, Splunk
- Optimize SQL performance on Hive/Spark and understand query plans
- Automate cluster operations using Python (PySpark) or shell scripting
- Support Data Analysts & Scientists with tools like JupyterHub, RStudio, H2O, SAS
- Handle data in various formats: ORC, Parquet, Avro
- Integrate with and support Kubernetes-based environments (if applicable)
- Collaborate across teams for deployments, monitoring, and troubleshooting

Must-Have Skills:
- 5+ years in Linux system administration and AWS cloud infrastructure
- Experience with Cloudera, Hortonworks, or EMR in production
- Strong in Terraform/Ansible for automation
- Solid hands-on experience with HDFS, YARN, Hive, Spark, Presto
- Proficient in Python and shell scripting
- Familiar with observability tools: Grafana, Prometheus, InfluxDB, Splunk, Dynatrace
- Familiarity with Active Directory and Windows VDI platforms (Citrix, AWS Workspaces)

Nice-to-Have Skills:
- Experience with Airflow, Oozie
- Familiar with Pandas, NumPy, SciPy, PyTorch
- Prior use of Jenkins, Chef, Packer
- Comfortable reading code in Java, Scala, Python, R

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
- Strong communication, collaboration, and troubleshooting skills
- Ability to thrive in remote or hybrid work environments
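As a hedged illustration of the "automate cluster operations" responsibility above, here is a minimal Python sketch that wraps two standard Hadoop CLIs (`hdfs dfsadmin -report` and `yarn node -list -all`) in a simple health check. The parsing and the 80% alert threshold are assumptions made for the example, not requirements from the posting.

```python
# Hypothetical cluster health check automated in Python; thresholds and parsing are illustrative only.
import re
import subprocess

def hdfs_used_pct() -> float:
    """Parse the aggregate 'DFS Used%' figure from `hdfs dfsadmin -report`."""
    report = subprocess.run(["hdfs", "dfsadmin", "-report"],
                            capture_output=True, text=True, check=True).stdout
    match = re.search(r"DFS Used%:\s*([\d.]+)%", report)
    return float(match.group(1)) if match else 0.0

def unhealthy_yarn_nodes() -> int:
    """Roughly count NodeManagers whose state is not RUNNING in `yarn node -list -all`."""
    out = subprocess.run(["yarn", "node", "-list", "-all"],
                         capture_output=True, text=True, check=True).stdout
    node_lines = [l for l in out.splitlines() if re.search(r":\d+\s", l) and "Node-Id" not in l]
    return sum(1 for l in node_lines if "RUNNING" not in l)

if __name__ == "__main__":
    used = hdfs_used_pct()
    bad = unhealthy_yarn_nodes()
    if used > 80.0 or bad > 0:   # 80% is an assumed alert threshold
        print(f"ALERT: DFS used {used:.1f}%, unhealthy NodeManagers: {bad}")
    else:
        print(f"OK: DFS used {used:.1f}%, all NodeManagers healthy")
```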

Posted 1 day ago

Apply

5.0 - 10.0 years

9 - 19 Lacs

hyderabad

Work from Office

Job Summary: We are seeking a Big Data Administrator with strong expertise in Linux systems, AWS infrastructure, and Big Data technologies. This role is ideal for someone experienced in managing large-scale Hadoop ecosystems in production, with a deep understanding of observability, performance tuning, and automation using tools like Terraform or Ansible.

Key Responsibilities:
- Manage and maintain large-scale Big Data clusters (Cloudera, Hortonworks, or AWS EMR)
- Develop and support infrastructure as code using Terraform or Ansible
- Administer Hadoop ecosystem components: HDFS, YARN, Hive (Tez, LLAP), Presto, Spark
- Implement and monitor observability tools like Prometheus, InfluxDB, Dynatrace, Grafana, Splunk
- Optimize SQL performance on Hive/Spark and understand query plans
- Automate cluster operations using Python (PySpark) or shell scripting
- Support Data Analysts & Scientists with tools like JupyterHub, RStudio, H2O, SAS
- Handle data in various formats: ORC, Parquet, Avro
- Integrate with and support Kubernetes-based environments (if applicable)
- Collaborate across teams for deployments, monitoring, and troubleshooting

Must-Have Skills:
- 5+ years in Linux system administration and AWS cloud infrastructure
- Experience with Cloudera, Hortonworks, or EMR in production
- Strong in Terraform/Ansible for automation
- Solid hands-on experience with HDFS, YARN, Hive, Spark, Presto
- Proficient in Python and shell scripting
- Familiar with observability tools: Grafana, Prometheus, InfluxDB, Splunk, Dynatrace
- Familiarity with Active Directory and Windows VDI platforms (Citrix, AWS Workspaces)

Nice-to-Have Skills:
- Experience with Airflow, Oozie
- Familiar with Pandas, NumPy, SciPy, PyTorch
- Prior use of Jenkins, Chef, Packer
- Comfortable reading code in Java, Scala, Python, R

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
- Strong communication, collaboration, and troubleshooting skills
- Ability to thrive in remote or hybrid work environments

Posted 1 day ago

Apply

5.0 - 10.0 years

9 - 19 Lacs

bengaluru

Work from Office

Job Summary: We are seeking a Big Data Administrator with strong expertise in Linux systems, AWS infrastructure, and Big Data technologies. This role is ideal for someone experienced in managing large-scale Hadoop ecosystems in production, with a deep understanding of observability, performance tuning, and automation using tools like Terraform or Ansible.

Key Responsibilities:
- Manage and maintain large-scale Big Data clusters (Cloudera, Hortonworks, or AWS EMR)
- Develop and support infrastructure as code using Terraform or Ansible
- Administer Hadoop ecosystem components: HDFS, YARN, Hive (Tez, LLAP), Presto, Spark
- Implement and monitor observability tools like Prometheus, InfluxDB, Dynatrace, Grafana, Splunk
- Optimize SQL performance on Hive/Spark and understand query plans
- Automate cluster operations using Python (PySpark) or shell scripting
- Support Data Analysts & Scientists with tools like JupyterHub, RStudio, H2O, SAS
- Handle data in various formats: ORC, Parquet, Avro
- Integrate with and support Kubernetes-based environments (if applicable)
- Collaborate across teams for deployments, monitoring, and troubleshooting

Must-Have Skills:
- 5+ years in Linux system administration and AWS cloud infrastructure
- Experience with Cloudera, Hortonworks, or EMR in production
- Strong in Terraform/Ansible for automation
- Solid hands-on experience with HDFS, YARN, Hive, Spark, Presto
- Proficient in Python and shell scripting
- Familiar with observability tools: Grafana, Prometheus, InfluxDB, Splunk, Dynatrace
- Familiarity with Active Directory and Windows VDI platforms (Citrix, AWS Workspaces)

Nice-to-Have Skills:
- Experience with Airflow, Oozie
- Familiar with Pandas, NumPy, SciPy, PyTorch
- Prior use of Jenkins, Chef, Packer
- Comfortable reading code in Java, Scala, Python, R

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
- Strong communication, collaboration, and troubleshooting skills
- Ability to thrive in remote or hybrid work environments

Posted 1 day ago

Apply

7.0 - 10.0 years

25 - 32 Lacs

bengaluru

Work from Office

Position Overview: We seek a highly skilled and experienced Data Engineering Lead to join our team. This role demands deep technical expertise in Apache Spark, Hive, Trino (formerly Presto), Python, AWS Glue, and the broader AWS ecosystem. The ideal candidate will possess strong hands-on skills and the ability to design and implement scalable data solutions, optimise performance, and lead a high-performing team to deliver data-driven insights.

Key Responsibilities:

Technical Leadership
- Lead and mentor a team of data engineers, fostering best practices in coding, design, and delivery.
- Drive the adoption of modern data engineering frameworks, tools, and methodologies to ensure high-quality and scalable solutions.
- Translate complex business requirements into effective data pipelines, architectures, and workflows.

Data Pipeline Development
- Architect, develop, and optimize scalable ETL/ELT pipelines using Apache Spark, Hive, AWS Glue, and Trino.
- Handle complex data workflows across structured and unstructured data sources, ensuring performance and cost-efficiency.
- Develop real-time and batch processing systems to support business intelligence, analytics, and machine learning applications.

Cloud & Infrastructure Management
- Build and maintain cloud-based data solutions using AWS services like S3, Athena, Redshift, EMR, DynamoDB, and Lambda.
- Design and implement federated query capabilities using Trino for diverse data sources.
- Manage Hive Metastore for schema and metadata management in data lakes.

Performance Optimization
- Optimize Apache Spark jobs and Hive queries for performance, ensuring efficient resource utilization and minimal latency.
- Implement caching and indexing strategies to accelerate query performance in Trino.
- Continuously monitor and improve system performance through diagnostics and tuning.

Collaboration & Stakeholder Engagement
- Work closely with data scientists, analysts, and business teams to understand requirements and deliver actionable insights.
- Ensure that data infrastructure aligns with organizational goals and compliance standards.

Data Governance & Quality
- Establish and enforce data quality standards, governance practices, and monitoring processes.
- Ensure data security, privacy, and compliance with regulatory frameworks.

Innovation & Continuous Learning
- Stay ahead of industry trends, emerging technologies, and best practices in data engineering.
- Proactively identify and implement improvements in data architecture and processes.

Qualifications:

Required Technical Expertise
- Advanced proficiency with Apache Spark (core, SQL, streaming) for large-scale data processing.
- Strong expertise in Hive for querying and managing structured data in data lakes.
- In-depth knowledge of Trino (Presto) for federated querying and high-performance SQL execution.
- Solid programming skills in Python with frameworks like PySpark and Pandas.
- Hands-on experience with AWS Glue, including Glue ETL jobs, Glue Data Catalog, and Glue Crawlers.
- Deep understanding of data formats such as Parquet, ORC, Avro, and their use cases.

Cloud Proficiency
- Expertise in AWS services, including S3, Redshift, Athena, EMR, DynamoDB, and IAM.
- Experience designing scalable and cost-efficient cloud-based data solutions.

Performance Tuning
- Strong ability to optimize Apache Spark jobs, Hive queries, and Trino workloads for distributed environments.
- Experience with advanced techniques like partitioning, bucketing, and query plan optimization.

Leadership & Collaboration
- Proven experience leading and mentoring data engineering teams.
- Strong communication skills, with the ability to interact with technical and non-technical stakeholders effectively.

Education & Experience
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- 8+ years of experience in data engineering, with a minimum of 2 years in a leadership role.

Additional Qualifications:
- 8+ years of experience in building data pipelines from scratch in large data volume environments.
- AWS certifications, such as AWS Certified Data Analytics or AWS Certified Solutions Architect.
- Experience with Kafka or Kinesis for real-time data streaming would be a plus.
- Familiarity with containerization tools like Docker and orchestration platforms like Kubernetes.
- Knowledge of CI/CD pipelines and DevOps practices for data engineering.
- Prior experience with data lake architectures and integrating ML workflows.

Mandatory Key Skills: CI/CD, DevOps, data engineering, Apache Spark jobs, Hive queries, performance tuning, AWS Glue, data governance, AWS, Spark, Python, Hive, ETL
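To illustrate the federated-query responsibility mentioned above, here is a hedged sketch using the open-source `trino` Python client. The coordinator host, catalogs, schemas, and table names are invented for the example; a real deployment would substitute its own connectors and credentials.

```python
# Hypothetical federated query: join a Hive table with a MySQL table through a single Trino query.
# Host, catalogs, schemas, and tables are placeholders, not real systems.
import trino

conn = trino.dbapi.connect(
    host="trino-coordinator.example.com",  # placeholder coordinator
    port=8080,
    user="data-eng",
    catalog="hive",
    schema="analytics",
)

sql = """
SELECT o.order_date, c.region, SUM(o.amount) AS revenue
FROM hive.analytics.orders AS o
JOIN mysql.crm.customers AS c ON o.customer_id = c.id
WHERE o.order_date >= DATE '2024-01-01'
GROUP BY o.order_date, c.region
ORDER BY o.order_date
"""

cur = conn.cursor()
cur.execute(sql)
for row in cur.fetchall():
    print(row)
```

Because each catalog maps to its own connector, Trino executes the join across both sources without first copying the data into a single store.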

Posted 1 day ago

Apply

0.0 - 5.0 years

7 - 12 Lacs

noida

Work from Office

We are seeking a Senior Starburst Architect to get the WLT instance that is being procured up and running and to provide hands-on optimization for the environments.

- Good understanding of Jira/Confluence and GitHub
- Strong troubleshooting and problem-solving skills
- Strong written and verbal communication/collaboration skills, with the ability to manage users, vendors, and customers and to work with cross-functional teams
- Collaborate with multiple stakeholders for tech solutioning and removal of bottlenecks
- Ability to assess the impact of production issues, escalate to the relevant technology partners, and get them resolved at the earliest during extremely time-critical regulatory commitments
- Ability to operate with a limited level of direct supervision in a time-sensitive environment

Preferred: An understanding of the finance/accounting domain

Mandatory Competencies:
- Query Accelerators and Data Virtualization - Starburst Presto
- DevOps/Configuration Management - GitLab, GitHub, Bitbucket
- Development Tools and Management - JIRA
- Behavioural - Communication and collaboration

Posted 1 day ago

Apply

14.0 - 20.0 years

0 Lacs

maharashtra

On-site

As a Senior Architect - Data & Cloud at our company, you will be responsible for architecting, designing, and implementing end-to-end data pipelines and data integration solutions for varied structured and unstructured data sources and targets. You will need to have more than 15 years of experience in Technical, Solutioning, and Analytical roles, with 5+ years specifically in building and managing Data Lakes, Data Warehouse, Data Integration, Data Migration, and Business Intelligence/Artificial Intelligence solutions on Cloud platforms like GCP, AWS, or Azure.

Key Responsibilities:
- Translate business requirements into functional and non-functional areas, defining boundaries in terms of Availability, Scalability, Performance, Security, and Resilience.
- Architect and design scalable data warehouse solutions on cloud platforms like BigQuery or Redshift.
- Work with various Data Integration and ETL technologies on Cloud such as Spark, PySpark/Scala, Dataflow, DataProc, EMR, etc.
- Deep knowledge of Cloud and On-Premise databases like Cloud SQL, Cloud Spanner, Bigtable, RDS, Aurora, DynamoDB, Oracle, Teradata, MySQL, DB2, SQL Server, etc.
- Exposure to NoSQL databases like MongoDB, CouchDB, Cassandra, graph databases, etc.
- Experience in using traditional ETL tools like Informatica, DataStage, OWB, Talend, etc.
- Collaborate with internal and external stakeholders to design optimized data analytics solutions.
- Mentor young talent within the team and contribute to building assets and accelerators.

Qualifications Required:
- 14-20 years of relevant experience in the field.
- Strong understanding of Cloud solutions for IaaS, PaaS, SaaS, Containers, and Microservices Architecture and Design.
- Experience with BI Reporting and Dashboarding tools like Looker, Tableau, Power BI, SAP BO, Cognos, Superset, etc.
- Knowledge of Security features and Policies in Cloud environments like GCP, AWS, or Azure.
- Ability to compare products and tools across technology stacks on Google, AWS, and Azure Cloud.

In this role, you will lead multiple data engagements on GCP Cloud for data lakes, data engineering, data migration, data warehouse, and business intelligence. You will interface with multiple stakeholders within IT and business to understand data requirements and take complete responsibility for the successful delivery of projects. Additionally, you will have the opportunity to work in a high-growth startup environment, contribute to the digital transformation journey of customers, and collaborate with a diverse and proactive team of techies. Please note that flexible, remote working options are available to foster productivity and work-life balance.

Posted 2 days ago

Apply

14.0 - 20.0 years

0 Lacs

maharashtra

On-site

Role Overview: As a Principal Architect - Data & Cloud at Quantiphi, you will be responsible for leveraging your extensive experience in technical, solutioning, and analytical roles to architect and design end-to-end data pipelines and data integration solutions for structured and unstructured data sources and targets. You will play a crucial role in building and managing data lakes, data warehouses, data integration, and business intelligence/artificial intelligence solutions on Cloud platforms like GCP, AWS, and Azure. Your expertise will be instrumental in designing scalable data warehouse solutions on BigQuery or Redshift and working with various data integration, storage, and pipeline tools on Cloud. Additionally, you will serve as a trusted technical advisor to customers, lead multiple data engagements on GCP Cloud, and contribute to the development of assets and accelerators.

Key Responsibilities:
- Possess more than 15 years of experience in technical, solutioning, and analytical roles
- Have 5+ years of experience in building and managing data lakes, data warehouses, data integration, and business intelligence/artificial intelligence solutions on Cloud platforms like GCP, AWS, and Azure
- Ability to understand business requirements, translate them into functional and non-functional areas, and define boundaries in terms of availability, scalability, performance, security, and resilience
- Architect, design, and implement end-to-end data pipelines and data integration solutions for structured and unstructured data sources and targets
- Work with distributed computing and enterprise environments like Hadoop and Cloud platforms
- Proficient in various data integration and ETL technologies on Cloud such as Spark, PySpark/Scala, Dataflow, DataProc, EMR, etc.
- Deep knowledge of Cloud and On-Premise databases like Cloud SQL, Cloud Spanner, Bigtable, RDS, Aurora, DynamoDB, Oracle, Teradata, MySQL, DB2, SQL Server, etc.
- Exposure to NoSQL databases like MongoDB, CouchDB, Cassandra, graph databases, etc.
- Design scalable data warehouse solutions on Cloud with tools like S3, Cloud Storage, Athena, Glue, Sqoop, Flume, Hive, Kafka, Pub/Sub, Kinesis, Dataflow, DataProc, Airflow, Composer, Spark SQL, Presto, EMRFS, etc.
- Experience with Machine Learning frameworks like TensorFlow, PyTorch
- Understand Cloud solutions for IaaS, PaaS, SaaS, Containers, and Microservices Architecture and Design
- Good understanding of BI Reporting and Dashboarding tools like Looker, Tableau, Power BI, SAP BO, Cognos, Superset, etc.
- Knowledge of security features and policies in Cloud environments like GCP, AWS, Azure
- Work on business transformation projects for moving On-Premise data solutions to Cloud platforms
- Serve as a trusted technical advisor to customers and provide solutions for complex Cloud and Data-related technical challenges
- Be a thought leader in architecture design and development of cloud data analytics solutions
- Liaise with internal and external stakeholders to design optimized data analytics solutions
- Collaborate with SMEs and Solutions Architects from leading cloud providers to present solutions to customers
- Support Quantiphi Sales and GTM teams from a technical perspective in building proposals and SOWs
- Lead discovery and design workshops with potential customers globally
- Design and deliver thought leadership webinars and tech talks with customers and partners
- Identify areas for productization and feature enhancement for Quantiphi's product assets

Qualifications Required:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
- 14-20 years of experience in technical, solutioning, and analytical roles
- Strong expertise in building and managing data lakes, data warehouses, data integration, and business intelligence/artificial intelligence solutions on Cloud platforms like GCP, AWS, and Azure
- Proficiency in various data integration and ETL technologies on Cloud, and Cloud and On-Premise databases
- Experience with Cloud solutions for IaaS, PaaS, SaaS, Containers, and Microservices Architecture and Design
- Knowledge of BI Reporting and Dashboarding tools and security features in Cloud environments

Additional Company Details: While technology is the heart of Quantiphi's business, the company attributes its success to its global and diverse culture built on transparency, diversity, integrity, learning, and growth. Working at Quantiphi provides you with the opportunity to be part of a culture that encourages innovation, excellence, and personal growth, fostering a work environment where you can thrive both professionally and personally. Joining Quantiphi means being part of a dynamic team of tech enthusiasts dedicated to translating data into tangible business value for clients. Flexible remote working options are available to promote productivity and work-life balance.

Posted 4 days ago

Apply

1.0 - 6.0 years

15 - 25 Lacs

bengaluru

Work from Office

We have developed API gateway aggregators using frameworks like Hystrix and spring-cloud-gateway for circuit breaking and parallel processing. Our serving microservices handle more than 15K RPS on normal days, and during sale days this can go up to 30K RPS. Being a consumer app, these systems have SLAs of ~10 ms. Our distributed scheduler tracks more than 50 million shipments periodically from different partners and does async processing involving RDBMS. We use an in-house video streaming platform to support a wide variety of devices and networks.

What You'll Do
- Design and implement scalable and fault-tolerant data pipelines (batch and streaming) using frameworks like Apache Spark, Flink, and Kafka.
- Lead the design and development of data platforms and reusable frameworks that serve multiple teams and use cases.
- Build and optimize data models and schemas to support large-scale operational and analytical workloads.
- Deeply understand Apache Spark internals and be capable of modifying or extending the open-source Spark codebase as needed.
- Develop streaming solutions using tools like Apache Flink and Spark Structured Streaming.
- Drive initiatives that abstract infrastructure complexity, enabling ML, analytics, and product teams to build faster on the platform.
- Champion a platform-building mindset focused on reusability, extensibility, and developer self-service.
- Ensure data quality, consistency, and governance through validation frameworks, observability tooling, and access controls.
- Optimize infrastructure for cost, latency, performance, and scalability in modern cloud-native environments.
- Mentor and guide junior engineers, contribute to architecture reviews, and uphold high engineering standards.
- Collaborate cross-functionally with product, ML, and data teams to align technical solutions with business needs.

What We're Looking For
- 5-8 years of professional experience in software/data engineering with a focus on distributed data systems.
- Strong programming skills in Java, Scala, or Python, and expertise in SQL.
- At least 2 years of hands-on experience with big data systems including Apache Kafka, Apache Spark/EMR/Dataproc, Hive, Delta Lake, Presto/Trino, Airflow, and data lineage tools (e.g., DataHub, Marquez, OpenLineage).
- Experience implementing and tuning Spark/Delta Lake/Presto at terabyte scale or beyond.
- Strong understanding of Apache Spark internals (Catalyst, Tungsten, shuffle, etc.) with experience customizing or contributing to open-source code.
- Familiarity and hands-on work with modern open-source and cloud-native data stack components such as: Apache Iceberg, Hudi, or Delta Lake; Trino/Presto, DuckDB, ClickHouse, Pinot, or Druid; Airflow, Dagster, or Prefect; DBT, Great Expectations, DataHub, or OpenMetadata; Kubernetes, Terraform, Docker.
- Strong analytical and problem-solving skills, with the ability to debug complex issues in large-scale systems.
- Exposure to data security, privacy, observability, and compliance frameworks is a plus.

Good to Have
- Contributions to open-source projects in the big data ecosystem (e.g., Spark, Kafka, Hive, Airflow)
- Hands-on data modeling experience and exposure to end-to-end data pipeline development
- Familiarity with OLAP data cubes and BI/reporting tools such as Tableau, Power BI, Superset, or Looker
- Working knowledge of tools and technologies like the ELK Stack (Elasticsearch, Logstash, Kibana), Redis, and MySQL
- Exposure to backend technologies including RxJava, Spring Boot, and Microservices architecture
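As a small, hedged sketch of the batch side of such pipelines (the streaming side looks much like the Kafka example earlier on this page), the snippet below aggregates hypothetical shipment events and writes a date-partitioned Parquet table with PySpark. Paths, column names, and the shuffle-partition setting are assumptions for illustration.

```python
# Illustrative batch pipeline: aggregate shipment events into a date-partitioned Parquet mart.
# Input/output paths, columns, and tuning values are assumptions, not the company's actual schema.
from pyspark.sql import SparkSession, functions as F

spark = (SparkSession.builder
         .appName("shipment-daily-agg")
         .config("spark.sql.shuffle.partitions", "200")   # illustrative tuning knob
         .getOrCreate())

events = spark.read.parquet("s3a://example-bucket/raw/shipment_events/")  # placeholder source

daily = (events
         .withColumn("event_date", F.to_date("event_ts"))
         .groupBy("event_date", "partner_id", "status")
         .agg(F.count("*").alias("events"),
              F.countDistinct("shipment_id").alias("shipments")))

(daily.repartition("event_date")             # co-locate rows so each partition writes few files
      .write.mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3a://example-bucket/marts/shipment_daily/"))  # placeholder sink
```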

Posted 5 days ago

Apply

2.0 - 6.0 years

0 Lacs

maharashtra

On-site

As a passionate Software Engineer with a proven track record of solving complex problems and driving innovation, you will have the opportunity to work at ReliaQuest and be part of a team that is reshaping the future of threat detection and response. Your role will involve developing cutting-edge security technology, creating REST APIs, and integrating various products to enhance our customers' threat detection capabilities. By working closely with talented individuals, you will contribute directly to the growth and success of our organization.

Your responsibilities will include researching and developing solutions across a range of advanced technologies, managing deployment processes, conducting code reviews, and automating software development lifecycle stages. Collaboration with internal and external stakeholders will be crucial to ensure the effective utilization of our products. Additionally, you will support your team members and foster a culture of continuous collaboration.

To excel in this role, you should have 2-4 years of software development experience in languages and technologies such as Python, JavaScript, React, Angular, Java, C#, MySQL, Elasticsearch, or equivalent. Proficiency in both written and verbal English is essential for effective communication. What sets you apart is hands-on experience with technologies like Elasticsearch, Kafka, Apache Spark, Logstash, Hadoop/Hive, TensorFlow, Kibana, Athena/Presto/BigTable, Angular, and React. Familiarity with cloud platforms such as AWS, GCP, or Azure, as well as knowledge of unit testing, continuous integration, deployment practices, and Agile methodology, will be advantageous. Higher education or relevant certifications will further distinguish you as a candidate.

Posted 1 week ago

Apply

4.0 - 6.0 years

1 - 2 Lacs

gurugram

Work from Office

- Experience in Big Data technologies, specifically Spark, Python, Hive, SQL, Presto (or other query engines), big data storage formats (e.g., Parquet), orchestration tools (e.g., Apache Airflow), and version control (e.g., Bitbucket)
- Proficiency in developing configuration-based ETL pipelines and user-interface driven tools to optimize data processes and calculations (e.g., Dataiku)
- Experience in analysing business requirements and solution design, including the design of data models, data pipelines, and calculations, as well as presenting solution options and recommendations
- Experience working in a cloud-based environment (ideally AWS), with a solid understanding of cloud computing concepts (EC2, S3), Linux, and containerization technologies (Docker and Kubernetes)
- A background in solution delivery within the finance or treasury business domains, particularly in areas such as Liquidity or Capital, is advantageous

Additional pointers:
- We are seeking a mid-level engineer to design and build Liquidity calculations using our bespoke Data Calculation Platform (DCP), based on documented business requirements.
- The front-end of the DCP is Dataiku, but prior experience with Dataiku is not necessary if you have worked as a data engineer.
- Liquidity experience is not necessary, but it would be helpful if you have experience designing and building to business requirements.
- There is a requirement to work 3 days in the office in Gurugram.
- You will work as part of a team located in both Sydney and Gurugram; the reporting manager is based in Gurugram, with project leadership in Sydney.
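For illustration, here is a minimal Apache Airflow sketch of a configuration-driven pipeline of the kind this posting describes (Dataiku itself is UI-driven, so this is only an analogue). The DAG id, schedule, step names, and calculation stub are placeholders, and the `schedule` argument assumes Airflow 2.4+.

```python
# Hypothetical configuration-driven DAG: each entry in PIPELINE_CONFIG becomes a task, run daily.
# DAG id, schedule, step names, and the calculation stub are assumptions for illustration only.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

PIPELINE_CONFIG = [  # in practice this could be loaded from YAML/JSON maintained by the business
    {"step": "load_positions"},
    {"step": "calc_liquidity_ratio"},
    {"step": "publish_results"},
]

def run_step(step_name: str, **_):
    # Placeholder for the real extract/transform/calculation logic.
    print(f"running step: {step_name}")

with DAG(
    dag_id="liquidity_daily_calcs",        # placeholder name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                     # Airflow 2.4+ keyword
    catchup=False,
) as dag:
    previous = None
    for cfg in PIPELINE_CONFIG:
        task = PythonOperator(
            task_id=cfg["step"],
            python_callable=run_step,
            op_kwargs={"step_name": cfg["step"]},
        )
        if previous is not None:
            previous >> task                # chain the steps in config order
        previous = task
```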

Posted 1 week ago

Apply

6.0 - 11.0 years

0 Lacs

hyderabad, pune

Work from Office

Strong technical acumen in AWS, including expert working knowledge of AWS services such as Glue, EC2, ECS, Lambda, Step Functions, IAM, Athena, Hue, Presto, S3, Redshift, etc. Strong technical acumen in Data Engineering enablement, and working knowledge of frameworks/languages such as Python, Spark, etc. Dremio knowledge is a plus.

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

hyderabad, telangana

On-site

You have a great opportunity to join Everest DX, a Digital Platform Services company headquartered in Stamford. Our vision is to enable Digital Transformation for enterprises by delivering seamless customer experience, business efficiency, and actionable insights through an integrated set of futuristic digital technologies.

As a Data Engineer with a minimum of 6+ years of experience, you will be responsible for SQL optimization and performance tuning. You should have hands-on development experience in programming languages like Python, PySpark, Scala, etc. Additionally, you should possess at least 6+ years of cloud data engineering experience in Azure. Your role will involve working with Azure Cloud and its components such as Azure Data Factory, Azure Blob Storage, Azure Data Flows, Presto, Azure Databricks, and Azure Key Vault. You should have expertise in Data Warehousing, including S/4 Hana, ADLS, Teradata/SQL, and Data Analytics tools. Proficiency in preparing complex SQL queries, views, and stored procedures for data loading is essential.

It is important to have a good understanding of SAP business processes and master data. Exposure to SAP BW, native Hana, BW/4HANA, Teradata, and various ETL and data ingestion technologies is preferred. Fluency with Azure cloud services and Azure certification would be an added advantage. Experience in integrating multi-cloud services with on-premises technologies, data modeling, data warehousing, and building high-volume ETL/ELT pipelines is required. You should also have experience in building and operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets. Familiarity with running and scaling applications on cloud infrastructure, containerized services like Kubernetes, version control systems like GitHub, deployment & CI tools, Azure Data Factory, Azure Databricks, Python, and business intelligence tools such as Power BI would be beneficial.

If you are passionate about data engineering, cloud technologies, and digital transformation, this role at Everest DX in Hyderabad could be the perfect fit for you. Join us in driving innovation and excellence in the digital realm.

Posted 1 week ago

Apply

6.0 - 11.0 years

15 - 19 Lacs

bengaluru

Work from Office

Project description: We've been engaged by an Investment Banking client in the Corporate, Commercial and Institutional Banking (CCIB) portfolio to work on projects providing analytics support for all the key initiatives across all segments/products.

Responsibilities:
- Choosing the right technologies for our use cases, and deploying and operating them
- Setting up data stores: structured, semi-structured, and non-structured
- Securing data at rest via encryption
- Implementing tools to securely access multiple data sources
- Implementing solutions to run real-time analytics
- Using container technologies

Skills

Must have:
- 6+ years of relevant technology experience
- Experience in one of the following: Elasticsearch, Cassandra, Hadoop, MongoDB
- Experience in Spark and Presto/Trino
- Experience with microservice-based architectures
- Experience with Kubernetes

Nice to have:
- Experience with Unix/Linux environments is a plus
- Experience with Agile/Scrum development methodologies is a plus
- Cloud knowledge is a big plus (AWS/GCP, Kubernetes/Docker)
- Be nice, respectful, and able to work in a team

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

karnataka

On-site

As a Data Engineer, you will be responsible for building highly scalable, fault-tolerant distributed data processing systems that handle hundreds of terabytes of data daily and manage petabyte-sized data warehouses and Elasticsearch clusters. Your role will involve developing quality data solutions and simplifying existing datasets into self-service models. Additionally, you will create data pipelines that enhance data quality and are resilient to unreliable data sources. You will take ownership of data mapping, business logic, transformations, and data quality, and engage in low-level systems debugging and performance optimization on large production clusters. Your responsibilities will also include participating in architecture discussions, contributing to the product roadmap, and leading new projects. You will be involved in maintaining and supporting existing platforms, transitioning to newer technology stacks, and ensuring the evolution of the systems.

To excel in this role, you must demonstrate proficiency in Python and PySpark, along with a deep understanding of Apache Spark, Spark tuning, RDD creation, and data frame building. Experience with big data technologies such as HDFS, YARN, MapReduce, Hive, Kafka, Spark, Airflow, and Presto is essential. Moreover, you should have expertise in building distributed environments using tools like Kafka, Spark, Hive, and Hadoop. A solid grasp of the architecture and functioning of distributed database systems, as well as experience working with file formats like Parquet and Avro for handling large data volumes, will be beneficial. Familiarity with one or more NoSQL databases and cloud platforms like AWS and GCP is preferred.

The ideal candidate for this role will have at least 5 years of professional experience as a data or software engineer. This position offers a challenging opportunity to work on cutting-edge technologies and contribute to the development of robust data processing systems. If you are passionate about data engineering and possess the required skills and experience, we encourage you to apply and join our dynamic team.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

karnataka

On-site

The ideal candidate for this role should have a strong background in SQL, CDP (Treasure Data), Python/Digdag, and Presto/SQL for data engineering. It is essential to possess knowledge and hands-on experience in cloud technologies such as Microsoft Azure, AWS, ETL processes, and API integration tools. Proficiency in Python and SQL is a must, along with exposure to Big Data technologies like Presto, Hadoop, Cassandra, MongoDB, etc. Previous experience with CDP implementation using tools like Treasure Data or similar platforms such as ActionIQ would be a significant advantage.

Familiarity with data modelling and architecture is preferred, as well as excellent SQL and advanced SQL skills. Knowledge or experience in data visualization tools like Power BI and an understanding of AI/ML concepts would be beneficial. The candidate should hold a BE/BTech degree and be actively involved in requirements gathering, demonstrating the ability to create technical documentation and possessing strong analytical and problem-solving skills.

The role entails working on the end-to-end implementation of CDP projects, participating in CDP BAU activities and go-live cut-over, and providing day-to-day CDP application support. Automation of existing tasks and flexibility with working hours are expected, along with the ability to thrive in a process-oriented environment.

Posted 2 weeks ago

Apply

4.0 - 8.0 years

11 - 15 Lacs

bengaluru

Work from Office

The Boeing Company is currently seeking a high-performing, versatile, experienced Associate Java Full Stack Developer to join the Product Support and Services Mobility and Innovation team in Bangalore, India. The team provides comprehensive digitally native mobile, progressive web, and innovation solutions to rapidly access and transform complex business scenarios by enabling digital products that enhance revenue opportunities and efficiencies for Boeing. Good communication skills are needed, both verbal and written, to interact with peers and customers. The job requires the ability to work well with others on a team as well as independently, and involves collaborating with a diverse team of skilled and motivated co-workers. Other qualities for this candidate are a positive attitude, self-motivation, the ability to work in a fast-paced, demanding environment, and the ability to adapt to changing priorities.

Position Responsibilities:
- Understands and develops software solutions to meet end user requirements.
- Ensures that the application integrates with the overall system architecture, utilizing standard IT lifecycle methodologies and tools.
- Develops algorithms, data, and code to translate business problems of moderate complexity.
- Works with a global, distributed team comprising users, stakeholders, and engineers within an agile framework.

Employer will not sponsor applicants for employment visa status.

Basic Qualifications (Required Skills/Experience):
- Experience in designing and implementing idiomatic RESTful APIs using the Spring framework (v6.0+) with Spring Boot (v3.0+) and Spring Security (v6.0+) in Java (v17+). Experience with additional languages (Scala/Kotlin/others) preferred.
- Working experience with RDBMS, basic SQL scripting and querying, specifically with SQL Server (2018+) and Teradata (v17+). Additional knowledge of schema/modelling/querying optimization preferred.
- Experience with TypeScript (v5+), JavaScript (ES6+), Angular (v15+), Material UI, amCharts (v5+).
- Experience working with ALM tools (Git, Gradle, SonarQube, Coverity, Docker, Kubernetes) driven by tests (JUnit, Mockito, Hamcrest, etc.).
- Experience in shell scripting (Bash/sh), CI/CD processes and tools (GitLab CI or similar), and OCI containers (Docker/Podman/Buildah, etc.).
- Data analysis and engineering experience with Apache Spark (v3+) in Scala, Apache Iceberg/Parquet, etc. Experience with Trino/Presto is a bonus.
- Familiarity with GCP/Azure (VMs, container runtimes, BLOB storage solutions) preferred but not mandatory.

Preferred Qualifications (Desired Skills/Experience):
- Strong backend experience (Java/Scala/Kotlin, etc.) with basic data analysis/engineering experience (Spark/Parquet, etc.), OR
- Basic backend experience (Java/Scala, etc.) with strong data analysis/engineering experience (Spark/Parquet, etc.), OR
- Moderate backend experience (Java/Kotlin, etc.) with strong frontend experience (Angular 15+ with SASS / Angular Material) and exposure to DevOps pipelines (GitLab CI).

Typical Education & Experience: Typically 4-8 years of related work experience or relevant military experience. An advanced degree (e.g., Bachelor's, Master's, etc.) is preferred but not required.

Relocation: This position does offer relocation within India.

Posted 2 weeks ago

Apply

3.0 - 6.0 years

5 - 15 Lacs

bengaluru, delhi / ncr, mumbai (all areas)

Work from Office

We're Hiring: Financial Analyst

You'll work closely with senior finance leaders, manage financial models, dashboards, and P&L consolidation, and provide insights that shape critical business decisions.

Location: Bangalore / Mumbai / Gurgaon (Remote, Hybrid option available)
Shift: 5:00 PM - 2:00 AM IST (aligned to U.S. team hours)
Duration: 12 Months | Level: II (3+ years experience)

What You'll Do
- Drive financial analysis, budgeting, forecasting & month-end reporting
- Consolidate revenue, headcount, and expenses into a P&L view
- Build & maintain dashboards (Power BI / Tableau) for leadership insights
- Prepare executive-ready presentations with data-driven storytelling
- Partner with U.S.-based leadership to deliver timely insights & variance analysis

What We're Looking For
- 3+ years of FP&A / financial analysis experience in an MNC
- Advanced Excel (financial modeling & large datasets)
- Strong skills in PowerPoint; SQL/Presto/Oracle EPM preferred
- Analytical thinker with attention to detail & ability to meet tight deadlines
- Collaborative team player with excellent communication skills

Why Join Us?
We believe in doing our best work where it works best for us. This role gives you the flexibility to work remotely or in-office (hybrid), while being part of a fast-paced, global finance team shaping the future of our business.

Apply now and be part of a team that's changing the way the world works! If interested, please share your updated resume at rama.c@acesoftlabs.com.

Regards,
Rama CH
Key Account Manager (KAM)
Acesoft Labs

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

karnataka

On-site

Flexing It is a freelance consulting marketplace that connects freelancers and independent consultants with organizations seeking independent talent. Our client, a global leader in energy management and automation, is currently seeking a Data Engineer to prepare data and make it available in an efficient and optimized format for various data consumers, including BI, analytics, and data science applications. As a Data Engineer, you will work with current technologies such as Apache Spark, Lambda & Step Functions, Glue Data Catalog, and Redshift on the AWS environment.

Key Responsibilities:
- Design and develop new data ingestion patterns into IntelDS Raw and/or Unified data layers based on the requirements and needs for connecting new data sources or building new data objects. Automate data pipelines to streamline the process.
- Implement DevSecOps practices by automating the integration and delivery of data pipelines in a cloud environment. Design and implement end-to-end data integration tests and CI/CD pipelines.
- Analyze existing data models, and identify performance optimizations for data ingestion and consumption to accelerate data availability within the platform and for consumer applications.
- Support client applications in connecting and consuming data from the platform, ensuring compliance with guidelines and best practices.
- Monitor the platform, debug detected issues and bugs, and provide necessary support.

Skills required:
- Minimum of 3 years of prior experience as a Data Engineer with expertise in Big Data and Data Lakes in a cloud environment.
- Bachelor's or Master's degree in computer science, applied mathematics, or equivalent.
- Proficiency in data pipelines, ETL, and BI, regardless of the technology.
- Hands-on experience with AWS services, including at least 3 of: Redshift, S3, EMR, CloudFormation, DynamoDB, RDS, Lambda.
- Familiarity with Big Data technologies and distributed systems such as Spark, Presto, or Hive.
- Proficiency in Python for scripting and object-oriented programming.
- Fluency in SQL for data warehousing, with experience in Redshift being a plus.
- Strong understanding of data warehousing and data modeling concepts.
- Familiarity with Git, Linux, and CI/CD pipelines is advantageous.
- Strong systems/process orientation with analytical thinking, organizational skills, and problem-solving abilities.
- Ability to self-manage and prioritize tasks in a demanding environment.
- Consultancy orientation and experience, with the ability to form collaborative working relationships across diverse teams and cultures.
- Willingness and ability to train and teach others.
- Proficiency in facilitating meetings and following up with action items.
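As a hedged illustration of automating such an ingestion pipeline on AWS, the sketch below starts an AWS Glue job with boto3 and polls until it finishes. The job name and region are hypothetical, and error handling and retries are omitted for brevity.

```python
# Hypothetical automation sketch: trigger a Glue ETL job and wait for a terminal state.
# Job name and region are placeholders; credentials come from the usual boto3 configuration chain.
import time
import boto3

glue = boto3.client("glue", region_name="eu-west-1")  # placeholder region

def run_glue_job(job_name: str, timeout_s: int = 3600) -> str:
    """Start a Glue job run and poll until it reaches a terminal state (or the timeout elapses)."""
    run_id = glue.start_job_run(JobName=job_name)["JobRunId"]
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        state = glue.get_job_run(JobName=job_name, RunId=run_id)["JobRun"]["JobRunState"]
        if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
            return state
        time.sleep(30)
    return "TIMEOUT"

if __name__ == "__main__":
    print(run_glue_job("raw_to_unified_ingestion"))  # hypothetical job name
```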

Posted 2 weeks ago

Apply

9.0 - 13.0 years

0 Lacs

karnataka

On-site

As a Principal Data Engineer at Autodesk, you will play a crucial role in the Product Access and Compliance team, which focuses on identifying and exposing non-compliant users of Autodesk software. Your main responsibility will be to develop best practices and make architectural decisions to enhance data processing and analytics pipelines. This is a significant strategic initiative for the company, and your contributions will directly impact the platform's reliability, resiliency, and scalability.

You should be someone who excels with autonomy and has a proven track record of driving long-term projects to completion. Your attention to detail and commitment to quality will be essential in ensuring the success of the data infrastructure at Autodesk. Working with a tech stack that includes Airflow, Hive, EMR, PySpark, Presto, Jenkins, Snowflake, Datadog, and various AWS services, you will collaborate closely with the Sr. Manager, Software Development in a hybrid position based in Bengaluru.

Your responsibilities will include modernizing and expanding existing systems, understanding business requirements to architect scalable solutions, developing CI/CD ETL pipelines, and owning data quality within your areas of expertise. You will also lead the design and implementation of complex data processing systems, provide mentorship to junior engineers, and ensure alignment with project goals and timelines.

To qualify for this role, you should hold a Bachelor's degree with at least 9 years of relevant industry experience in big data systems, data processing, and SQL data warehouses. You must have hands-on experience with large Hadoop projects, PySpark, and optimizing Spark applications. Strong programming skills in Python and SQL, knowledge of distributed data processing systems, and experience with public cloud platforms like AWS are also required. Additionally, familiarity with SQL, data modeling techniques, ETL/ELT design, workflow management tools, and BI tools will be advantageous.

At Autodesk, we are committed to creating a diverse and inclusive workplace where everyone can thrive. If you are passionate about leveraging data to drive meaningful impact and want to be part of a dynamic team that fosters continuous learning and improvement, we encourage you to apply and join us in shaping a better future for all.

Posted 2 weeks ago

Apply

6.0 - 11.0 years

6 - 10 Lacs

hyderabad

Work from Office

We are seeking a Senior Data Engineer for our Marketing team at Thomson Reuters. You will design and develop our data transformation initiatives as we build the data foundation to drive our marketing strategy, enhancing our internal and external customer experiences and personalization. This is a mission-critical role with substantial scope, complexity, and executive visibility, and it has a large opportunity for impact. You will play a critical role in ensuring that customer data is effectively managed and utilized to drive business insights, facilitating informed decision-making and helping Thomson Reuters rapidly scale our digital customer experiences.

About the Role

In this role as a Senior Data Engineer, you will:
- Independently own and manage assigned projects and meet deadlines, clearly communicating progress and barriers to your manager and stakeholders.
- Serve as a visible Subject Matter Expert on our Customer Data Platform, maintaining up-to-date awareness of industry trends, cutting-edge technologies, and best practices on relevant topics including unified customer profiles, deterministic and probabilistic matching, identity graphs, data enrichment, etc.
- Design and implement data ingestion pipelines to collect and ingest customer data into the Customer Data Platform from various sources. This involves setting up data pipelines, APIs, and ETL (Extract, Transform, Load) processes.
- Create and design data models, schemas, and database structures in Snowflake and the Customer Data Platform.
- Carry out comprehensive data analysis from various system sources to yield enhanced insights into customer behavior and preferences.
- Gather and analyze data from various touchpoints, including online interactions, transactional systems, and customer feedback channels, creating a comprehensive customer profile that presents a 360-degree view.
- Ensure the launch of new data, segmentation, and profile capabilities, as well as the evolutions of the platform, go smoothly. This includes testing, post-launch monitoring, and overall setup for long-term success.
- Collaborate with marketers and other stakeholders to understand their data needs and translate those needs into technical requirements.
- Actively identify and propose innovations in data practices that evolve capabilities, improve efficiency or standardization, and better support stakeholders.

Shift timings: 2 PM to 11 PM (IST). Work from office 2 days per week (mandatory).

About You

You're a fit for the role of Senior Data Engineer if your background includes:
- Bachelor's or Master's degree in data science, business, technology, or an equivalent field.
- Strong data engineering background with 6+ years of experience working on large data transformation projects related to customer data platforms, identity resolution, and identity graphs.
- Solid foundation in SQL and familiarity with other query engines, along with hands-on experience with Snowflake, AWS Cloud, dbt, and real-time APIs.
- Expertise in using Presto for querying data across multiple sources and Digdag for workflow management, including the ability to create, schedule, and monitor data workflows.
- Proficiency in configuring and implementing any industry-leading customer data platform, including data integration, segmentation, and activations, is a must.
- Experience using marketing data sources such as CRM (especially Salesforce), marketing automation platforms (especially Eloqua), and web tracking (Adobe Analytics) is a plus.
- Exposure to Gen AI, with the capability to leverage AI solutions to address complex data challenges.
- Excellent oral, written, and visual (PowerPoint slides) communication skills, especially in breaking down complex information into understandable pieces, telling stories with data, and translating technical concepts for non-technical audiences.
- Strong ability to organize, prioritize, and complete tasks with high attention to detail, even in the face of ambiguity and environmental barriers.
- Knowledge of marketing or digital domains and of the professional services industry, especially legal, tax, and accounting, is a plus.
- Experience working in iterative development and a solid grasp of agile practices.
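For illustration of the ingestion-and-modelling side of such a role, here is a hedged sketch using the snowflake-connector-python package: it creates a simple events table and copies staged JSON files into it. The account, credentials, warehouse, stage, and table names are invented for the example.

```python
# Hypothetical ingestion sketch: create a customer-events table in Snowflake and load staged JSON.
# Account, credentials, warehouse, stage, and table names are placeholders only.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345.eu-west-1",   # placeholder account locator
    user="etl_user",
    password="***",                # use a secrets manager in practice
    warehouse="ETL_WH",
    database="CDP",
    schema="RAW",
)

cur = conn.cursor()
cur.execute("""
    CREATE TABLE IF NOT EXISTS customer_events (
        customer_id STRING,
        event_type  STRING,
        event_ts    TIMESTAMP_NTZ,
        payload     VARIANT
    )
""")
cur.execute("""
    COPY INTO customer_events
    FROM @cdp_stage/customer_events/          -- placeholder external stage
    FILE_FORMAT = (TYPE = 'JSON')
    MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
""")
cur.close()
conn.close()
```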

Posted 2 weeks ago

Apply

5.0 - 8.0 years

4 - 7 Lacs

bengaluru

Work from Office

About The Role

Skill required: Delivery - Marketing Analytics and Reporting
Designation: I&F Decision Sci Practitioner Sr Analyst
Qualifications: Any Graduation
Years of Experience: 5 to 8 years

What would you do?
Data & AI: analytical processes and technologies applied to marketing-related data to help businesses understand and deliver relevant experiences for their audiences, understand their competition, measure and optimize marketing campaigns, and optimize their return on investment.

What are we looking for?
Data Analytics, with a specialization in the marketing domain.

Domain-specific skills:
- Familiarity with ad tech and B2B sales

Technical skills:
- Proficiency in SQL and Python
- Experience in efficiently building, publishing & maintaining robust data models & warehouses for self-serve querying and advanced data science & ML analytic purposes
- Experience in conducting ETL/ELT with very large and complicated datasets and handling DAG data dependencies
- Strong proficiency with SQL dialects on distributed or data lake style systems (Presto, BigQuery, Spark/Hive SQL, etc.), including SQL-based experience in nested data structure manipulation, windowing functions, query optimization, data partitioning techniques, etc. Knowledge of Google BigQuery optimization is a plus.
- Experience in schema design and data modeling strategies (e.g. dimensional modeling, data vault, etc.)
- Significant experience with dbt (or similar tools) and Spark-based (or similar) data pipelines
- General knowledge of Jinja templating in Python
- Hands-on experience with cloud provider integration and automation via CLIs and APIs

Soft skills:
- Ability to work well in a team
- Agility for quick learning
- Written and verbal communication

Roles and Responsibilities:
- In this role you are required to analyze and solve increasingly complex problems
- Your day-to-day interactions are with peers within Accenture
- You are likely to have some interaction with clients and/or Accenture management
- You will be given minimal instruction on daily work/tasks and a moderate level of instruction on new assignments
- Decisions that are made by you impact your own work and may impact the work of others
- In this role you would be an individual contributor and/or oversee a small work effort and/or team
- Please note that this role may require you to work in rotational shifts

Qualification: Any Graduation

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

haryana

On-site

Join our Transformation Liquidity Engineering team in Gurugram and collaborate with a peer network of technologists to play a key role in implementing critical liquidity calculations, creating data visualisations, and delivering data to downstream systems. At Macquarie, we bring together diverse people and empower them to shape various possibilities. As a global financial services group operating in 34 markets with 55 years of unbroken profitability, you will be part of a friendly and supportive team where everyone contributes ideas and drives outcomes regardless of their role.

In this role, you will leverage your deep understanding of big data technologies and enthusiasm for DevOps practices. You will be responsible for the full lifecycle of your data assets, from design and development to deployment and support, while also establishing templates, methods, and standards for implementation. Managing deadlines, articulating technical problems and ideas, and contributing to building better processes and practices will be key expectations. Your growth mindset, passion for learning, and ability to quickly adapt to innovative technologies will be crucial to your success.

Key Requirements:
- 5+ years of hands-on experience in building, implementing, and enhancing enterprise-scale data platforms.
- Proficiency in big data, with expertise in Spark, Python, Hive, SQL, Presto, storage formats like Parquet, and orchestration tools such as Apache Airflow.
- Skilled in developing configuration-based ETL pipelines and UI-driven tools for data process and calculation optimization (e.g., Dataiku).
- Proficient in creating data visualizations with Power BI and related data models.
- Knowledgeable in cloud environments (preferably AWS), with an understanding of EC2, S3, Linux, Docker, and Kubernetes.
- Background in finance or treasury, especially in Liquidity or Capital, is preferred.

We welcome individuals inspired to build a better future with us. If you are excited about the role or working at Macquarie, we encourage you to apply.

About Technology: Technology enables every aspect of Macquarie, for our people, our customers, and our communities. We are a global team passionate about accelerating the digital enterprise, connecting people and data, building platforms and applications, and designing tomorrow's technology solutions.

Our commitment to diversity, equity, and inclusion: We aim to provide reasonable adjustments to individuals who may need support during the recruitment process and through working arrangements. If you require additional assistance, please let us know in the application process.

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

delhi

On-site

Findem is the only talent data platform that combines 3D data with AI, automating and consolidating top-of-funnel activities across the entire talent ecosystem. By bringing together sourcing, CRM, and analytics into one place, Findem enables individuals' entire career history to be instantly accessible in a single click, unlocking unique insights about the market and competition. With automated workflows powered by 3D data, Findem provides a competitive advantage in talent lifecycle management, delivering continuous pipelines of top, diverse candidates and enhancing overall talent experiences. Findem transforms the way companies plan, hire, and manage talent, ultimately improving organizational success. To learn more, visit www.findem.ai.

We are seeking an experienced Big Data Engineer with 5-9 years of experience to join our team in Delhi, India (hybrid - 3 days onsite). The ideal candidate will be responsible for building, deploying, and managing various data pipelines, data lake, and Big Data processing solutions using Big Data and ETL technologies.

Responsibilities:
- Build data pipelines, Big Data processing solutions, and data lake infrastructure using various Big Data and ETL technologies.
- Assemble and process large, complex data sets from diverse sources like MongoDB, S3, Server-to-Server, Kafka, etc., using SQL and big data technologies.
- Develop analytical tools to generate actionable insights into customer acquisition, operational efficiency, and other key business metrics.
- Create interactive and ad-hoc query self-serve tools for analytics use cases.
- Design data models and data schema for performance, scalability, and functional requirements.
- Establish processes supporting data transformation, metadata, dependency, and workflow management.
- Research, experiment, and prototype new tools/technologies to drive successful implementations.

Skill Requirements:
- Strong proficiency in Python/Scala.
- Experience with Big Data technologies such as Spark, Hadoop, Athena/Presto, Redshift, Kafka, etc.
- Familiarity with various file formats like Parquet, JSON, Avro, ORC, etc.
- Proficiency in workflow management tools like Airflow and experience with batch processing, streaming, and message queues.
- Knowledge of visualization tools like Redash, Tableau, Kibana, etc.
- Experience working with structured and unstructured data sets.
- Strong problem-solving skills.

Good to have:
- Exposure to NoSQL databases like MongoDB.
- Familiarity with Cloud platforms such as AWS, GCP, etc.
- Understanding of Microservices architecture.
- Knowledge of Machine Learning techniques.

This full-time role comes with full benefits and offers equal opportunity. Findem is headquartered in the San Francisco Bay Area, with our India headquarters in Bengaluru.

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

haryana

On-site

Join the Corporate Operations Group team as a Data Engineer in the Group Treasury Transformation Liquidity Engineering team for our Gurugram office. You will be an integral part of a dynamic team of business and technology experts, collaborating with a robust network of technologists across our programme. Your role will involve working within a dedicated squad to ingest data from producers and implement essential liquidity calculations, all within a cutting-edge data platform. We place a strong emphasis on delivering a high-performing, robust, and stable platform that meets the business needs of both internal and external stakeholders.

At Macquarie, our advantage is bringing together diverse people and empowering them to shape all kinds of possibilities. We are a global financial services group operating in 31 markets and with 56 years of unbroken profitability. You'll be part of a friendly and supportive team where everyone - no matter what role - contributes ideas and drives outcomes.

In this role, you will leverage your expertise in building, implementing, and enhancing enterprise-scale data platforms. You will bring an in-depth knowledge of big data technologies and a strong desire to work in a DevOps environment, where you will have end-to-end accountability for designing, developing, testing, deploying, and supporting your data assets. Additionally, you will create templates, implementation methods, and standards to ensure consistency and quality. You will also be managing deadlines, articulating technical challenges and solutions, and contributing to the development of improved processes and practices. A growth mindset, passion for learning, and ability to quickly adapt to innovative technologies will be essential to your success in this role.

What You Offer:
- Expertise in Big Data technologies, specifically Spark, Python, Hive, SQL, Presto (or other query engines), big data storage formats (e.g., Parquet), and orchestration tools (e.g., Apache Airflow).
- Proficiency in developing configuration-based ETL pipelines and user-interface driven tools to optimize data processes and calculations (e.g., Dataiku).
- Experience in solution design, including the design of data models, data pipelines, and calculations, as well as presenting solution options and recommendations.
- Experience working in a cloud-based environment (ideally AWS), with a solid understanding of cloud computing concepts (EC2, S3), Linux, and containerization technologies (Docker and Kubernetes).
- A background in solution delivery within the finance or treasury business domains, particularly in areas such as Liquidity or Capital, is advantageous.

We love hearing from anyone inspired to build a better future with us. If you're excited about the role or working at Macquarie, we encourage you to apply.

About Technology: Technology enables every aspect of Macquarie, for our people, our customers, and our communities. We're a global team that is passionate about accelerating the digital enterprise, connecting people and data, building platforms and applications, and designing tomorrow's technology solutions.

Our commitment to diversity, equity, and inclusion: Our aim is to provide reasonable adjustments to individuals who may need support during the recruitment process and through working arrangements. If you require additional assistance, please let us know in the application process.

Posted 2 weeks ago

Apply
