
70 Iceberg Jobs

JobPe aggregates these listings for easy access, but you apply directly on the original job portal.

8.0 - 13.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

Role Overview: You will work in the Information Management team of Services Technologies, focusing on projects related to Big Data and public cloud adoption. This intermediate-level position involves participating in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. Your primary objective will be to contribute to applications systems analysis and programming activities.

Key Responsibilities:
- Conduct tasks related to feasibility studies, time and cost estimates, IT planning, risk technology, applications development, and model development, and establish and implement new or revised applications systems and programs to meet specific business needs or user areas.
- Monitor and control all phases of the development process, including analysis, design, construction, testing, and implementation, and provide user and operational support on applications to business users.
- Utilize in-depth specialty knowledge of applications development to analyze complex problems and issues, evaluate business processes, system processes, and industry standards, and make evaluative judgments.
- Recommend and develop security measures in post-implementation analysis of business usage to ensure successful system design and functionality.
- Consult with users/clients and other technology groups on issues, recommend advanced programming solutions, and support the installation of customer exposure systems.
- Ensure essential procedures are followed, help define operating standards and processes, and serve as an advisor or coach to new or lower-level analysts.
- Operate with a limited level of direct supervision, exercise independence of judgment and autonomy, and act as an SME to senior stakeholders and/or other team members.

Qualifications:
- 8-13 years of work experience with Big Data technologies such as Spark (Scala/Python), Kafka streaming, Hadoop, and HDFS, plus a solid understanding of Big Data architecture.
- Strong exposure to SQL and hands-on experience with Web APIs.
- Good understanding of data file formats such as Parquet, Avro, and Iceberg, as well as Impala and Hadoop.
- Experience with web services on Kubernetes, and with version control and CI/CD processes using Git, Jenkins, Harness, etc.
- Public cloud experience preferred, ideally AWS.
- Strong data analysis skills and the ability to manipulate data for business reporting.
- Experience working in an agile environment with fast-paced, changing requirements.
- Excellent planning and organizational skills and strong communication skills.
- Experience in systems analysis and programming of software applications, and in managing and implementing successful projects.
- Working knowledge of consulting/project management techniques and methods, and the ability to work under pressure and manage deadlines or unexpected changes in expectations or requirements.

Education:
- Bachelor's degree/University degree or equivalent experience.

(Note: This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.)
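To make the Spark/Iceberg part of this stack concrete, here is a minimal PySpark sketch that reads a raw Parquet feed and publishes it as an Iceberg table. It is illustrative only, not from the posting: the catalog name, warehouse path, dataset, and columns are assumptions, and the Iceberg runtime package must match your Spark and Scala versions.

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("parquet-to-iceberg")
    # Iceberg runtime and a Hadoop-based catalog; names and paths are placeholders
    .config("spark.jars.packages", "org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.5.0")
    .config("spark.sql.extensions", "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.demo", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.demo.type", "hadoop")
    .config("spark.sql.catalog.demo.warehouse", "hdfs:///warehouse/iceberg")
    .getOrCreate()
)

# Read a raw Parquet feed landed by an upstream process (path is hypothetical)
trades = spark.read.parquet("hdfs:///landing/trades/2024-06-01/")

# Basic cleanup before publishing to the lake
clean = trades.dropDuplicates(["trade_id"]).filter("notional IS NOT NULL")

# Create or replace the Iceberg table registered in the catalog
clean.writeTo("demo.finance.trades").using("iceberg").createOrReplace()
```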

Posted 23 hours ago

Apply

10.0 - 14.0 years

0 Lacs

Hyderabad, Telangana

On-site

As the Vice President of Engineering at Teradata, you will lead the India-based software development organization for the AI Platform Group. Your primary responsibility will be to execute the product roadmap for key technologies such as Vector Store, the Agent platform, Apps, user experience, and AI/ML-driven use cases at scale. Success in this role will involve building a world-class engineering culture, attracting and retaining top technical talent, accelerating product delivery, and driving innovation to bring measurable value to customers.

Key Responsibilities:
- Lead a team of 150+ engineers to help customers achieve outcomes with Data and AI
- Partner closely with Product Management, Product Operations, Security, Customer Success, and Executive Leadership
- Implement and scale Agile and DevSecOps methodologies
- Drive the development of agentic AI and AI at scale in a hybrid cloud environment
- Modernize legacy architectures into service-based systems using CI/CD and automation

Qualifications Required:
- 10+ years of senior leadership experience in product development, engineering, or technology within enterprise software product companies
- 3+ years in a VP Product or equivalent role managing large-scale technical teams in a growth market
- Experience with cloud platforms, Kubernetes, containerization, and microservices-based architectures
- Knowledge of data harmonization, data analytics for AI, and modern data stack technologies
- Strong background in enterprise security, data governance, and API-first design
- Master's degree in Engineering, Computer Science, or an MBA preferred

Teradata believes in empowering people with better information through its cloud analytics and data platform for AI. The company aims to uplift and empower customers to make better decisions by providing harmonized data, trusted AI, and faster innovation. Trusted by the world's top companies, Teradata helps improve business performance, enrich customer experiences, and integrate data across the enterprise.

Posted 4 days ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

As a candidate for the role in the Unified Intelligence Platform (UIP) team, you will be part of a mission to enable Salesforce teams to deeply understand and optimize their services and operations through data. The UIP is a modern, cloud-based data platform built with cutting-edge technologies like Spark, Trino, Airflow, DBT, Jupyter Notebooks, and more. Here are the details of the job responsibilities and qualifications we are looking for:

**Role Overview:**
You will be responsible for leading the architecture, design, development, and support of mission-critical data and platform services. Your role will involve driving self-service data pipelines, collaborating with product management teams, and architecting robust data solutions that enhance ingestion, processing, and quality. Additionally, you will be involved in promoting a service ownership model, developing data frameworks, implementing data quality services, building Salesforce-integrated applications, establishing CI/CD processes, and maintaining key components of the UIP technology stack.

**Key Responsibilities:**
- Lead the architecture, design, development, and support of mission-critical data and platform services
- Drive self-service, metadata-driven data pipelines, services, and applications
- Collaborate with product management and client teams to deliver scalable solutions
- Architect robust data solutions with security and governance
- Promote a service ownership model with telemetry and control mechanisms
- Develop data frameworks and implement data quality services
- Build Salesforce-integrated applications for data lifecycle management
- Establish and refine CI/CD processes for seamless deployment
- Oversee and maintain components of the UIP technology stack
- Collaborate with third-party vendors for issue resolution
- Architect data pipelines optimized for multi-cloud environments

**Qualifications Required:**
- Passionate about tackling big data challenges in distributed systems
- Highly collaborative and adaptable, with a strong foundation in software engineering
- Committed to engineering excellence and fostering transparency
- Embraces a growth mindset and actively engages in support channels
- Champions a Service Ownership model and minimizes operational overhead through automation
- Experience with advanced data lake engines like Spark and Trino is a plus

This is an opportunity to be part of a fast-paced, agile, and highly collaborative team that is defining the next generation of trusted enterprise computing. If you are passionate about working with cutting-edge technologies and solving complex data challenges, this role might be the perfect fit for you.

Posted 4 days ago

Apply

0.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Category: Software Engineering

About Salesforce
Salesforce is the #1 AI CRM, where humans with agents drive customer success together. Here, ambition meets action. Tech meets trust. And innovation isn't a buzzword - it's a way of life. The world of work as we know it is changing, and we're looking for Trailblazers who are passionate about bettering business and the world through AI, driving innovation, and keeping Salesforce's core values at the heart of it all. Ready to level up your career at the company leading workforce transformation in the agentic era? You're in the right place! Agentforce is the future of AI, and you are the future of Salesforce.

About the Role
The mission of the Unified Intelligence Platform (UIP) is to enable Salesforce teams to deeply understand and optimize their services and operations through data. UIP is a modern, trusted, turn-key data platform built with cutting-edge technologies and an exceptional user experience. Massive amounts of data are generated each day at Salesforce, so it is critical to process and store large volumes of data efficiently and enable users to discover and analyze the data easily. UIP is a modern, cloud-based data platform built on advanced data lake engines like Spark and Trino, incorporating a diverse suite of tools and technologies - including Airflow, DBT, Jupyter Notebooks, SageMaker, Iceberg, and OpenMetadata - for efficient data processing, storage, querying, and management. With curated datasets, we empower machine learning and AI use cases, enabling both model development and inference. Our team is fast-paced, agile, and highly collaborative, working across all areas of our tech stack to provide critical business services, support complex computing requirements, drive big data analytics, and pioneer cutting-edge engineering solutions in the cloud, defining the next generation of trusted enterprise computing.

Who are we looking for?
- Passionate about tackling big data challenges in distributed systems.
- Highly collaborative, working across teams to ensure customer success.
- Drives end-to-end projects that deliver high-performance, scalable, and maintainable solutions.
- Adaptable and versatile, taking on multiple roles as needed - Platform Engineer, Data Engineer, Backend Engineer, DevOps Engineer, or Support Engineer - for the platform and customer success.
- Strong foundation in software engineering, with the flexibility to work in any programming language.
- Committed to engineering excellence, consistently delivering high-quality products.
- Open and respectful communicator, fostering transparency and team alignment.
- Embraces a growth mindset, continuously learning and seeking self-improvement.
- Engages actively in support channels, providing insights and collaborating to support the community.
- Champions a Service Ownership model, minimizing operational overhead through automation, monitoring, and alerting best practices.

Job Responsibilities:
- Lead the architecture, design, development, and support of mission-critical data and platform services, ensuring full ownership and accountability.
- Drive multiple self-service, metadata-driven data pipelines, services, and applications to streamline ingestion from diverse data sources into a multi-cloud, petabyte-scale data platform.
- Collaborate closely with product management and client teams to capture requirements and deliver scalable, adaptable solutions that drive success.
- Architect robust data solutions that enhance ingestion, processing, quality, and discovery, embedding security and governance from the start.
- Promote a service ownership model, designing solutions with extensive telemetry and control mechanisms to streamline governance and operational management.
- Develop data frameworks to simplify recurring data tasks, ensure best practices, foster consistency, and facilitate tool migration.
- Implement advanced data quality services seamlessly within the platform, empowering data analysts, engineers, and stewards to continuously monitor and uphold data standards.
- Build Salesforce-integrated applications to monitor and manage the full data lifecycle from a unified interface.
- Establish and refine CI/CD processes for seamless deployment of platform services across cloud environments.
- Oversee and maintain key components of the UIP technology stack, including Airflow, Spark, Trino, Iceberg, and Kubernetes.
- Collaborate with third-party vendors to troubleshoot and resolve platform-related software issues.
- Architect and orchestrate data pipelines and platform services optimized for multi-cloud environments (e.g., AWS, GCP).

Unleash Your Potential
When you join Salesforce, you'll be limitless in all areas of your life. Our benefits and resources support you to find balance, and our AI agents accelerate your impact. Together, we'll bring the power of Agentforce to organizations of all sizes and deliver amazing experiences that customers love. Apply today to not only shape the future - but to redefine what's possible - for yourself, for AI, and the world.

Accommodations
If you require assistance due to a disability when applying for open positions, please submit a request via the accommodations request form.

Posting Statement
Salesforce is an equal opportunity employer and maintains a policy of non-discrimination with all employees and applicants for employment. What does that mean exactly? It means that at Salesforce, we believe in equality for all, and we believe we can lead the path to equality in part by creating a workplace that's inclusive and free from discrimination. Any employee or potential employee will be assessed on the basis of merit, competence, and qualifications - without regard to race, religion, color, national origin, sex, sexual orientation, gender expression or identity, transgender status, age, disability, veteran or marital status, political viewpoint, or other classifications protected by law. This policy applies to current and prospective employees, no matter where they are in their Salesforce employment journey. It also applies to recruiting, hiring, job assignment, compensation, promotion, benefits, training, assessment of job performance, discipline, termination, and everything in between. Recruiting, hiring, and promotion decisions at Salesforce are fair and based on merit. The same goes for compensation, benefits, promotions, transfers, reduction in workforce, recall, training, and education.
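Since this platform centers on querying Iceberg data through engines like Trino, here is a hedged sketch of what an interactive query against such a stack might look like from Python using the open-source trino client. The host, catalog, schema, table, and columns are invented for illustration and are not part of the posting.

```python
import trino

# Connection details are placeholders; a real deployment would use SSO/OAuth
conn = trino.dbapi.connect(
    host="trino.internal.example.com",
    port=443,
    user="uip-analyst",
    http_scheme="https",
    catalog="iceberg",
    schema="telemetry",
)

cur = conn.cursor()
# Aggregate one day of service telemetry from an Iceberg-backed table
cur.execute(
    """
    SELECT service_name, count(*) AS events
    FROM request_logs
    WHERE event_date = DATE '2024-06-01'
    GROUP BY service_name
    ORDER BY events DESC
    LIMIT 20
    """
)
for service_name, events in cur.fetchall():
    print(service_name, events)
```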

Posted 4 days ago

Apply

6.0 - 10.0 years

20 - 35 Lacs

Bengaluru

Hybrid

Key Skills: Hadoop, PySpark, Python, SQL, Apache Iceberg (niche), Hive

Requirements:
- Solid knowledge of the Hadoop ecosystem, including Hive, Iceberg, and Spark SQL, and proficiency in Python, Unix, and SQL
- Experience with Apache Kafka, Apache Flink, and other relevant streaming technologies
- Strong hands-on experience with Hive and SQL for querying and data transformation
- Proficiency in Python for data manipulation and automation
- Expertise in Apache Spark (batch and streaming)
- Deep understanding of the Hadoop ecosystem (HDFS, YARN, MapReduce)
- Experience working with Kafka for streaming data pipelines
- Experience with workflow orchestration tools (Airflow, Oozie, etc.)
- Knowledge of cloud-based big data platforms (AWS EMR, GCP Dataproc, Azure HDInsight)
- Familiarity with CI/CD pipelines and version control (Git)

Responsibilities:
- Design, develop, and maintain scalable data pipelines and ETL processes.
- Work with large datasets using Hadoop ecosystem tools (Hive, Spark).
- Build and optimize real-time and batch data processing solutions using Kafka and Spark Streaming.
- Write efficient, high-performance SQL queries to extract, transform, and load data.
- Develop reusable data frameworks and utilities in Python.
- Collaborate with data scientists, analysts, and product teams to deliver reliable data solutions.
- Monitor, troubleshoot, and optimize big data workflows for performance and cost efficiency.
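As a sketch of the "real-time and batch solutions using Kafka and Spark Streaming" responsibility, the snippet below reads a Kafka topic with Spark Structured Streaming and appends the parsed events to an Iceberg table. It assumes the session is already configured with an Iceberg catalog and the Spark Kafka connector is on the classpath; the brokers, topic, schema, and table name are placeholders, not details from this posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("kafka-to-iceberg-stream").getOrCreate()

# Illustrative event schema for the payload on the topic
schema = StructType([
    StructField("order_id", StringType()),
    StructField("status", StringType()),
    StructField("updated_at", TimestampType()),
])

# Read the Kafka topic as a stream (brokers and topic are placeholders)
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092")
    .option("subscribe", "order-events")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka values arrive as bytes; parse the JSON payload into columns
events = raw.select(from_json(col("value").cast("string"), schema).alias("e")).select("e.*")

# Append each micro-batch to an Iceberg table (catalog and table names assumed)
query = (
    events.writeStream.format("iceberg")
    .outputMode("append")
    .option("checkpointLocation", "hdfs:///checkpoints/order_events")
    .toTable("lake.sales.order_events")
)
query.awaitTermination()
```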

Posted 5 days ago

Apply

5.0 - 9.0 years

0 Lacs

Bangalore, Karnataka

On-site

As a Principal Data Engineer at FTSE Russell, you will play a crucial role in leading the development of foundational components for a lakehouse architecture on AWS. Your primary responsibility will involve driving the migration of existing data processing workflows to the new lakehouse solution. Collaboration across the Data Engineering organization is essential as you design and implement scalable data infrastructure and processes using cutting-edge technologies such as Python, PySpark, EMR Serverless, Iceberg, Glue, and the Glue Data Catalog.

Success in this role hinges on your deep technical expertise, exceptional problem-solving skills, and your ability to lead and mentor within an agile team. You will lead complex projects autonomously, setting a high standard for technical contributions while fostering an inclusive and open culture within development teams. Your strategic guidance on best practices in design, development, and implementation will ensure that solutions meet business requirements and technical standards.

In addition to project leadership and culture building, you will be responsible for data development and tool advancement. This includes writing high-quality, efficient code, developing necessary tools and applications, and leading the development of innovative tools and frameworks to enhance data engineering capabilities. Your role will also involve solution decomposition and design leadership, working closely with architects, Product Owners, and Dev team members to decompose solutions into Epics while establishing and enforcing best practices for coding standards, design patterns, and system architecture. Stakeholder relationship building and communication are crucial aspects of this role, as you build and maintain strong relationships with internal and external stakeholders, serving as an internal subject matter expert in software development.

To excel as a Principal Data Engineer, you should possess a Bachelor's degree in Computer Science, Software Engineering, or a related field. A master's degree or relevant certifications such as AWS Certified Solutions Architect or Certified Data Analytics would be advantageous. Proficiency in advanced programming, system architecture, and solution design, along with key skills in software development practices, Python, Spark, automation, CI/CD pipelines, cross-functional collaboration, technical leadership, and AWS Cloud Services, are essential for success in this role.

At FTSE Russell, we champion a culture committed to continuous learning, mentoring, and career growth opportunities while fostering a culture of inclusion for all employees. Join us in driving financial stability, empowering economies, and creating sustainable growth as part of our dynamic and diverse team. Your individuality, ideas, and commitment to sustainability will be valued as we work together to make a positive impact globally.
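To ground the AWS lakehouse stack this role describes, here is a minimal PySpark sketch of a migration-style job that registers an Iceberg table in the Glue Data Catalog from a legacy Parquet dataset. It is a hypothetical example, not FTSE Russell's implementation: the bucket names, catalog, database, table, and partition column are invented, and the job assumes the Iceberg AWS runtime jars are available (for example on EMR Serverless).

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

# Spark session using the Glue Data Catalog as an Iceberg catalog;
# the warehouse bucket and catalog name are placeholders.
spark = (
    SparkSession.builder
    .appName("lakehouse-migration-job")
    .config("spark.sql.extensions", "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.glue", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.glue.catalog-impl", "org.apache.iceberg.aws.glue.GlueCatalog")
    .config("spark.sql.catalog.glue.io-impl", "org.apache.iceberg.aws.s3.S3FileIO")
    .config("spark.sql.catalog.glue.warehouse", "s3://example-lakehouse/warehouse/")
    .getOrCreate()
)

# Read the legacy Parquet layout produced by the existing workflow
legacy = spark.read.parquet("s3://example-legacy-data/prices/")

# Publish it as a partitioned Iceberg table registered in the Glue catalog
(
    legacy.writeTo("glue.markets.prices")
    .using("iceberg")
    .partitionedBy(col("trade_date"))
    .createOrReplace()
)
```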

Posted 1 week ago

Apply

2.0 - 4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About this role: Wells Fargo is seeking a Software Engineer (Data Engineer). The Finance Technology team within Enterprise Functions Technology (EFT) is seeking a Data Engineer to join our CALM (Corporate Asset and Liability Management) team at Wells Fargo Corporate Treasury. As a Software Engineer in the CALM Data Engineering team, the candidate will play a pivotal role supporting the design, development, and maintenance of our metadata-driven data engineering frameworks. The candidate will work independently to deliver critical project tasks, focusing on building, enhancing, and troubleshooting robust data pipelines, APIs, and wrapper capabilities for the CALM project. This role is essential for ensuring best practices and strong validations during Data Center exit migrations and DPC onboarding. The candidate will collaborate with cross-functional teams to drive the implementation of scalable, high-performance data solutions using Python, SQL, Apache Spark, Iceberg, Dremio, and Autosys. Enterprise Finance & Technology is a collaborative, cross-functional, Agile organization that is looking for independent thinkers willing to drive innovative solutions for data delivery and data management.

In this role, you will:
- Participate in low to moderately complex initiatives and projects associated with the technology domain, including installation, upgrades, and deployment efforts
- Identify opportunities for service quality and availability improvements within the technology domain environment
- Design, code, test, debug, and document for low to moderately complex projects and programs associated with the technology domain, including upgrades and deployments
- Review and analyze technical assignments or challenges that are related to low to medium risk deliverables and that require research, evaluation, and selection of alternative technology domains
- Present recommendations for resolving issues, or escalate issues as needed to meet established service level agreements
- Exercise some independent judgment while also developing understanding of the given technology domain in reference to security and compliance requirements
- Provide information to technology colleagues, internal partners, and stakeholders

Required Qualifications:
- 2+ years of Software Engineering experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education

Desired Qualifications:
- At least 1+ years of experience working with any RDBMS
- At least 2+ years of experience in building big data pipelines
- At least 2+ years of experience working with Apache Spark, Hive, and Hadoop
- Strong experience with programming in Python and SQL, and a good understanding of bash scripting for data processing and automation
- Hands-on experience with Apache Spark for large-scale data processing
- Experience with Autosys or similar job scheduling/orchestration tools
- Experience working with CI/CD pipelines, improving code coverage, and remediating vulnerabilities
- 2+ years of experience in an Agile mode of working
- Solid understanding of the workings of REST APIs, Dremio, and object stores
- Proven ability to independently deliver complex project tasks and solutions
- Solid understanding of data engineering best practices, including validation and quality assurance
- Excellent troubleshooting and problem-solving skills
- Proficiency in data modeling and database design
- Experience working with open table formats such as Iceberg and Delta
- Knowledge of data governance, security, and compliance requirements
- Experience with GenAI use cases
- Experience with cloud data platforms (e.g., AWS, Azure, GCP)
- Exposure to financial services or asset and liability management domains

Job Expectations: This role is essential for ensuring best practices and strong validations during Data Center exit migrations and DPC onboarding. The candidate will collaborate with cross-functional teams to drive the implementation of scalable, high-performance data solutions using Python, SQL, Apache Spark, Iceberg, Dremio, and Autosys.

Posting End Date: 13 Sep 2025

We Value Equal Opportunity: Wells Fargo is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other legally protected characteristic. Employees support our focus on building strong customer relationships balanced with a strong risk-mitigating and compliance-driven culture, which firmly establishes those disciplines as critical to the success of our customers and company. They are accountable for execution of all applicable risk programs (Credit, Market, Financial Crimes, Operational, Regulatory Compliance), which includes effectively following and adhering to applicable Wells Fargo policies and procedures, appropriately fulfilling risk and compliance obligations, timely and effective escalation and remediation of issues, and making sound risk decisions. There is emphasis on proactive monitoring, governance, risk identification and escalation, as well as making sound risk decisions commensurate with the business unit's risk appetite and all risk and compliance program requirements.

Candidates applying to job openings posted in Canada: Applications for employment are encouraged from all qualified candidates, including women, persons with disabilities, aboriginal peoples, and visible minorities. Accommodation for applicants with disabilities is available upon request in connection with the recruitment process.

Applicants with Disabilities: To request a medical accommodation during the application or interview process, visit the accommodations page.

Drug and Alcohol Policy: Wells Fargo maintains a drug-free workplace. Please see our Drug and Alcohol Policy to learn more.

Wells Fargo Recruitment and Hiring Requirements:
a. Third-party recordings are prohibited unless authorized by Wells Fargo.
b. Wells Fargo requires you to directly represent your own experiences during the recruiting and hiring process.

Posted 1 week ago

Apply

6.0 - 11.0 years

25 - 37 Lacs

Bengaluru

Work from Office

Skills Required:
- Familiarity with data processing engines such as Apache Spark, Flink, or other big data tools.
- Design, develop, and implement robust data lake architectures on cloud platforms (AWS/Azure).
- Implement streaming and batch data pipelines using Apache Hudi, Apache Hive, and cloud-native services such as AWS Glue, Azure Data Lake, etc.
- Architect and optimize ingestion, compaction, partitioning, and indexing strategies in Apache Hudi.
- Develop scalable data transformation and ETL frameworks using Python, Spark, and Flink.
- Work closely with DataOps/DevOps to build CI/CD pipelines and monitoring tools for data lake platforms.
- Ensure data governance, schema evolution handling, lineage tracking, and compliance.
- Sound knowledge of Hive, Parquet/ORC formats, and the trade-offs between Delta Lake, Hudi, and Iceberg.
- Strong understanding of schema evolution, data versioning, and ACID guarantees in data lakes.
- Collaborate with analytics and BI teams to deliver clean, reliable, and timely datasets.
- Troubleshoot performance bottlenecks in big data processing workloads and pipelines.
- Experience with data governance tools and practices, including data cataloging, data lineage, and metadata management.
- Strong understanding of data integration and movement between different storage systems (databases, data lakes, data warehouses).
- Strong understanding of API integration for data ingestion, including RESTful services and streaming data.
- Experience in data migration strategies, tools, and frameworks for moving data from legacy on-premises systems to cloud-based solutions.
- Proficiency with data warehousing solutions (e.g., Google BigQuery, Snowflake).
- Expertise in data modeling tools and techniques (e.g., SAP Datasphere, EA Sparx).
- Strong knowledge of SQL and NoSQL databases (e.g., MongoDB, Cassandra).
- Familiarity with cloud platforms (e.g., AWS, Azure, Google Cloud).

Nice to Have:
- Experience with Apache Iceberg, Delta Lake
- Familiarity with Kinesis, Kafka, or any streaming platform
- Exposure to dbt, Airflow, or Dagster
- Experience in data cataloging, data governance tools, and column-level lineage tracking
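As an illustration of the Hudi ingestion, compaction, and partitioning concerns listed above, here is a minimal PySpark sketch of an upsert into a merge-on-read Hudi table with inline compaction enabled. The paths, keys, and table name are assumptions for the example, not part of this posting.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("hudi-upsert").getOrCreate()

# Incremental batch landed from an upstream source (path is a placeholder)
updates = spark.read.parquet("s3://example-landing/orders/latest/")

hudi_options = {
    "hoodie.table.name": "orders",
    "hoodie.datasource.write.recordkey.field": "order_id",
    "hoodie.datasource.write.precombine.field": "updated_at",
    "hoodie.datasource.write.partitionpath.field": "order_date",
    "hoodie.datasource.write.operation": "upsert",
    # Merge-on-read with inline compaction keeps read latency predictable
    "hoodie.datasource.write.table.type": "MERGE_ON_READ",
    "hoodie.compact.inline": "true",
    "hoodie.compact.inline.max.delta.commits": "5",
}

(
    updates.write.format("hudi")
    .options(**hudi_options)
    .mode("append")
    .save("s3://example-lake/hudi/orders/")
)
```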

Posted 1 week ago

Apply

12.0 - 19.0 years

25 - 35 Lacs

Pune, Bengaluru

Work from Office

Location: Bangalore
Experience: 12-18 years
Number of positions: 1
Contract duration: 6 months to 1 year (C2H)

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Maharashtra

On-site

Unlock your potential with Dassault Systèmes, a global leader in scientific software engineering, as a Big Data Engineer in Pune, Maharashtra!

Role Description & Responsibilities:
- Data Pipeline Development: Design, develop, and maintain robust ETL pipelines for batch and real-time data ingestion, processing, and transformation using Spark, Kafka, and Python.
- Data Architecture: Build and optimize scalable data architectures, including data lakes, data marts, and data warehouses, to support business intelligence, reporting, and machine learning.
- Data Governance: Ensure data reliability, integrity, and governance by enabling accurate, consistent, and trustworthy data for decision-making.
- Collaboration: Work closely with data analysts, data scientists, and business stakeholders to gather requirements, identify inefficiencies, and deliver scalable and impactful data solutions.
- Optimization: Develop efficient workflows to handle large-scale datasets, improving performance and minimizing downtime.
- Documentation: Create detailed documentation for data processes, pipelines, and architecture to support seamless collaboration and knowledge sharing.
- Innovation: Contribute to a thriving data engineering culture by introducing new tools, frameworks, and best practices to improve data processes across the organization.

Qualifications:
- Educational Background: Bachelor's degree in Computer Science, Engineering, or a related field.
- Professional Experience: 2-3 years of experience in data engineering, with expertise in designing and managing complex ETL pipelines.

Technical Skills:
- Proficiency in Python, PySpark, and Spark SQL for distributed and real-time data processing.
- Deep understanding of real-time streaming systems using Kafka.
- Experience with data lake and data warehousing technologies (Hadoop, HDFS, Hive, Iceberg, Apache Spark).
- Strong knowledge of relational and non-relational databases (SQL, NoSQL).
- Experience in cloud and on-premises environments for building and managing data pipelines.
- Experience with ETL tools like SAP BODS or similar platforms.
- Knowledge of reporting tools like SAP BO for designing dashboards and reports.
- Hands-on experience building end-to-end data frameworks and working with data lakes.

Analytical and Problem-Solving Skills: Ability to translate complex business requirements into scalable and efficient technical solutions.

Collaboration and Communication: Strong communication skills and the ability to work with cross-functional teams, including analysts, scientists, and stakeholders.

Location: Willingness to work from Pune (on-site).

What is in it for you?
- Work for one of the biggest software companies.
- Work in a culture of collaboration and innovation.
- Opportunities for personal development and career progression.
- The chance to collaborate with various internal users of Dassault Systèmes as well as stakeholders of various internal and partner projects.

Inclusion Statement: As a game-changer in sustainable technology and innovation, Dassault Systèmes is striving to build more inclusive and diverse teams across the globe. We believe that our people are our number one asset and we want all employees to feel empowered to bring their whole selves to work every day. It is our goal that our people feel a sense of pride and a passion for belonging. As a company leading change, it's our responsibility to foster opportunities for all people to participate in a harmonized Workforce of the Future.
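For the real-time ingestion leg of a Spark/Kafka ETL pipeline like the one described, the sketch below shows a plain-Python producer that applies a basic quality gate before publishing events to a Kafka topic. It uses the kafka-python package; the broker address, topic, and event fields are assumptions for illustration, not details from this posting.

```python
import json
from datetime import datetime, timezone

from kafka import KafkaProducer  # kafka-python package

# Broker address and topic are placeholders
producer = KafkaProducer(
    bootstrap_servers=["kafka-broker:9092"],
    value_serializer=lambda record: json.dumps(record).encode("utf-8"),
)

def publish_measurement(sensor_id: str, value: float) -> None:
    """Publish one validated measurement to the ingestion topic."""
    if value is None or value < 0:
        return  # simple quality gate before the event enters the pipeline
    event = {
        "sensor_id": sensor_id,
        "value": value,
        "observed_at": datetime.now(timezone.utc).isoformat(),
    }
    producer.send("sensor-measurements", value=event)

publish_measurement("press-042", 17.3)
producer.flush()
```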

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

You will be responsible for leading the development of foundational components for a lakehouse architecture on AWS and overseeing the migration of existing data processing workflows to the new lakehouse solution. Your role will involve collaborating with the Data Engineering team to design and implement scalable data infrastructure and processes using technologies such as Python, PySpark, EMR Serverless, Iceberg, Glue, and the Glue Data Catalog. The primary objective of this position is to ensure a successful migration and establish robust data quality governance across the new platform to enable reliable and efficient data processing.

To excel in this position, you will need to demonstrate deep technical expertise, exceptional problem-solving skills, and the ability to lead and mentor within an agile team. Your key responsibilities will include leading complex projects independently, fostering an inclusive and open culture within development teams, and setting high standards for technical contributions. You will provide strategic guidance on best practices in design, development, and implementation to ensure that solutions meet business requirements and technical standards. Additionally, you will be involved in writing high-quality, efficient code, developing tools and applications to address complex business needs, and leading the development of innovative tools and frameworks to enhance data engineering capabilities.

Collaboration with architects, Product Owners, and Dev team members will be essential to decompose solutions into Epics, lead the design and planning of these components, and drive the migration of existing data processing workflows to the lakehouse architecture leveraging Iceberg capabilities. You will establish and enforce best practices for coding standards, design patterns, and system architecture, utilizing existing design patterns to develop reliable solutions while also recognizing when to adapt or avoid patterns to prevent anti-patterns.

In terms of qualifications and experience, a Bachelor's degree in Computer Science, Software Engineering, or a related field is essential. A master's degree or relevant certifications such as AWS Certified Solutions Architect or Certified Data Analytics is advantageous. Proficiency in advanced programming, system architecture, and solution design is required. You should possess key skills in advanced software development practices, Python and Spark expertise, automation and CI/CD pipelines, cross-functional collaboration and communication, technical leadership and mentorship, domain expertise in AWS Cloud Services, and quality assurance and continuous improvement practices.

Your role would involve working within a culture committed to continuous learning, mentoring, career growth opportunities, and inclusion for all employees. You will be part of a collaborative and creative environment where new ideas are encouraged and sustainability is a key focus. The values of Integrity, Partnership, Excellence, and Change underpin the organization's purpose of driving financial stability, empowering economies, and enabling sustainable growth. LSEG offers various benefits, including healthcare, retirement planning, paid volunteering days, and wellbeing initiatives.

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

Join us as a Data Engineer responsible for supporting the successful delivery of Location Strategy projects to plan, budget, and agreed quality and governance standards. You'll spearhead the evolution of our digital landscape, driving innovation and excellence, and harness cutting-edge technology to revolutionize our digital offerings, ensuring unparalleled customer experiences.

To be successful as a Data Engineer, you should have experience with:
- Hands-on experience in PySpark and strong knowledge of DataFrames, RDDs, and Spark SQL.
- Hands-on experience in developing, testing, and maintaining applications on AWS Cloud.
- A strong hold on the AWS data analytics technology stack (Glue, S3, Lambda, Lake Formation, Athena).
- Design and implementation of scalable and efficient data transformation/storage solutions using Snowflake.
- Experience in data ingestion to Snowflake for different storage formats such as Parquet, Iceberg, JSON, and CSV.
- Experience in using DBT (Data Build Tool) with Snowflake for ELT pipeline development.
- Experience in writing advanced SQL and PL/SQL programs.
- Hands-on experience building reusable components using Snowflake and AWS tools/technology.
- Work on at least two major project implementations.
- Exposure to data governance or lineage tools such as Immuta and Alation is an added advantage.
- Experience in using orchestration tools such as Apache Airflow or Snowflake Tasks is an added advantage.
- Knowledge of the Ab Initio ETL tool is a plus.

Some other highly valued skills may include:
- Ability to engage with stakeholders, elicit requirements and user stories, and translate requirements into ETL components.
- Ability to understand the infrastructure setup and provide solutions, either individually or working with teams.
- Good knowledge of Data Marts and Data Warehousing concepts.
- Good analytical and interpersonal skills.
- Implementation of a cloud-based enterprise data warehouse with multiple data platforms, along with Snowflake and a NoSQL environment, to build a data movement strategy.

You may be assessed on key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking, and digital and technology, as well as job-specific technical skills. The role is based out of Chennai.

Purpose of the role: To build and maintain the systems that collect, store, process, and analyze data, such as data pipelines, data warehouses, and data lakes, ensuring that all data is accurate, accessible, and secure.

Accountabilities:
- Build and maintain data architecture pipelines that enable the transfer and processing of durable, complete, and consistent data.
- Design and implement data warehouses and data lakes that manage the appropriate data volumes and velocity and adhere to the required security measures.
- Develop processing and analysis algorithms fit for the intended data complexity and volumes.
- Collaborate with data scientists to build and deploy machine learning models.

Analyst Expectations: To perform prescribed activities in a timely manner and to a high standard, consistently driving continuous improvement. The role requires in-depth technical knowledge and experience in the assigned area of expertise and a thorough understanding of the underlying principles and concepts within that area. Analysts lead and supervise a team, guiding and supporting professional development, allocating work requirements, and coordinating team resources.

If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviors to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviors are: L - Listen and be authentic, E - Energize and inspire, A - Align across the enterprise, D - Develop others. An individual contributor, by contrast, develops technical expertise in the work area, acting as an advisor where appropriate, has an impact on the work of related teams within the area, and partners with other functions and business areas. The role takes responsibility for the end results of a team's operational processing and activities, escalates breaches of policies and procedures appropriately, and takes responsibility for embedding new policies and procedures adopted due to risk mitigation. It also involves advising and influencing decision-making within your own area of expertise, taking ownership of managing risk and strengthening controls in relation to the work you own or contribute to, and delivering your work and areas of responsibility in line with relevant rules, regulations, and codes of conduct. You will maintain and continually build an understanding of how your sub-function integrates with the function, alongside knowledge of the organization's products, services, and processes within the function, and demonstrate an understanding of how areas coordinate and contribute to the achievement of the objectives of the organization's sub-function. You will resolve problems by identifying and selecting solutions through the application of acquired technical experience, guided by precedents, guide and persuade team members, communicate complex or sensitive information, and act as a contact point for stakeholders outside of the immediate function, while building a network of contacts outside the team and external to the organization.

All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence, and Stewardship - our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset - to Empower, Challenge, and Drive - the operating manual for how we behave.
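As a concrete illustration of "data ingestion to Snowflake for different storage formats such as Parquet", the snippet below uses the Snowflake Python connector to run a COPY INTO from an external stage. It is a hedged sketch, not this team's actual pipeline: the account, warehouse, database, stage, and table names are placeholders, and credentials would normally come from a secrets manager rather than being hard-coded.

```python
import snowflake.connector

# Connection parameters are placeholders; use a vault/secrets manager in practice
conn = snowflake.connector.connect(
    account="example_account",
    user="etl_service",
    password="********",
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="RAW",
)

cur = conn.cursor()
# Load Parquet files from an external stage into a raw landing table
cur.execute(
    """
    COPY INTO RAW.TRADES
    FROM @LANDING_STAGE/trades/
    FILE_FORMAT = (TYPE = PARQUET)
    MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
    """
)
print(cur.fetchall())  # per-file load results returned by COPY INTO
cur.close()
conn.close()
```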

Posted 1 week ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Chennai

Hybrid

Senior Data Warehouse Engineer

Pando (www.pando.ai) is pioneering the future of autonomous logistics with innovative AI capabilities. Trusted by Fortune 500 enterprises with global customers across North America, Europe, and Asia Pacific, we are leading the global disruption of supply chain software, with our AI-powered, no-code, unified platform empowering the autonomous supply chain. We have been recognized by Gartner for our transportation management capabilities, by the World Economic Forum (WEF) as a Technology Pioneer, by G2 as a Market Leader in Freight Management, and named one of the fastest-growing technology companies by Deloitte.

Why Pando? We are one of the fastest-growing companies reimagining supply chain and logistics for manufacturers and retailers scaling up globally. We are a growing team, unrelenting and enthusiastic about building great products. We have folks who are pragmatic, imaginative, or a quirky combination of both. We yearn for purpose in our work and support each other to grow. We work extremely hard with people we respect and admire, and we play to win.

As a Senior AI and Data Warehouse Engineer at Pando, you will be responsible for building and scaling the data and AI services team. You will drive the design and implementation of highly scalable, modular, and reusable data pipelines, leveraging big data technologies and low-code implementations. This is a senior leadership position where you will work closely with cross-functional teams to deliver solutions that power advanced analytics, dashboards, and AI-based insights.

Key Responsibilities:
- Lead the development of scalable, high-performance data pipelines using PySpark or other big data ETL pipeline technologies.
- Drive data modeling efforts for analytics, dashboards, and knowledge graphs.
- Oversee the implementation of Parquet-based data lakes.
- Work on OLAP databases, ensuring optimal data structures for reporting and querying.
- Architect and optimize large-scale enterprise big data implementations with a focus on modular and reusable low-code libraries.
- Collaborate with stakeholders to design and deliver AI and DWH solutions that align with business needs.
- Mentor and lead a team of engineers, building out the data and AI services organization.

Requirements:
- 4 to 6 years of experience in big data and AI technologies, with expertise in PySpark or similar big data ETL pipeline technologies.
- Strong proficiency in SQL and OLAP database technologies.
- First-hand experience with data modeling for analytics, dashboards, and knowledge graphs.
- Proven experience with Parquet-based data lake implementations.
- Expertise in building highly scalable, high-volume data pipelines.
- Experience with modular, reusable, low-code-based implementations.
- Involvement in large-scale enterprise big data implementations.
- Initiative-taker with strong motivation and the ability to lead a growing team.

Preferred:
- Experience leading a team or building out a new department.
- Experience with cloud-based data platforms and AI services.
- Familiarity with supply chain technology or fulfilment platforms is a plus.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

maharashtra

On-site

As a Lead Software Engineer III at JPMorgan Chase within the AI/ML & Data Platforms team, you play a vital role in an agile team dedicated to enhancing, building, and delivering cutting-edge technology products with trust and reliability. Your responsibilities encompass executing innovative software solutions, designing and developing high-quality code, and troubleshooting technical issues with a forward-thinking mindset. Your expertise contributes to improving operational stability and automating remediation processes, ensuring the scalability and security of software applications. You will lead evaluation sessions with external vendors and internal teams, fostering discussions on architectural designs and technological applicability. As a core member of the Software Engineering community, you will champion the adoption of new technologies and drive a culture of diversity, equity, inclusion, and respect.

**Job Responsibilities:**
- Execute creative software solutions, design, development, and technical troubleshooting with a focus on innovative approaches.
- Develop secure, high-quality production code, review and debug code, and identify opportunities for automation to enhance operational stability.
- Lead evaluation sessions with external vendors and internal teams to assess architectural designs and technical implementations.
- Drive awareness and adoption of new technologies within the Software Engineering community.
- Contribute to a culture of diversity, equity, inclusion, and respect.

**Required Qualifications, Capabilities, and Skills:**
- Formal training or certification in software engineering concepts with 3+ years of applied experience.
- Hands-on experience in system design, application development, testing, and operational stability.
- Proficiency in automated API and UI testing using technologies like RestAssured and Selenium.
- Advanced knowledge of languages and frameworks such as Java, Selenium, and Cucumber.
- Experience with performance testing tools like JMeter and BlazeMeter.
- Proficiency in automation, continuous delivery, and the Software Development Life Cycle.
- Understanding of agile methodologies, CI/CD, application resiliency, security, and technical processes in cloud, AI, and machine learning disciplines.

**Preferred Qualifications, Capabilities, and Skills:**
- Independently design, develop, test, and deliver test automation solutions supporting UAT testing using Java, Selenium, and Cucumber.
- Familiarity with REST and SOAP web services and API testing/automation.
- Knowledge of databases like Trino, Iceberg, Snowflake, and Postgres.
- Collaborative nature, ability to build strong relationships, strategize process improvements, and a results-oriented mindset.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

You are a highly skilled Data Modeler with expertise in Iceberg and Snowflake, responsible for designing and optimizing data models for scalable and efficient data architectures. Working closely with cross-functional teams, you ensure data integrity, consistency, and performance across platforms.

Your key responsibilities include designing and implementing robust data models tailored to meet business and technical requirements. Leveraging Starburst, Iceberg, and Snowflake, you build scalable and high-performance data architectures, optimize query performance, and ensure efficient data storage strategies. Collaboration with data engineering and BI teams is essential to define data requirements and align them with business objectives. Additionally, you conduct data profiling, analysis, and quality assessments to maintain data accuracy and reliability, document and maintain data lineage and governance processes, and keep up to date on emerging technologies and industry best practices for data modeling and analytics.

Qualifications:
- Bachelor's or master's degree in Computer Science, Data Science, or a related field.
- 5+ years of experience in data modeling, data architecture, and database design.
- Hands-on expertise with Starburst, Iceberg, and Snowflake platforms.
- Strong SQL skills and experience with ETL/ELT workflows.
- Familiarity with data lakehouse architecture and modern data stack principles.
- Knowledge of data governance, security, and compliance practices.
- Excellent problem-solving and communication skills.

Preferred Skills:
- Experience with other BI and analytics tools like Tableau, Qlik Sense, or Power BI.
- Knowledge of cloud platforms like AWS, Azure, or GCP.
- Knowledge of Hadoop.
- Familiarity with data virtualization and federation tools.
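To illustrate the kind of modeling decision this role involves on the Iceberg side, here is a hedged Spark SQL sketch of a fact table that uses Iceberg's hidden partitioning, so queries can filter by timestamp and region without knowing the physical layout. The catalog, schema, table, columns, and table property are assumptions for the example.

```python
from pyspark.sql import SparkSession

# Assumes the session is already configured with an Iceberg catalog named "lake"
spark = SparkSession.builder.appName("orders-model").getOrCreate()

spark.sql(
    """
    CREATE TABLE IF NOT EXISTS lake.sales.fact_orders (
        order_id        BIGINT,
        customer_id     BIGINT,
        customer_region STRING,
        order_ts        TIMESTAMP,
        amount          DECIMAL(18, 2)
    )
    USING iceberg
    PARTITIONED BY (days(order_ts), customer_region)
    TBLPROPERTIES ('write.target-file-size-bytes' = '134217728')
    """
)
```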

Posted 1 week ago

Apply

7.0 - 11.0 years

0 Lacs

Karnataka

On-site

We have an exciting and rewarding opportunity for you to take your software engineering career to the next level. As a Software Engineer III at JPMorgan Chase within the Commercial & Investment Bank, you serve as a seasoned member of an agile team, designing and delivering trusted, market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm's business objectives.

You will execute software solutions, design, development, and technical troubleshooting with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems. Creating secure, high-quality production code and maintaining algorithms that run synchronously with appropriate systems will be part of your tasks. You will produce architecture and design artifacts for complex applications while being accountable for ensuring design constraints are met by software code development. Additionally, you will gather, analyze, synthesize, and develop visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems. Proactively identifying hidden problems and patterns in data and using these insights to drive improvements to coding hygiene and system architecture will be crucial. You will also contribute to software engineering communities of practice and events that explore new and emerging technologies, adding to a team culture of diversity, equity, inclusion, and respect.

Required qualifications, capabilities, and skills include strength in AWS services such as Redshift, Glue, and S3, Terraform for infrastructure setup, and Python and ETL development. Formal training or certification in software engineering concepts is preferred, along with hands-on practical experience in system design, application development, testing, and operational stability. Proficiency in coding in one or more languages, 7+ years of experience in developing, debugging, and maintaining code in a large corporate environment with one or more modern programming languages and database querying languages, and overall knowledge of the Software Development Life Cycle are essential. A solid understanding of agile methodologies such as CI/CD, application resiliency, and security is required, along with demonstrated knowledge of software applications and technical processes within a technical discipline (e.g., cloud, artificial intelligence, machine learning, mobile, etc.).

Preferred qualifications, capabilities, and skills include familiarity with modern front-end technologies and exposure to cloud technologies.

Posted 2 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Senior AWS Data Engineer - Vice President at Barclays, you will play a pivotal role in shaping the future by spearheading the evolution of the Reference Data Services function. You will work collaboratively with a team of engineers to oversee the engineering process from strategy and design to build, documentation, and testing of software components. Your responsibilities will extend to working closely with colleagues in the cloud and middleware organization to bring products to life in a modern, developer-focused environment. Effective stakeholder management, leadership, and decision-making skills are essential to support business strategy and risk management.

To excel in this role, you should possess experience with AWS cloud services such as S3, Glue, Athena, Lake Formation, and CloudFormation. Proficiency in Python at a senior level for data engineering and automation purposes is crucial. Additionally, familiarity with ETL frameworks, data transformation, and data quality tools will be beneficial. Highly valued skills may include an AWS Data Engineer certification, prior experience in the banking or financial services domain, expertise in IAM and permissions management in AWS cloud, and proficiency with tools such as Databricks, Snowflake, Starburst, and Iceberg.

Your primary goal will be to build and maintain systems that collect, store, process, and analyze data effectively, ensuring accuracy, accessibility, and security. This involves developing data architecture pipelines, designing and implementing data warehouses and data lakes, and creating processing and analysis algorithms tailored to the data complexity and volumes. Collaboration with data scientists to build and deploy machine learning models will also be part of your responsibilities.

As a Vice President, you are expected to contribute to setting strategy, driving requirements, and making recommendations for change. Managing resources, budgets, and policies, delivering continuous improvements, and addressing policy breaches are key aspects of this role. If you have leadership responsibilities, you are required to demonstrate leadership behaviors that create an environment for colleagues to thrive and deliver excellent results consistently. For individual contributors, being a subject matter expert within your discipline, guiding technical direction, leading collaborative assignments, and mentoring less experienced specialists are essential.

Your role will be based in Pune, India, and you will collaborate with key stakeholders, functional leadership teams, and senior management to provide insights on functional and cross-functional areas of impact and alignment. Managing and mitigating risks through assessment, demonstrating leadership in managing risks, and strengthening controls will be critical. You are expected to have a comprehensive understanding of organizational functions to contribute to achieving business goals. Building and maintaining relationships with internal and external stakeholders, using influencing and negotiating skills to achieve outcomes, is also part of your responsibilities.

All colleagues are expected to uphold the Barclays Values of Respect, Integrity, Service, Excellence, and Stewardship, along with the Barclays Mindset to Empower, Challenge, and Drive. These values and mindset serve as our moral compass and operating manual for behavior.

Posted 2 weeks ago

Apply

2.0 - 7.0 years

8 - 18 Lacs

Pune

Hybrid

Role: Data Engineer II

A growing AdTech organization is seeking a Data Engineer II to join a small, highly influential Data Engineering team. In this role, you will be responsible for evolving and optimizing high-volume, low-latency data pipeline architecture, as well as improving data flow and collection processes across multiple teams. The ideal candidate is an experienced data pipeline builder and data wrangler who thrives on optimizing systems and building scalable solutions from the ground up. You will support software engineers, product managers, business intelligence analysts, and data scientists on data initiatives while ensuring consistent application of best practices for data delivery architecture across projects. This role is well suited to self-starters eager to optimize or re-design modern data architectures to support next-generation products and initiatives.

Responsibilities:
- Design and maintain high-throughput data platform architecture handling hundreds of billions of daily events.
- Explore, refine, and assemble large, complex data sets aligned with business and product needs.
- Identify and implement internal process improvements, such as automation, data delivery optimization, and infrastructure re-design for scalability.
- Build infrastructure for optimal extraction, transformation, and loading (ETL) of data from diverse sources using Spark, EMR, Snowpark, Kafka, and related technologies.
- Collaborate with stakeholders across distributed teams to resolve data-related issues and support infrastructure requirements.
- Translate business requirements into clear technical solutions for both technical and non-technical audiences.

Qualifications:
- 2+ years of experience in a Data Engineer role.
- Bachelor's degree (or higher) in Computer Science or a related engineering field.
- Proven experience building and optimizing big data pipelines, architectures, and datasets.
- Strong working knowledge of Databricks/Spark and associated APIs.
- Proficiency in programming/scripting with Python, Java, or Scala.
- Experience with relational databases, SQL authoring/optimization, and working across multiple database technologies.
- Hands-on experience with AWS cloud services such as EC2, EMR, and RDS.
- Familiarity with NoSQL data stores (e.g., Elasticsearch, Apache Druid).
- Experience with data pipeline and workflow management tools such as Airflow.
- Ability to perform root cause analysis on complex data and processes to solve business problems and identify improvements.
- Strong experience with unstructured and semi-structured data formats (JSON, Parquet, Iceberg, Avro, Protobuf).
- Deep knowledge of data transformation, data structures, metadata, dependency, and workload management.
- Demonstrated ability to process, manipulate, and extract insights from large datasets.
- Working knowledge of stream processing, message queuing, and highly scalable big data storage systems.
- Experience collaborating with cross-functional teams in fast-paced environments.

Preferred Skills:
- Experience with streaming systems such as Kafka, Spark Streaming, or Kafka Streams.
- Knowledge of Snowflake/Snowpark.
- Familiarity with DBT.
- Exposure to AdTech industry data and systems.

Contact: Gloria Dias, Research Associate, PERSOL India (Gloria.Dias@persolapac.com, persolindia.com, Pune, India).
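Since workflow management with Airflow is called out above, here is a minimal sketch of a daily extract-transform-load DAG (recent Airflow 2.x assumed). The DAG id, task names, and callables are placeholders; in a real pipeline the tasks would submit Spark/EMR jobs or warehouse loads rather than no-op functions.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_events(**_):
    """Placeholder: pull yesterday's raw ad events from the source bucket."""

def transform_events(**_):
    """Placeholder: submit the Spark job that aggregates impressions and clicks."""

def load_to_warehouse(**_):
    """Placeholder: copy the aggregated output into the warehouse."""

with DAG(
    dag_id="daily_ad_events",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_events)
    transform = PythonOperator(task_id="transform", python_callable=transform_events)
    load = PythonOperator(task_id="load", python_callable=load_to_warehouse)

    extract >> transform >> load
```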

Posted 2 weeks ago

Apply

12.0 - 16.0 years

0 Lacs

karnataka

On-site

As an Advisory Consultant at Dell Technologies, you will play a crucial role in delivering consultative business and technical services for complex customer-facing consulting engagements related to digital transformation. Your responsibilities will involve collaborating with Global Pre-Sales, Account Management, and Solutioning Teams to deploy, administer, and configure digital transformation software stacks. Being one of the senior technical members in the Digital Transformation Practice, you will earn customer trust through competence, technical acumen, consulting expertise, and partnership. You will guide and oversee other team members, providing technical grooming activities for their skill development. Your role will require expert customer-facing skills, leadership qualities, and the ability to communicate technical processes effectively.

Your key responsibilities will include exploring customers' Data and Analytics opportunities, driving digital transformation within customer organizations, architecting unified Data Management strategies, and implementing end-to-end data engineering pipelines. Additionally, you will collaborate with various stakeholders to support deal closures and contribute to the growth of the practice.

To excel in this role, you should have over 12 years of experience in the IT industry, preferably with a degree in computer science or engineering. You must possess a minimum of 5 years of hands-on experience with big data technologies like Hadoop and Spark, strong programming skills in languages such as Python, Java, or Scala, and proficiency in SQL and query optimization. Experience in developing cloud-based applications, working with different databases, and familiarity with message formats and distributed querying solutions will be essential for success. Desirable qualifications include experience with containerization technologies like Docker and Kubernetes, as well as engaging with pre-sales and sales teams to create solutions for customers seeking digital transformation and AI/Edge solutions.

At Dell Technologies, we believe in the power of each team member to make a significant impact. If you are eager to grow your career with cutting-edge technology and join a diverse and innovative team, we invite you to be part of our journey to build a future that benefits everyone. Application closing date: 1 May 2025,

Posted 2 weeks ago

Apply

0.0 years

0 Lacs

bengaluru, karnataka, india

On-site

Company Overview: FTSE Russell, part of the London Stock Exchange Group, is an essential index partner for a changing world, providing category-defining indices across asset classes and investment objectives to create new possibilities for the global investment community. FTSE Russell's expertise and products are used extensively by institutional and retail investors globally.

Job Summary: In this key leadership role, you will lead the development of foundational components for a lakehouse architecture on AWS and drive the migration of existing data processing workflows to the new lakehouse solution. You will work across the Data Engineering organisation to design and implement scalable data infrastructure and processes using technologies such as Python, PySpark, EMR Serverless, Iceberg, Glue and Glue Data Catalog. The main goal of this position is to ensure successful migration and establish robust data quality governance across the new platform, enabling reliable and efficient data processing. Success in this role requires deep technical expertise, exceptional problem-solving skills, and the ability to lead and mentor within an agile team. (An illustrative PySpark/Iceberg sketch of this kind of stack follows this listing.)

Key Accountabilities: Project Leadership and Culture Building: Leads complex projects autonomously, fostering an inclusive and open culture within development teams. Sets a high standard for technical contributions while promoting a collaborative environment that encourages knowledge sharing and innovation. Technical Expertise and Advisory: Demonstrates profound technical knowledge of AWS data services and engineering practices. Provides strategic guidance on best practices in design, development, and implementation, ensuring solutions meet business requirements and technical standards. Data Development and Tool Advancement: Writes high-quality, efficient code and develops necessary tools and applications to address complex business needs. Leads the development of innovative tools and frameworks to enhance data engineering capabilities. Solution Decomposition and Design Leadership: Collaborates closely with architects, Product Owners, and Dev team members to decompose solutions into Epics, leading the design and planning of these components. Drives the migration of existing data processing workflows to the Lakehouse architecture, leveraging Iceberg capabilities. Establishes and enforces best practices for coding standards, design patterns, and system architecture. Utilizes existing design patterns to develop reliable solutions, while also recognizing when to adapt or avoid patterns to prevent anti-patterns. Stakeholder Relationship Building and Communication: Builds and maintains strong relationships with internal and external stakeholders, collaborating across teams. Serves as an internal subject matter expert in software development, advising stakeholders on technical issues and best practices. Applies deep technical expertise to assess complex challenges and propose strategic solutions. Leads technical discussions, mentors team members, and fosters a culture of continuous learning and innovation.

Qualifications and Experience: Bachelor's degree in Computer Science, Software Engineering, or a related field is essential. A master's degree or relevant certifications (e.g., AWS Certified Solutions Architect, Certified Data Analytics) is advantageous. Advanced Programming Proficiency: Deep technical knowledge of data engineering solutions and practices. Implementation of data pipelines using tools like AWS Glue, AWS Lambda, and AWS Step Functions.
Proficient in Python and familiar with a variety of development technologies. This knowledge enables the Principal Data Engineer to adapt solutions to project-specific needs, apply best practices, and identify when patterns are appropriate or should be avoided. System Architecture and Solution Design: Extensive experience in software architecture and solution design, including microservices, distributed systems, and cloud-native architectures. Capable of breaking down large-scale projects into manageable components, ensuring scalability, security, and alignment with strategic objectives. Key Skills: Advanced Software Development Practices: Demonstrates mastery of best practices in software development, including knowledge of object-oriented programming, functional programming, and design patterns. Skilled at implementing complex coding structures and promoting efficient, maintainable code across projects. Advanced expertise in Python and Spark: Specialized expertise in Python and Spark, with a deep focus on ETL data processing and data engineering practices. Ability to provide technical direction, set high standards for code quality and optimize performance in data-intensive environments. Knowledge of additional programming languages and development tools provides flexibility and adaptability across varied data engineering projects. Automation and CI/CD Pipelines: Skilled in leveraging automation tools and Continuous Integration/Continuous Deployment (CI/CD) pipelines to streamline development, testing, and deployment. Experienced in setting up and optimising CI/CD processes to ensure rapid, high-quality releases and minimise manual intervention. Cross-Functional Collaboration and Communication: Exceptional communicator who can translate complex technical concepts for diverse stakeholders, including engineers, product managers, and senior executives. Skilled in building alignment and driving consensus, ensuring that technical decisions support broader business goals. Technical Leadership and Mentorship: Provides thought leadership within the engineering team, setting high standards for quality, efficiency, and collaboration. Experienced in mentoring engineers, guiding them in advanced coding practices, architecture, and strategic problem-solving to enhance team capabilities. Domain Expertise in AWS Cloud Services: Solid understanding of AWS services and cloud solutions, particularly as they pertain to data engineering practices. Familiar with AWS solutions including IAM, Step Functions, Glue, Lambda, RDS (e.g., DynamoDB, Aurora Postgres), SQS, API Gateway, Athena. Quality Assurance and Continuous Improvement: Proficient in quality assurance practices, including code reviews, automated testing, and best practices for data validation. Committed to continuous improvement, implementing methods that enhance data reliability, performance, and user satisfaction. Bonus Skills: Financial Services expertise preferred, working with Equity and Fixed Income asset classes and a working knowledge of Indices. Experienced in implementing and optimizing CI/CD pipelines. Skilled at setting up processes that enable rapid, reliable releases, minimizing manual effort and supporting agile development cycles. Other: LSEG champions a culture committed to the growth of individuals through continuous learning, mentoring and career growth opportunities. LSEG champions a culture of inclusion for all employees that respects their individual strengths, views, and experiences. 
We believe that our differences enable us to be a better team, one that makes better decisions, drives innovation, and delivers better business results. Diversity is a core value at LSEG. We are passionate about building and sustaining an inclusive and equitable working and learning environment for all. We believe every member on our team enriches our diversity by exposing us to a broad range of ways to understand and engage with the world, identify challenges, and to discover, craft and deliver solutions.

LSEG is a leading global financial markets infrastructure and data provider. Our purpose is driving financial stability, empowering economies and enabling customers to create sustainable growth. Our purpose is the foundation on which our culture is built. Our values of Integrity, Partnership, Excellence and Change underpin our purpose and set the standard for everything we do, every day. They go to the heart of who we are and guide our decision making and everyday actions. Working with us means that you will be part of a dynamic organisation of 25,000 people across 65 countries. We will value your individuality and enable you to bring your true self to work so you can help enrich our diverse workforce. You will be part of a collaborative and creative culture where we encourage new ideas and are committed to sustainability across our global business. You will experience the critical role we have in helping to re-engineer the financial ecosystem to support and drive sustainable economic growth. Together, we are aiming to achieve this growth by accelerating the just transition to net zero, enabling growth of the green economy and creating inclusive economic opportunity.

LSEG offers a range of tailored benefits and support, including healthcare, retirement planning, paid volunteering days and wellbeing initiatives. We are proud to be an equal opportunities employer. This means that we do not discriminate on the basis of anyone's race, religion, colour, national origin, gender, sexual orientation, gender identity, gender expression, age, marital status, veteran status, pregnancy or disability, or any other basis protected under applicable law. Conforming with applicable law, we can reasonably accommodate applicants' and employees' religious practices and beliefs, as well as mental health or physical disability needs.

Please take a moment to read this privacy notice carefully, as it describes what personal information London Stock Exchange Group (LSEG) ("we") may hold about you, what it's used for and how it's obtained, your rights, and how to contact us as a data subject. If you are submitting as a Recruitment Agency Partner, it is essential and your responsibility to ensure that candidates applying to LSEG are aware of this privacy notice.
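As flagged in the job summary above, here is a hedged sketch of the kind of migration step the accountabilities describe: registering an Iceberg catalog backed by the AWS Glue Data Catalog and rewriting an existing Parquet dataset as an Iceberg table. The catalog, database, table names, and paths are invented, and the session assumes the Iceberg Spark runtime and AWS bundle jars are available.

```python
from pyspark.sql import SparkSession

# Assumes the Iceberg Spark runtime and iceberg-aws bundle jars are on the classpath,
# and that a Glue database named "indices" already exists.
spark = (
    SparkSession.builder
    .appName("lakehouse-migration-sketch")
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.catalog-impl", "org.apache.iceberg.aws.glue.GlueCatalog")
    .config("spark.sql.catalog.lake.warehouse", "s3://example-warehouse/iceberg/")
    .config("spark.sql.catalog.lake.io-impl", "org.apache.iceberg.aws.s3.S3FileIO")
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .getOrCreate()
)

# Rewrite a legacy Parquet dataset as an Iceberg table registered in the Glue catalog.
prices = spark.read.parquet("s3://example-bucket/legacy/index_prices/")
prices.writeTo("lake.indices.index_prices").using("iceberg").createOrReplace()

# Downstream jobs can then rely on Iceberg snapshots, schema evolution, and
# hidden partitioning instead of path conventions.
spark.sql("SELECT COUNT(*) FROM lake.indices.index_prices").show()
```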

Posted 2 weeks ago

Apply

8.0 - 10.0 years

0 Lacs

bengaluru, karnataka, india

Remote

As a global leader in cybersecurity, CrowdStrike protects the people, processes and technologies that drive modern organizations. Since 2011, our mission hasn't changed - we're here to stop breaches, and we've redefined modern security with the world's most advanced AI-native platform. We work on large-scale distributed systems, processing almost 3 trillion events per day, and this traffic is growing daily. Our customers span all industries, and they count on CrowdStrike to keep their businesses running, their communities safe and their lives moving forward. We're also a mission-driven company. We cultivate a culture that gives every CrowdStriker both the flexibility and autonomy to own their careers. We're always looking to add talented CrowdStrikers to the team who have limitless passion, a relentless focus on innovation and a fanatical commitment to our customers, our community and each other. Ready to join a mission that matters? The future of cybersecurity starts with you.

About the Role: The charter of the ML Platform team is to harness all the data that is ingested and cataloged within the Data LakeHouse for exploration, insights, model development, ML Engineering and Insights Activation. This team is situated within the larger Data Platform group, which serves as one of the core pillars of our company. We process data at a truly immense scale. Our processing is composed of various facets including threat events collected via telemetry data, associated metadata, along with IT asset information, contextual information about threat exposure based on additional processing, etc. These facets comprise the overall data platform, which is currently over 200 PB and maintained in a hyperscale Data Lakehouse, built and owned by the Data Platform team. The ingestion mechanisms include both batch and near real-time streams that form the core Threat Analytics Platform used for insights, threat hunting, incident investigations and more. As an engineer in this team, you will play an integral role as we build out our ML Experimentation Platform from the ground up. You will collaborate closely with Data Platform Software Engineers, Data Scientists & Threat Analysts to design, implement, and maintain scalable ML pipelines that will be used for Data Preparation, Cataloging, Feature Engineering, Model Training, and Model Serving that influence critical business decisions. You'll be a key contributor in a production-focused culture that bridges the gap between model development and operational success. Future plans include generative AI investments for use cases such as modeling attack paths for IT assets. Candidates must be comfortable visiting the office once a week.

What You'll Do: Help design, build, and facilitate adoption of a modern ML platform. Modularize complex ML code into standardized and repeatable components. Establish and facilitate adoption of repeatable patterns for model development, deployment, and monitoring. Build a platform that scales to thousands of users and offers self-service capability to build ML experimentation pipelines. Leverage workflow orchestration tools to deploy efficient and scalable execution of complex data and ML pipelines. Review code changes from data scientists and champion software development best practices. Leverage cloud services like Kubernetes, blob storage, and queues in our cloud-first environment.

What You'll Need: B.S. in Computer Science, Data Science, Statistics, Applied Mathematics, or a related field and 10+ years of related experience; or an M.S.
with 8+ years of experience, or a Ph.D. with 6+ years of experience. 3+ years of experience developing and deploying machine learning solutions to production. Familiarity with typical machine learning algorithms from an engineering perspective (how they are built and used, not necessarily the theory); familiarity with supervised/unsupervised approaches: how, why, and when labeled data is created and used. 3+ years of experience with ML platform tools like Jupyter Notebooks, NVIDIA Workbench, MLflow, Ray, Vertex AI, etc. Experience building data platform product(s) or features with (one of) Apache Spark, Flink, or comparable tools in GCP. Experience with Iceberg is highly desirable. Proficiency in distributed computing and orchestration technologies (Kubernetes, Airflow, etc.). Production experience with infrastructure-as-code tools such as Terraform and FluxCD. Expert-level experience with Python; Java/Scala exposure is recommended. Ability to write Python interfaces to provide standardized and simplified interfaces for data scientists to utilize internal CrowdStrike tools. Expert-level experience with CI/CD frameworks such as GitHub Actions. Expert-level experience with containerization frameworks. Strong analytical and problem-solving skills, capable of working in a dynamic environment. Exceptional interpersonal and communication skills; work with stakeholders across multiple teams and synthesize their needs into software interfaces and processes.

Critical Skills Needed for Role: distributed systems knowledge, data platform experience, and machine learning concepts. Experience with the following is desirable: Go, Iceberg, Pinot or another time-series/OLAP-style database, Jenkins, Parquet, and Protocol Buffers/gRPC. (An illustrative MLflow sketch of the repeatable training component idea described above follows this listing.)

Benefits of Working at CrowdStrike: remote-friendly and flexible work culture; market leader in compensation and equity awards; comprehensive physical and mental wellness programs; competitive vacation and holidays for recharge; paid parental and adoption leaves; professional development opportunities for all employees regardless of level or role; Employee Networks, geographic neighborhood groups, and volunteer opportunities to build connections; vibrant office culture with world-class amenities; Great Place to Work Certified across the globe.

CrowdStrike is proud to be an equal opportunity employer. We are committed to fostering a culture of belonging where everyone is valued for who they are and empowered to succeed. We support veterans and individuals with disabilities through our affirmative action program. CrowdStrike is committed to providing equal employment opportunity for all employees and applicants for employment. The Company does not discriminate in employment opportunities or practices on the basis of race, color, creed, ethnicity, religion, sex (including pregnancy or pregnancy-related medical conditions), sexual orientation, gender identity, marital or family status, veteran status, age, national origin, ancestry, physical disability (including HIV and AIDS), mental disability, medical condition, genetic information, membership or activity in a local human rights commission, status with regard to public assistance, or any other characteristic protected by law. We base all employment decisions--including recruitment, selection, training, compensation, benefits, discipline, promotions, transfers, lay-offs, return from lay-off, terminations and social/recreational programs--on valid job requirements.
If you need assistance accessing or reviewing the information on this website or need help submitting an application for employment or requesting an accommodation, please contact us at for further assistance.
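As flagged above, here is a minimal, hedged sketch of the "standardized, repeatable component" idea this listing describes: wrapping model training so every experiment logs parameters, metrics, and the model artifact through MLflow in the same way. The dataset, model choice, and hyperparameters are placeholders rather than anything specific to the role.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split


def train_and_log(n_estimators: int, max_depth: int) -> str:
    """Train one candidate model and record the run in MLflow; returns the run id."""
    # Synthetic stand-in for a real feature table.
    X, y = make_classification(n_samples=5_000, n_features=20, random_state=7)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=7)

    with mlflow.start_run() as run:
        mlflow.log_params({"n_estimators": n_estimators, "max_depth": max_depth})

        model = RandomForestClassifier(
            n_estimators=n_estimators, max_depth=max_depth, random_state=7
        ).fit(X_train, y_train)

        mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))
        mlflow.sklearn.log_model(model, artifact_path="model")
        return run.info.run_id


if __name__ == "__main__":
    print(train_and_log(n_estimators=200, max_depth=8))
```

Because every run records its inputs and outputs the same way, comparing experiments or promoting a model to serving becomes a query against the tracking store rather than a manual bookkeeping exercise.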

Posted 2 weeks ago

Apply

5.0 - 7.0 years

0 Lacs

pune, maharashtra, india

On-site

Druva is the leading provider of data security solutions, empowering customers to secure and recover their data from all threats. The Druva Data Security Cloud is a fully managed SaaS solution offering air-gapped and immutable data protection across cloud, on-premises, and edge environments. By centralizing data protection, Druva enhances traditional security measures and enables faster incident response, effective cyber remediation, and robust data governance. Trusted by nearly 7,500 customers, including 75 of the Fortune 500, Druva safeguards business data in an increasingly interconnected world.

As a Senior Staff Software Engineer, you will provide technical leadership to create high-quality software by owning the low-level design and implementation of services within a product. This role will require excellent communication skills, as you will collaborate with Product Management to refine requirements, product architects to propose design changes, and other product owners to drive features to completion with good quality.

Key Skills: An AI-first mindset to software development, with experience using genAI during various phases of the software development lifecycle, from design to code to test, using tools like Cursor. 5-7 years of experience, preferably in a product company, building global-scale distributed SaaS applications that handle petabytes of data. Hands-on experience in the design and development of complex products. Extensive hands-on experience in Go/Python/C/C++/Java on Unix/Linux platforms. A strong understanding of complex concepts related to computer architecture, data structures, algorithms, design concepts, and programming practices. Data modelling for OLAP workloads, scalability design, and query optimisations. Understanding of data consistency at cloud scale and eventual consistency models. Hands-on experience with big data tools and frameworks (Datalake/Lakehouse, ETL), preferably in public cloud ecosystems like AWS, with modules like Apache Spark, AWS Glue, and Iceberg.

Desirable Skills: Excellent written and verbal communication skills. Working knowledge of Docker and Kubernetes will be an advantage.

Role and Responsibilities: The Senior Staff Software Engineer's role is to be the technical leader in building enterprise-grade, scalable, performant systems which deliver the required functionality to customers and delight them. Should be able to design and implement sufficiently large and complex features and/or architectural improvements to the product. Suggest and propose solutions to complex design problems. Identify areas of engineering improvement in the product and work with product architects and the team to address them. Should be able to technically guide junior engineers on feature design and implementation. Review design and implementation done by junior engineers. Should be able to independently handle complex escalations and guide others as required. Be able to write technical blogs and make technical presentations in internal and external forums.

Qualification: B.E./B.Tech. or M.E./M.Tech. (Computer Science) or equivalent
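For illustration only (not part of the posting): a hedged sketch of OLAP-style modelling on a lakehouse table, pre-aggregating a raw fact table into a daily rollup that dashboards can query cheaply. Table and column names are invented, and the source table is assumed to already exist in the catalog.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("olap-rollup-sketch").getOrCreate()

# Assumed raw fact table of backup events already registered in the catalog.
events = spark.table("lakehouse.backup_events")

# Roll raw events up to one row per tenant, day, and status so OLAP-style
# dashboards scan far less data than the raw fact table.
daily_rollup = (
    events
    .groupBy("tenant_id", F.to_date("event_ts").alias("event_date"), "status")
    .agg(
        F.count("*").alias("event_count"),
        F.sum("bytes_protected").alias("bytes_protected"),
    )
)

# Full overwrite for simplicity; on Iceberg this could be an incremental MERGE instead.
daily_rollup.write.mode("overwrite").saveAsTable("lakehouse.backup_events_daily")
```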

Posted 2 weeks ago

Apply

8.0 - 12.0 years

45 - 50 Lacs

bengaluru

Work from Office

Role Overview: We are seeking an experienced and visionary AI & Distributed Systems Architect to lead the design and implementation of large-scale, AI-driven distributed systems. The successful candidate will pioneer transformative AI solutions, including leveraging Generative AI, Machine Learning (ML), sophisticated agentic workflows, and context-aware data platforms. You will play a crucial role in defining architecture strategies, driving innovation, and leading technical teams to deliver cutting-edge capabilities at scale.

Key Responsibilities: Lead the architecture design, development, and deployment of AI-powered distributed systems. Develop scalable, reliable, and high-performance solutions using Generative AI, ML models, and advanced data platforms. Implement agentic workflows incorporating AI agents and context-memory solutions, utilizing technologies such as Typesense and vector search platforms. Collaborate closely with product, engineering, and data science teams to integrate AI/ML functionalities seamlessly across systems. Drive technical excellence, mentoring teams on best practices and innovative architectural approaches. Ensure robust security, compliance, and ethical standards are embedded within AI architectures. Continuously evaluate emerging technologies and methodologies to improve system performance and scalability.

Qualifications: Bachelor's/Master's degree in Computer Science, AI, Data Science, or a related field. 8+ years of experience in architecting and implementing large-scale distributed systems. Proven expertise in Generative AI, LLMs (Large Language Models), AI/ML workflows, and real-time data platforms. Hands-on experience with agentic workflows, context-memory implementations, and technologies like Typesense, Elasticsearch, or equivalent vector search platforms. Strong proficiency in cloud-native technologies, Kubernetes, streaming platforms (Kafka/Pulsar), and data storage solutions (Iceberg, Delta Lake). Excellent leadership skills, with a demonstrated ability to mentor and guide technical teams. Exceptional communication skills, capable of articulating complex technical concepts clearly and effectively.

Preferred Skills: Experience working in AI-first product environments. Familiarity with multi-tenant SaaS architectures. Prior experience disrupting traditional ITSM/ITAM workflows using AI. Contributions to open-source AI or data community projects.

Join us to build a next-generation AI platform poised to revolutionize industry workflows through cutting-edge technology and innovation.
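For illustration only (not part of the posting): a deliberately tool-agnostic sketch of the context-memory retrieval step that agentic workflows rely on, embedding past interactions and pulling back the most similar ones for the current prompt. A production system would use a vector store such as Typesense or Elasticsearch; the random vectors here stand in for real embeddings.

```python
import numpy as np


def cosine_top_k(query_vec: np.ndarray, memory: np.ndarray, k: int = 3) -> np.ndarray:
    """Return indices of the k memory vectors most similar to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    m = memory / np.linalg.norm(memory, axis=1, keepdims=True)
    scores = m @ q                      # cosine similarity against every memory row
    return np.argsort(scores)[::-1][:k]


# Toy "memory" of five past interactions embedded into 4-dimensional vectors.
memory_vectors = np.random.default_rng(0).normal(size=(5, 4))
query = np.random.default_rng(1).normal(size=4)

print(cosine_top_k(query, memory_vectors, k=2))
```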

Posted 2 weeks ago

Apply

6.0 - 10.0 years

30 - 35 Lacs

bengaluru

Work from Office

We are seeking an experienced PySpark Developer / Data Engineer to design, develop, and optimize big data processing pipelines using Apache Spark and Python (PySpark). The ideal candidate should have expertise in distributed computing, ETL workflows, data lake architectures, and cloud-based big data solutions. Key Responsibilities: Develop and optimize ETL/ELT data pipelines using PySpark on distributed computing platforms (Hadoop, Databricks, EMR, HDInsight). Work with structured and unstructured data to perform data transformation, cleansing, and aggregation. Implement data lake and data warehouse solutions on AWS (S3, Glue, Redshift), Azure (ADLS, Synapse), or GCP (BigQuery, Dataflow). Optimize PySpark jobs for performance tuning, partitioning, and caching strategies. Design and implement real-time and batch data processing solutions. Integrate data pipelines with Kafka, Delta Lake, Iceberg, or Hudi for streaming and incremental updates. Ensure data security, governance, and compliance with industry best practices. Work with data scientists and analysts to prepare and process large-scale datasets for machine learning models. Collaborate with DevOps teams to deploy, monitor, and scale PySpark jobs using CI/CD pipelines, Kubernetes, and containerization. Perform unit testing and validation to ensure data integrity and reliability. Required Skills & Qualifications: 6+ years of experience in big data processing, ETL, and data engineering. Strong hands-on experience with PySpark (Apache Spark with Python). Expertise in SQL, DataFrame API, and RDD transformations. Experience with big data platforms (Hadoop, Hive, HDFS, Spark SQL). Knowledge of cloud data processing services (AWS Glue, EMR, Databricks, Azure Synapse, GCP Dataflow). Proficiency in writing optimized queries, partitioning, and indexing for performance tuning. Experience with workflow orchestration tools like Airflow, Oozie, or Prefect. Familiarity with containerization and deployment using Docker, Kubernetes, and CI/CD pipelines. Strong understanding of data governance, security, and compliance (GDPR, HIPAA, CCPA, etc.). Excellent problem-solving, debugging, and performance optimization skills.
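For illustration only (not part of the posting): a small, hedged example of the partitioning and caching decisions this role calls for, caching a reused dimension, broadcasting it into a join so the large fact table is not shuffled, and repartitioning by the write key before producing partitioned output. Paths and column names are invented.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("pyspark-tuning-sketch").getOrCreate()

facts = spark.read.parquet("s3://example-bucket/facts/transactions/")
dims = spark.read.parquet("s3://example-bucket/dims/customers/").cache()  # reused twice below

# Broadcasting the small dimension avoids shuffling the large fact table for the join.
enriched = facts.join(F.broadcast(dims), on="customer_id", how="left")

# Repartition by the write key so output files line up with the table's partitions.
(
    enriched
    .repartition("txn_date")
    .write
    .mode("append")
    .partitionBy("txn_date")
    .parquet("s3://example-bucket/curated/transactions_enriched/")
)

# The cached dimension is reused for a second output without re-reading from S3.
dims.groupBy("segment").count().write.mode("overwrite").parquet(
    "s3://example-bucket/curated/customer_segments/"
)
```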

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

chennai, tamil nadu

On-site

Join us as a Solution Architect at Barclays, where you will play a crucial role in supporting the successful delivery of location strategy projects. Your responsibilities will include ensuring projects are completed within planned budgets, meeting quality standards, and adhering to governance procedures. You will lead the evolution of our digital landscape, driving innovation and excellence to enhance our digital offerings and provide exceptional customer experiences. To excel in this role, you should possess expertise in the following areas: - Advanced Ab Initio and ETL Experience - AWS Architecture, Glue, S3, Iceberg - ETL and Warehouse Architecture - SQL and RDBMS Knowledge - Unix, Linux, Python wrapper - Experience in Oracle and Teradata Additionally, highly valued skills and key accountabilities include: - DBT - Snowflake, Databricks - Understanding of Agile and Jira Concepts Your performance may be evaluated based on critical skills essential for success in this role, such as risk management, change implementation, business acumen, strategic thinking, and digital technology proficiency. The position is based in Chennai.

Purpose of the Role: Your primary objective will be to design, develop, and implement solutions for complex business problems. By collaborating with stakeholders to understand their needs, you will create solutions that align with modern software engineering practices, balancing technology risks with business requirements and driving consistency.

Accountabilities: - Design and develop solutions that can evolve as products, meeting business needs and utilizing modern software engineering practices. - Implement technologies and platforms, ensuring appropriate workload placement strategies and maximizing cloud capabilities. - Create designs that prioritize security principles and meet the bank's resiliency expectations. - Balance risks and controls to deliver business and technology value. - Support operational teams in fault finding and addressing performance issues. - Evaluate solution design impact in terms of risk, capacity, and cost. - Develop architecture inputs to comply with the bank's governance processes.

Assistant Vice President Expectations: Your role will involve advising on decision-making, contributing to policy development, and ensuring operational effectiveness. If leading a team, you will set objectives, coach employees, and drive performance outcomes. Leadership behaviours include Listening, Energizing, Aligning, and Developing others. As an individual contributor, you will guide team members, identify new directions for assignments, and consult on complex issues. All colleagues are expected to embody the Barclays Values of Respect, Integrity, Service, Excellence, and Stewardship, as well as the Barclays Mindset of Empower, Challenge, and Drive in their everyday interactions and work.,
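For illustration only (not part of the posting): a hedged sketch of the kind of lightweight Python wrapper the skills list implies, triggering an AWS Glue ETL job from a Unix host via boto3 and polling until it reaches a terminal state. The job name, region, and arguments are invented; the Glue job itself is assumed to be defined separately.

```python
import time

import boto3

glue = boto3.client("glue", region_name="eu-west-2")


def run_glue_job(job_name: str, arguments: dict) -> str:
    """Start a Glue job run and block until it reaches a terminal state."""
    run_id = glue.start_job_run(JobName=job_name, Arguments=arguments)["JobRunId"]
    while True:
        state = glue.get_job_run(JobName=job_name, RunId=run_id)["JobRun"]["JobRunState"]
        if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
            return state
        time.sleep(30)


if __name__ == "__main__":
    # Hypothetical job name and argument; real job definitions live in Glue itself.
    status = run_glue_job(
        "curate-accounts-to-iceberg",
        {"--source_path": "s3://example-bucket/raw/accounts/"},
    )
    print(f"Glue job finished with state: {status}")
```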

Posted 3 weeks ago

Apply