
48 Trino Jobs

JobPe aggregates listings for easy access, but you apply directly on each employer's own job portal.

2.0 - 5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About Lowe's
Lowe's Companies, Inc. (NYSE: LOW) is a FORTUNE 50 home improvement company serving approximately 16 million customer transactions a week in the United States. With total fiscal year 2023 sales of more than $86 billion, Lowe's operates over 1,700 home improvement stores and employs approximately 300,000 associates. Based in Bengaluru, Lowe's India develops innovative technology products and solutions and delivers business capabilities to provide the best omnichannel experience for Lowe's customers. Lowe's India employs over 4,200 associates across technology, analytics, merchandising, supply chain, marketing, finance and accounting, product management and shared services. Lowe's India actively supports the communities it serves through programs focused on skill-building, sustainability and safe homes. For more information, visit www.lowes.co.in.

About The Team
Lowe's forecasting platform team is responsible for predicting future trends, outcomes, or events based on current and historical data. The primary goal is to generate AI/ML forecasts that help the business plan for future demand, optimize resources, reduce risk, and make data-driven decisions.

Job Summary
The primary purpose of this role is to develop an artificial intelligence (AI) platform that supports a wide array of machine learning (ML) models, including sophisticated deep learning frameworks and large language models (LLMs). The role works on scaling model performance, building essential tools and frameworks, and managing compute and storage resources. It involves close collaboration with cross-functional teams to identify new opportunities for leveraging AI platform capabilities across different domains to accelerate AI-infused product development.

Roles & Responsibilities
- Scales the platform for high performance and integrates new AI capabilities as APIs to ensure the platform remains adaptable and efficient in hosting a variety of ML models (a minimal illustrative sketch follows this listing).
- Designs, develops, and implements tools and frameworks that support ML experimentation and deployment.
- Manages GPU and CPU resources to optimize the execution of AI models, balancing performance with cost-effectiveness.
- Works closely with data scientists to integrate AI models smoothly into the platform.
- Creates and manages efficient data movement and pipelines so the AI platform operates smoothly; optimizes data flows to support the demands of high-velocity AI model training and inference.
- Analyzes platform performance metrics and user feedback to drive continuous improvement initiatives, using these insights to guide platform enhancements and keep the platform at the forefront of technological advancement and user satisfaction.
- Collaborates effectively with diverse teams, integrating technical expertise with business insights and user needs.
- Implements security protocols and governance measures for the AI platform, ensuring data integrity and compliance with industry standards and best practices.

Years of Experience
2-5 years of overall work experience in AI Engineering.

Education Qualification & Certifications
Bachelor's degree (Science, Technology, Engineering, Math or related field).

Skill Set Required
- Experience in AI/ML platform engineering and in data and ML operations tools and frameworks.
- Experience working with GPU and CPU infrastructure, optimizing ML models for performance.
- Programming experience in Python or equivalent.
- Experience working with Continuous Integration/Continuous Deployment tools.
- Experience in defining technical requirements and performing high-level design for complex solutions.
- Experience in SQL and NoSQL databases, the Hadoop ecosystem, Druid, Trino, BigQuery, and Google Vertex AI.

Lowe's is an equal opportunity employer and administers all personnel practices without regard to race, color, religious creed, sex, gender, age, ancestry, national origin, mental or physical disability or medical condition, sexual orientation, gender identity or expression, marital status, military or veteran status, genetic information, or any other category protected under federal, state, or local law. Starting rate of pay may vary based on factors including, but not limited to, position offered, location, education, training, and/or experience. For information regarding our benefit programs and eligibility, please visit https://talent.lowes.com/us/en/benefits.
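The Lowe's listing centers on hosting ML models behind APIs. Purely as a hedged sketch (the listing names no web framework; FastAPI, the endpoint path, and the request fields are all assumptions), a minimal model-serving endpoint might look like this:

```python
# Hypothetical sketch of "integrates new AI capabilities as APIs".
# Framework, route, and fields are illustrative assumptions.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ForecastRequest(BaseModel):
    store_id: int
    sku: str
    horizon_days: int = 7

@app.post("/v1/forecast")
def forecast(req: ForecastRequest) -> dict:
    # A real platform would dispatch to a registered model (GPU- or
    # CPU-backed); a stub keeps the sketch self-contained.
    return {"store_id": req.store_id, "sku": req.sku,
            "forecast": [0.0] * req.horizon_days}
```

Served with, for example, `uvicorn module:app`; the platform work described above would sit around this (model registry, resource scheduling, telemetry).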

Posted 3 days ago

Apply

8.0 - 12.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Distributed Systems Software Engineer at Salesforce, you will lead Big Data infrastructure projects to develop reliable and efficient data infrastructure. Your role is crucial in supporting scalable data processing and analytics for both internal and external customers.

Key Responsibilities:
- Build Data Processing and Analytics Services: Develop scalable services using Spark, Trino, Airflow, and Kafka to support real-time and batch data workflows (a hedged orchestration sketch follows this listing).
- Architect Distributed Systems: Design resilient systems across multiple data centers for scalability and high availability.
- Troubleshoot and Innovate: Resolve technical challenges and drive innovations for system resilience, availability, and performance.
- Service Ownership and Live-Site Management: Manage the services lifecycle with a focus on reliability, feature development, and performance.
- On-Call Support: Participate in the on-call rotation to address real-time issues and maintain critical services availability.
- Mentor and Guide Team Members: Provide mentorship and technical guidance to foster growth and collaboration within the team.

Qualifications Required:
- Education and Experience: Bachelor's or Master's in Computer Science, Engineering, or a related field with 8+ years of experience in distributed systems or big data roles.
- Cloud Environments Experience: Proficiency in AWS, GCP, Azure, Docker, Kubernetes, and Terraform.
- Expertise in Big Data Technologies: Hands-on experience with Hadoop, Spark, Trino, Airflow, Kafka, and related ecosystems.
- Programming Proficiency: Strong skills in Python, Java, Scala, or other relevant languages.
- Distributed Systems Knowledge: Understanding of distributed computing principles, fault tolerance, and performance tuning.
- Analytical and Problem-Solving Skills: Ability to troubleshoot complex system issues and optimize for efficiency and scale.

This role at Salesforce offers a comprehensive benefits package, including well-being reimbursement, generous parental leave, adoption assistance, fertility benefits, and more. Additionally, you will have access to world-class training, exposure to executive thought leaders, volunteer opportunities, and participation in the company's giving-back initiatives. Visit [Salesforce Benefits](https://www.salesforcebenefits.com/) for more details.
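The listing pairs Airflow with Spark and Kafka for batch and streaming workflows. As a hedged sketch of the orchestration half only (DAG id, schedule, and task bodies are illustrative assumptions, not Salesforce's actual pipelines), a minimal Airflow 2.x DAG might look like:

```python
# Illustrative Airflow DAG: two dependent daily tasks. Everything named
# here (dag_id, task logic) is an assumption for demonstration.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull yesterday's partition from the landing zone")

def transform():
    print("kick off a Spark job, e.g. via spark-submit")

with DAG(
    dag_id="daily_batch_example",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="extract", python_callable=extract) \
        >> PythonOperator(task_id="transform", python_callable=transform)
```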

Posted 4 days ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

As a candidate for the role in the Unified Intelligence Platform (UIP) team, you will be part of a mission to enable Salesforce teams to deeply understand and optimize their services and operations through data. UIP is a modern, cloud-based data platform built with cutting-edge technologies like Spark, Trino, Airflow, DBT, Jupyter Notebooks, and more. Here are the details of the job responsibilities and qualifications we are looking for:

**Role Overview:**
You will be responsible for leading the architecture, design, development, and support of mission-critical data and platform services. Your role will involve driving self-service data pipelines, collaborating with product management teams, and architecting robust data solutions that enhance ingestion, processing, and quality. Additionally, you will promote a service ownership model, develop data frameworks, implement data quality services, build Salesforce-integrated applications, establish CI/CD processes, and maintain key components of the UIP technology stack.

**Key Responsibilities:**
- Lead the architecture, design, development, and support of mission-critical data and platform services
- Drive self-service, metadata-driven data pipelines, services, and applications (a toy sketch of this pattern follows this listing)
- Collaborate with product management and client teams to deliver scalable solutions
- Architect robust data solutions with security and governance
- Promote a service ownership model with telemetry and control mechanisms
- Develop data frameworks and implement data quality services
- Build Salesforce-integrated applications for data lifecycle management
- Establish and refine CI/CD processes for seamless deployment
- Oversee and maintain components of the UIP technology stack
- Collaborate with third-party vendors for issue resolution
- Architect data pipelines optimized for multi-cloud environments

**Qualifications Required:**
- Passionate about tackling big data challenges in distributed systems
- Highly collaborative and adaptable, with a strong foundation in software engineering
- Committed to engineering excellence and fostering transparency
- Embraces a growth mindset and actively engages in support channels
- Champions a Service Ownership model and minimizes operational overhead through automation
- Experience with advanced data lake engines like Spark and Trino is a plus

This is an opportunity to be part of a fast-paced, agile, and highly collaborative team that is defining the next generation of trusted enterprise computing. If you are passionate about working with cutting-edge technologies and solving complex data challenges, this role might be the perfect fit for you.
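"Metadata-driven" pipelines, as the listing puts it, declare each source as data rather than coding one pipeline per source. A toy sketch of that pattern (the metadata schema and paths here are invented for illustration):

```python
# Toy sketch of metadata-driven ingestion: one generic runner, driven by
# declarative records. Field names and paths are assumptions.
SOURCES = [
    {"name": "orders",  "format": "json",    "path": "s3://bucket/orders/",  "mode": "append"},
    {"name": "refunds", "format": "parquet", "path": "s3://bucket/refunds/", "mode": "overwrite"},
]

def build_pipeline(meta: dict):
    """Turn one metadata record into a runnable ingestion step."""
    def run():
        # A real platform would hand this to Spark/Airflow; printing
        # keeps the sketch self-contained.
        print(f"ingest {meta['path']} as {meta['format']} "
              f"-> table {meta['name']} ({meta['mode']})")
    return run

for source in SOURCES:
    build_pipeline(source)()
```

Adding a new source then means adding one metadata record, not new code.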

Posted 4 days ago

Apply

0.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Category: Software Engineering

About Salesforce
Salesforce is the #1 AI CRM, where humans with agents drive customer success together. Here, ambition meets action. Tech meets trust. And innovation isn't a buzzword - it's a way of life. The world of work as we know it is changing and we're looking for Trailblazers who are passionate about bettering business and the world through AI, driving innovation, and keeping Salesforce's core values at the heart of it all. Ready to level up your career at the company leading workforce transformation in the agentic era? You're in the right place! Agentforce is the future of AI, and you are the future of Salesforce.

About the Role
The mission of the Unified Intelligence Platform (UIP) is to enable Salesforce teams to deeply understand and optimize their services and operations through data. UIP is a modern, trusted, turn-key data platform built with cutting-edge technologies and an exceptional user experience. Massive amounts of data are generated each day at Salesforce. It is critical to process and store large volumes of data efficiently and enable users to discover and analyze the data easily. UIP is a cloud-based data platform built on advanced data lake engines like Spark and Trino, incorporating a diverse suite of tools and technologies - including Airflow, DBT, Jupyter Notebooks, Sagemaker, Iceberg, and Open Metadata - for efficient data processing, storage, querying, and management. With curated datasets, we empower machine learning and AI use cases, enabling both model development and inference. Our team is fast-paced, agile, and highly collaborative, working across all areas of our tech stack to provide critical business services, support complex computing requirements, drive big data analytics, and pioneer cutting-edge engineering solutions in the cloud, defining the next generation of trusted enterprise computing.

Who are we looking for?
- Passionate about tackling big data challenges in distributed systems.
- Highly collaborative, working across teams to ensure customer success.
- Drives end-to-end projects that deliver high-performance, scalable, and maintainable solutions.
- Adaptable and versatile, taking on multiple roles as needed - Platform Engineer, Data Engineer, Backend Engineer, DevOps Engineer, or Support Engineer - for the platform and customer success.
- Strong foundation in software engineering, with the flexibility to work in any programming language.
- Committed to engineering excellence, consistently delivering high-quality products.
- Open and respectful communicator, fostering transparency and team alignment.
- Embraces a growth mindset, continuously learning and seeking self-improvement.
- Engages actively in support channels, providing insights and collaborating to support the community.
- Champions a Service Ownership model, minimizing operational overhead through automation, monitoring, and alerting best practices.

Job Responsibilities:
- Lead the architecture, design, development, and support of mission-critical data and platform services, ensuring full ownership and accountability.
- Drive multiple self-service, metadata-driven data pipelines, services, and applications to streamline ingestion from diverse data sources into a multi-cloud, petabyte-scale data platform.
- Collaborate closely with product management and client teams to capture requirements and deliver scalable, adaptable solutions that drive success.
- Architect robust data solutions that enhance ingestion, processing, quality, and discovery, embedding security and governance from the start.
- Promote a service ownership model, designing solutions with extensive telemetry and control mechanisms to streamline governance and operational management.
- Develop data frameworks to simplify recurring data tasks, ensure best practices, foster consistency, and facilitate tool migration.
- Implement advanced data quality services seamlessly within the platform, empowering data analysts, engineers, and stewards to continuously monitor and uphold data standards (a small data-quality sketch follows this listing).
- Build Salesforce-integrated applications to monitor and manage the full data lifecycle from a unified interface.
- Establish and refine CI/CD processes for seamless deployment of platform services across cloud environments.
- Oversee and maintain key components of the UIP technology stack, including Airflow, Spark, Trino, Iceberg, and Kubernetes.
- Collaborate with third-party vendors to troubleshoot and resolve platform-related software issues.
- Architect and orchestrate data pipelines and platform services optimized for multi-cloud environments (e.g., AWS, GCP).

Unleash Your Potential
When you join Salesforce, you'll be limitless in all areas of your life. Our benefits and resources support you to find balance, and our AI agents accelerate your impact. Together, we'll bring the power of Agentforce to organizations of all sizes and deliver amazing experiences that customers love. Apply today to not only shape the future - but to redefine what's possible - for yourself, for AI, and the world.

Accommodations
If you require assistance due to a disability applying for open positions, please submit a request.

Posting Statement
Salesforce is an equal opportunity employer and maintains a policy of non-discrimination with all employees and applicants for employment. What does that mean exactly? It means that at Salesforce, we believe in equality for all. And we believe we can lead the path to equality in part by creating a workplace that's inclusive, and free from discrimination. Any employee or potential employee will be assessed on the basis of merit, competence and qualifications - without regard to race, religion, color, national origin, sex, sexual orientation, gender expression or identity, transgender status, age, disability, veteran or marital status, political viewpoint, or other classifications protected by law. This policy applies to current and prospective employees, no matter where they are in their Salesforce employment journey. It also applies to recruiting, hiring, job assignment, compensation, promotion, benefits, training, assessment of job performance, discipline, termination, and everything in between. Recruiting, hiring, and promotion decisions at Salesforce are fair and based on merit. The same goes for compensation, benefits, promotions, transfers, reduction in workforce, recall, training, and education.
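The responsibilities above include embedding data quality services in the platform. As a minimal sketch, assuming a PySpark environment (column names and checks are invented for illustration), such a service might count nulls and duplicate keys and publish them as metrics:

```python
# Illustrative data-quality check in PySpark; columns and thresholds
# are assumptions, not UIP's actual rules.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-sketch").getOrCreate()
df = spark.createDataFrame(
    [(1, "a@x.com"), (2, None), (2, "b@x.com")], ["id", "email"]
)

null_emails = df.filter(F.col("email").isNull()).count()
dup_ids = df.groupBy("id").count().filter(F.col("count") > 1).count()

# A production service would emit these as metrics/alerts rather than print.
print({"null_emails": null_emails, "duplicate_ids": dup_ids})
```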

Posted 4 days ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

The opportunity at Abnormal AI is to revolutionize cybersecurity by utilizing AI-native technologies to combat modern cyber threats. As a Software Engineer II, you will be part of a team dedicated to leveraging Generative AI tools like Cursor, GitHub Copilot, and Claude to redefine software development processes, making them faster, smarter, and more efficient.

Your responsibilities will include leveraging AI-powered development tools to enhance productivity, optimize workflows, and automate tasks. You will also develop data-driven applications, design secure and scalable systems to combat cyber threats, collaborate with Fortune 500 enterprises, and build backend services and cloud architectures that support billions of security events globally.

We are looking for engineers with at least 3 years of experience working on data-intensive applications and distributed systems, backend development skills in Python or Go, depth in key areas of the data platform tech stack, familiarity with AI development tools, and experience building scalable applications. Knowledge of big data technologies, cloud platforms, containerization, computer science fundamentals, and performance optimization is also required.

Joining Abnormal AI means working on AI-native security solutions, accelerating development with AI assistance, tackling complex challenges at scale, collaborating with a high-caliber team, and impacting Fortune 500 enterprises by building solutions that protect them from cyber threats.

If you are ready to be a part of the AI transformation at Abnormal AI, apply now and participate in the AI-powered Development Challenge to showcase your skills with tools like Cursor and Copilot in building real-world application features. This challenge is a take-home assignment that requires 2-4 hours of work and is to be completed within one week.

Posted 1 week ago

Apply

10.0 - 14.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a software engineer at Salesforce, you will have the opportunity to work alongside a group of talented engineers to develop innovative features that will have a positive impact on users, the company's success, and the industry as a whole. Your role will involve various aspects of software development including architecture, design, implementation, and testing to ensure the delivery of high-quality products to our customers. You will be responsible for writing maintainable and high-quality code that enhances product stability and usability. In addition to coding, you will also be involved in code reviews, mentoring junior engineers, and providing technical guidance to the team, depending on your level of seniority. We believe in empowering autonomous teams to make decisions that benefit both the individuals and the overall success of the product and company.

As a Lead Engineer, your responsibilities will include building new components in a rapidly evolving market, developing production-ready code for millions of users, making design decisions based on performance and scalability, and contributing to all phases of the software development life cycle. You will work in a Hybrid Engineering model and be responsible for building efficient components and algorithms in a multi-tenant SaaS cloud environment.

To be successful in this role, you should have mastery of multiple programming languages and platforms, at least 10 years of software development experience, deep knowledge of object-oriented programming, strong SQL skills, and experience with both relational and non-relational databases. You should also have experience in developing SaaS applications on public cloud infrastructure such as AWS, Azure, or GCP, proficiency in event-driven architecture, and a solid understanding of software development best practices.

In addition to the exciting work you'll be doing, Salesforce offers a comprehensive benefits package including parental leave, adoption assistance, and on-demand training with Trailhead.com. You will also have the opportunity to engage with executive thought leaders, receive regular coaching from leadership, and participate in volunteer opportunities as part of our commitment to giving back to the community. For more information on the benefits and perks offered by Salesforce, visit https://www.salesforcebenefits.com/.

If you require any accommodations due to a disability when applying for open positions at Salesforce, please submit a request via the Accommodations Request Form. We look forward to welcoming you to our team and embarking on a journey of innovation and growth together.

Posted 1 week ago

Apply

10.0 - 14.0 years

0 Lacs

Haryana

On-site

As a Technical Lead at Affle, located in Gurugram, you will play a crucial role in leading the development and delivery of platforms. Your responsibilities will include overseeing the entire product lifecycle, building and leading a team, and collaborating closely with business and product teams. You will work collaboratively with architects, business teams, and developers to provide technical solutions, coordinate with design and analysis teams to enhance Product Requirement Documents, and mentor technical engineers in product development projects.

Proficiency in Test-Driven Development, Continuous Integration, Continuous Delivery, and infrastructure automation is essential. Experience with the JS stack (ReactJS, NodeJS), database engines (MySQL, PostgreSQL), key-value stores (Redis, MongoDB), and distributed data processing technologies is required. You will lead architecture and design reviews, develop reusable frameworks, and establish engineering processes and standards. Collaborating with cross-functional teams to resolve technical challenges, prioritizing the product backlog, conducting performance reviews, and staying updated with the latest technologies are key aspects of this role. Strong communication skills, Agile methodologies expertise, and the ability to adapt to a dynamic environment are essential.

About Affle:
Affle is a global technology company specializing in consumer engagement through mobile advertising. The company's platforms aim to enhance marketing returns and reduce ad fraud. Affle India completed its IPO in India in August 2019 and trades on the stock exchanges. For more information about Affle, visit www.affle.com.

About BU:
Ultra, a platform under Affle, focuses on deals, coupons, and user acquisition to optimize bottom-funnel strategies across various inventory sources. To learn more about Ultra, visit https://www.ultraplatform.io/

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

As a Data Platform Engineer (SSE), you will be responsible for designing and implementing foundational frameworks for ingestion, orchestration, schema validation, and metadata management. Your role will involve:

- Building robust, scalable pipelines for Change Data Capture (CDC) using Debezium integrated with Kafka and Spark (a hedged sketch of this CDC path follows this listing).
- Developing internal tooling to standardize and accelerate data ingestion, transformation, and publishing workflows.
- Optimizing data serving layers powered by Trino, including metadata syncing, security filtering, and performance tuning.
- Collaborating with SRE and Infra teams to build autoscaling, self-healing, and cost-optimized Spark jobs on AWS EMR.
- Implementing observability features such as logs, metrics, and alerts for critical platform services and data pipelines.
- Defining and enforcing standards for schema evolution, lineage tracking, and data governance.
- Automating platform operations using CI/CD pipelines, metadata-driven configurations, and infrastructure.

If you have 5-7 years of experience and possess key skills in Data Platform Engineering, Java, Spark, Debezium, and Trino, this opportunity in Bangalore awaits your expertise and contribution.
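For the CDC pipeline described above (Debezium publishing change events to Kafka, consumed by Spark), a hedged sketch might look like the following. It assumes the spark-sql-kafka package is available; the broker address, topic name, simplified Debezium envelope, and output paths are all illustrative assumptions:

```python
# Sketch: consume Debezium change events from Kafka with Spark
# Structured Streaming and land them as files. Names are assumptions.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, LongType

spark = SparkSession.builder.appName("cdc-sketch").getOrCreate()

# Debezium envelope, heavily simplified: real events carry before/after
# row images, source metadata, and an op code (c/u/d).
payload = StructType([
    StructField("op", StringType()),
    StructField("ts_ms", LongType()),
    StructField("after", StringType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # assumption
    .option("subscribe", "dbserver1.inventory.orders")  # Debezium-style topic
    .load()
    .select(F.from_json(F.col("value").cast("string"), payload).alias("e"))
    .select("e.op", "e.ts_ms", "e.after")
)

(events.writeStream
    .format("parquet")
    .option("path", "s3://lake/raw/orders/")            # assumption
    .option("checkpointLocation", "s3://lake/_chk/orders/")
    .start())
```

A production version would typically merge the change stream into a table format (Iceberg/Delta) that Trino can then serve.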

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Maharashtra

On-site

As a Lead Software Engineer III at JPMorgan Chase within the AI/ML & Data Platforms team, you play a vital role in an agile team dedicated to enhancing, building, and delivering cutting-edge technology products with trust and reliability. Your responsibilities encompass executing innovative software solutions, designing and developing high-quality code, and troubleshooting technical issues with a forward-thinking mindset. Your expertise contributes to improving operational stability and automating remediation processes, ensuring the scalability and security of software applications. You will lead evaluation sessions with external vendors and internal teams, fostering discussions on architectural designs and technological applicability. As a core member of the Software Engineering community, you will champion the adoption of new technologies and drive a culture of diversity, equity, inclusion, and respect.

**Job Responsibilities:**
- Execute creative software solutions, design, development, and technical troubleshooting with a focus on innovative approaches.
- Develop secure, high-quality production code, review and debug code, and identify opportunities for automation to enhance operational stability.
- Lead evaluation sessions with external vendors and internal teams to assess architectural designs and technical implementations.
- Drive awareness and adoption of new technologies within the Software Engineering community.
- Contribute to a culture of diversity, equity, inclusion, and respect.

**Required Qualifications, Capabilities, and Skills:**
- Formal training or certification in software engineering concepts with 3+ years of applied experience.
- Hands-on experience in system design, application development, testing, and operational stability.
- Proficiency in automated API and UI testing using technologies like RestAssured and Selenium (an illustrative Python equivalent follows this listing).
- Advanced knowledge of languages and frameworks such as Java, Selenium, and Cucumber.
- Experience with performance testing tools like JMeter and BlazeMeter.
- Proficiency in automation, continuous delivery, and the Software Development Life Cycle.
- Understanding of agile methodologies, CI/CD, application resiliency, security, and technical processes in cloud, AI, and machine learning disciplines.

**Preferred Qualifications, Capabilities, and Skills:**
- Ability to independently design, develop, test, and deliver test automation solutions supporting UAT testing using Java, Selenium, and Cucumber.
- Familiarity with REST and SOAP web services, API testing/automation.
- Knowledge of databases and query engines like Trino, Iceberg, Snowflake, and Postgres.
- Collaborative nature, ability to build strong relationships, strategize process improvements, and a results-oriented mindset.
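The listing's test-automation stack is Java with RestAssured and Cucumber; purely as an illustration of the same automated API-testing idea, here is a Python sketch using pytest and requests (the service URL, payload, and response shape are hypothetical):

```python
# Illustrative API test (pytest + requests). The listing's actual stack
# is Java/RestAssured; treat this only as a sketch of the pattern.
import requests

BASE_URL = "https://api.example.test"  # hypothetical service

def test_create_and_fetch_order():
    # Create a resource, then verify a round trip through the API.
    created = requests.post(
        f"{BASE_URL}/orders", json={"sku": "ABC", "qty": 2}, timeout=5
    )
    assert created.status_code == 201
    order_id = created.json()["id"]

    fetched = requests.get(f"{BASE_URL}/orders/{order_id}", timeout=5)
    assert fetched.status_code == 200
    assert fetched.json()["sku"] == "ABC"
```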

Posted 1 week ago

Apply

6.0 - 8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About the Role: Grade Level (for internal use): 10

The Team: Join the TeraHelix team within S&P Global's Enterprise Data Organization (EDO). We are a dynamic group of highly skilled engineers dedicated to building innovative data solutions that empower businesses. Our team works collaboratively on foundational data products, leveraging cutting-edge technologies to solve real-world client challenges.

The Impact: As part of the TeraHelix team, you will contribute to the development of our marquee AI-enabled data products, including TeraHelix's GearBox, ETL Mapper and Data Studio solutions. Your work will directly impact our clients by enhancing their data capabilities and driving significant business value.

What's in it for you:
- Opportunity to work on a distributed, cloud-native, fully Java tech stack (Java 21+) with UI components built in the Vaadin framework.
- Engage in skill-building and innovation opportunities in a supportive environment.
- Collaborate with a diverse group of professionals across data, product, and technology disciplines.
- Contribute to projects that have a tangible impact on the organization and the industry.

Key Responsibilities:
- Design, develop and maintain scalable and efficient data modelling components within a distributed data platform.
- Collaborate with cross-functional teams to gather requirements and translate them into technical specifications and solutions.
- Implement best practices in software development, including code reviews, unit testing and continuous integration / continuous deployment (CI/CD) processes.
- Troubleshoot and resolve software defects and performance issues in a timely manner.
- Participate in sprint planning, daily stand-ups, user demos and retrospectives to ensure alignment and progress within the team.
- Mentor junior developers and contribute to their professional growth through knowledge sharing and code reviews.
- Stay updated with emerging technologies and industry trends to continuously improve the quality and performance of our software solutions.
- Document technical designs, processes and workflows to facilitate knowledge transfer and maintain project transparency.
- Engage with stakeholders to communicate project status, challenges and solutions, ensuring alignment with business outcomes.
- Contribute to the overall architecture and design of the TeraHelix ecosystem, ensuring scalability, reliability and security.

What we're looking for:
- Bachelor's degree or higher in Computer Science or a related field.
- 6+ years of hands-on experience in software development, particularly with Java (21+ preferred) and associated toolchains.
- Proficiency in SQL (any variant) and big data technologies, with experience operating commonly used databases such as PostgreSQL, HBase, or Trino.
- Knowledge of gRPC (unary, response streaming, bi-directional streaming, REST mapping).
- Familiarity with Linux operating systems, including command-line tools and utilities.
- Experience with version control systems such as Git, GitHub, Bitbucket or Azure DevOps.
- Knowledge of Object-Oriented Programming (OOP) design patterns, Test-Driven Development (TDD) and enterprise system design principles.
- Strong problem-solving and debugging skills.
- Commitment to software craftsmanship and Agile principles.
- Effective communication skills for technical concepts.
- Adaptability and eagerness to learn new technologies.
- Interest in emerging tools and frameworks.

Nice to have:
- Experience with the Vaadin UI framework.
- Experience with big data processing engines, Avro and distributed streaming platforms.
- Familiarity with DevOps practices and automation tools.
- Knowledge of container orchestration systems.
- Cloud experience across AWS, Azure, GCP or Oracle Cloud.
- Experience with C# and .NET Core.
- Familiarity with Python, R, Ruby or JavaScript, especially on GraalVM.
- Interest in financial markets and business development.

What's In It For You

Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology - the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: We're more than 35,000 strong worldwide - so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you - and your career - need to thrive at S&P Global. Our benefits include:
- Health & Wellness: Health care coverage designed for the mind and body.
- Flexible Downtime: Generous time off helps keep you energized for your time on.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards - small perks can make a big difference.
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to [HIDDEN TEXT]. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, "pre-employment training" or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.

Equal Opportunity Employer
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: [HIDDEN TEXT] and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group), SWP Priority - Ratings - (Strategic Workforce Planning)

Job ID: 315678
Posted On: 2025-08-27
Location: Gurgaon, Haryana, India

Posted 2 weeks ago

Apply

4.0 - 9.0 years

10 - 20 Lacs

Chennai

Work from Office

JD:
• Good experience in Apache Iceberg, Apache Spark, Trino
• Proficiency in SQL and data modeling
• Experience with an open Data Lakehouse using Apache Iceberg
• Experience with Data Lakehouse architecture with Apache Iceberg and Trino (a query sketch follows below)
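As a sketch of the Iceberg-on-Trino lakehouse pattern these roles describe, using the open-source trino Python client (the host, catalog, schema, and table names are assumptions for illustration):

```python
# Hedged sketch: Iceberg DDL and a query issued through Trino from
# Python. Connection details and table names are assumptions.
import trino

conn = trino.dbapi.connect(
    host="trino.example.internal", port=8080,
    user="analyst", catalog="iceberg", schema="sales",
)
cur = conn.cursor()

# Iceberg tables are created through Trino like any other SQL table;
# partitioning here uses an Iceberg partition transform.
cur.execute("""
    CREATE TABLE IF NOT EXISTS orders (
        order_id BIGINT, ts TIMESTAMP(6), amount DOUBLE
    ) WITH (partitioning = ARRAY['day(ts)'])
""")
cur.execute("SELECT count(*) FROM orders")
print(cur.fetchall())
```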

Posted 2 weeks ago

Apply

4.0 - 9.0 years

10 - 20 Lacs

Bengaluru

Work from Office

JD:
• Good experience in Apache Iceberg, Apache Spark, Trino
• Proficiency in SQL and data modeling
• Experience with an open Data Lakehouse using Apache Iceberg
• Experience with Data Lakehouse architecture with Apache Iceberg and Trino

Posted 2 weeks ago

Apply

4.0 - 9.0 years

10 - 20 Lacs

Hyderabad

Work from Office

JD:
• Good experience in Apache Iceberg, Apache Spark, Trino
• Proficiency in SQL and data modeling
• Experience with an open Data Lakehouse using Apache Iceberg
• Experience with Data Lakehouse architecture with Apache Iceberg and Trino

Posted 2 weeks ago

Apply

12.0 - 16.0 years

0 Lacs

Karnataka

On-site

As an Advisory Consultant at Dell Technologies, you will play a crucial role in delivering consultative business and technical services for complex customer-facing consulting engagements related to digital transformation. Your responsibilities will involve collaborating with Global Pre-Sales, Account Management, and Solutioning Teams to deploy, administer, and configure digital transformation software stacks.

As one of the senior technical members in the Digital Transformation Practice, you will earn customer trust through competence, technical acumen, consulting expertise, and partnership. You will guide and oversee other team members, providing technical grooming activities for their skill development. The role requires expert customer-facing skills, leadership qualities, and the ability to communicate technical processes effectively.

Your key responsibilities will include exploring customers' Data and Analytics opportunities, driving digital transformation within customer organizations, architecting unified Data Management strategies, and implementing end-to-end data engineering pipelines. Additionally, you will collaborate with various stakeholders to support deal closures and contribute to the growth of the practice.

To excel in this role, you should have over 12 years of experience in the IT industry, preferably with a degree in computer science or engineering. You must possess a minimum of 5 years of hands-on experience with big data technologies like Hadoop and Spark, strong programming skills in languages such as Python, Java, or Scala, and proficiency in SQL and query optimization. Experience in developing cloud-based applications, working with different databases, and familiarity with message formats and distributed querying solutions will be essential for success.

Desirable qualifications include experience with containerization technologies like Docker and Kubernetes, as well as engaging with pre-sales and sales teams to create solutions for customers seeking digital transformation and AI/Edge solutions.

At Dell Technologies, we believe in the power of each team member to make a significant impact. If you are eager to grow your career with cutting-edge technology and join a diverse and innovative team, we invite you to be part of our journey to build a future that benefits everyone.

Application closing date: 1 May 2025

Posted 2 weeks ago

Apply

6.0 - 8.0 years

0 Lacs

India

On-site

About the Role: Grade Level (for internal use): 10

The Team: Join the TeraHelix team within S&P Global's Enterprise Data Organization (EDO). We are a dynamic group of highly skilled engineers dedicated to building innovative data solutions that empower businesses. Our team works collaboratively on foundational data products, leveraging cutting-edge technologies to solve real-world client challenges.

The Impact: As part of the TeraHelix team, you will contribute to the development of our marquee AI-enabled data products, including TeraHelix's GearBox, ETL Mapper and Data Studio solutions. Your work will directly impact our clients by enhancing their data capabilities and driving significant business value.

What's in it for you:
- Opportunity to work on a distributed, cloud-native, fully Java tech stack (Java 21+) with UI components built in the Vaadin framework.
- Engage in skill-building and innovation opportunities in a supportive environment.
- Collaborate with a diverse group of professionals across data, product, and technology disciplines.
- Contribute to projects that have a tangible impact on the organization and the industry.

Key Responsibilities:
- Design, develop and maintain scalable and efficient data modelling components within a distributed data platform.
- Collaborate with cross-functional teams to gather requirements and translate them into technical specifications and solutions.
- Implement best practices in software development, including code reviews, unit testing and continuous integration / continuous deployment (CI/CD) processes.
- Troubleshoot and resolve software defects and performance issues in a timely manner.
- Participate in sprint planning, daily stand-ups, user demos and retrospectives to ensure alignment and progress within the team.
- Mentor junior developers and contribute to their professional growth through knowledge sharing and code reviews.
- Stay updated with emerging technologies and industry trends to continuously improve the quality and performance of our software solutions.
- Document technical designs, processes and workflows to facilitate knowledge transfer and maintain project transparency.
- Engage with stakeholders to communicate project status, challenges and solutions, ensuring alignment with business outcomes.
- Contribute to the overall architecture and design of the TeraHelix ecosystem, ensuring scalability, reliability and security.

What we're looking for:
- Bachelor's degree or higher in Computer Science or a related field.
- 6+ years of hands-on experience in software development, particularly with Java (21+ preferred) and associated toolchains.
- Proficiency in SQL (any variant) and big data technologies, with experience operating commonly used databases such as PostgreSQL, HBase, or Trino.
- Knowledge of gRPC (unary, response streaming, bi-directional streaming, REST mapping).
- Familiarity with Linux operating systems, including command-line tools and utilities.
- Experience with version control systems such as Git, GitHub, Bitbucket or Azure DevOps.
- Knowledge of Object-Oriented Programming (OOP) design patterns, Test-Driven Development (TDD) and enterprise system design principles.
- Strong problem-solving and debugging skills.
- Commitment to software craftsmanship and Agile principles.
- Effective communication skills for technical concepts.
- Adaptability and eagerness to learn new technologies.
- Interest in emerging tools and frameworks.

Nice to have:
- Experience with the Vaadin UI framework.
- Experience with big data processing engines, Avro and distributed streaming platforms.
- Familiarity with DevOps practices and automation tools.
- Knowledge of container orchestration systems.
- Cloud experience across AWS, Azure, GCP or Oracle Cloud.
- Experience with C# and .NET Core.
- Familiarity with Python, R, Ruby or JavaScript, especially on GraalVM.
- Interest in financial markets and business development.

What's In It For You

Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology - the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: We're more than 35,000 strong worldwide - so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you - and your career - need to thrive at S&P Global. Our benefits include:
- Health & Wellness: Health care coverage designed for the mind and body.
- Flexible Downtime: Generous time off helps keep you energized for your time on.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards - small perks can make a big difference.
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, pre-employment training or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity.

Equal Opportunity Employer
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group), SWP Priority - Ratings - (Strategic Workforce Planning)

Posted 2 weeks ago

Apply

0.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job description
Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.

HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Sr. Associate Director, Software Engineering.

- Provide the technical expertise for the Risk Data Platform and the various software components that supplement it, for its transformation and uplift.
- Implement standards around development, DevSecOps, orchestration, segregation and containerization.
- Act as a technical expert on the design and implementation of the technology solutions to meet the needs of the Data & Enterprise Reporting function on a tactical and strategic basis.
- Accountable for ensuring compliance of the products and services with mandatory and regulatory requirements, control objectives in the risk and control framework and technical currency (in line with published standards and guidelines) and, with the architecture function, implementation of the business imperatives. The role holder must work with the IT communities of practice to maximize automation, increase efficiency and ensure that best practice, and the latest tools, techniques and processes, have been adopted.

Requirements
To be successful in this role, you should meet the following requirements:
- Experience in CI/CD - Ansible/Jenkins.
- Experience operating a container orchestration cluster (Kubernetes, Docker).
- Proficient knowledge of integrating the Spark framework with Delta Lake (a minimal PySpark sketch follows this listing).
- Knowledge of working on distributed compute platforms such as Spark, Hadoop or Trino.
- Experience in Python/PySpark.
- Knowledge of code review, code optimization and enforcing best-in-class coding standards.
- Experience with multi-tenant applications/platforms.
- Knowledge of access management, segregation of duties, change management processes and DevSecOps.
- Preferred: knowledge of the Apache ecosystem, e.g. Spark and Airflow.
- Preferred: experience with a database such as Postgres.
- Experience with UNIX and the Spark UI.
- Experience with ZooKeeper or similar orchestration.
- Experience using CI/CD automation tools (Git, Jenkins, and configuration deployment tools such as Puppet/Chef/Ansible).
- Significant experience with Linux operating system environments.
- Proficient understanding of the code versioning tool Git.
- Understanding of accessibility and security compliance.
- Knowledge of user authentication and authorization between multiple systems, servers, and environments.
- Strong unit test, integration test and debugging skills.
- Excellent problem-solving, log analysis and troubleshooting skills, including with Splunk and Acceldata.
- Experience with infrastructure scripting solutions such as Python/shell scripting.
- Experience with the scheduling tool Control-M.
- Experience with the log monitoring tool Splunk.
- Experience with HashiCorp Vault.
- Expertise in Python coding.

You'll achieve more when you join HSBC. www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.

Issued by - HSBC Software Development India
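For the Spark and Delta Lake integration the role asks about, here is a minimal hedged PySpark sketch (it assumes the open-source delta-spark package is installed; the paths and data are illustrative):

```python
# Sketch: enable Delta Lake in a Spark session, write a table, read it
# back. Paths and sample data are assumptions for illustration.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("delta-sketch")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

df = spark.createDataFrame([(1, "open"), (2, "closed")], ["case_id", "status"])
df.write.format("delta").mode("overwrite").save("/tmp/delta/cases")

# ACID semantics come from the Delta transaction log, not the raw
# parquet files underneath.
print(spark.read.format("delta").load("/tmp/delta/cases").count())
```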

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Haryana

On-site

As a Solution Architect at our Gurgaon office, you will play a key role in leading the development and delivery of various platforms. Your responsibilities will include designing end-to-end solutions that meet business requirements, defining architecture blueprints, selecting appropriate technology stacks, collaborating with stakeholders, guiding development teams, ensuring compliance with best practices, and identifying opportunities for innovation and efficiency within solutions. You will also facilitate daily meetings, sprint planning, and reviews, work closely with product owners, address project issues, and utilize Agile project management tools.

You should have a Bachelor's degree in Computer Science or a related field, with at least 7 years of experience in software engineering and 3 years in a solution architecture or technical leadership role. Proficiency in AWS or GCP cloud platforms, the JS tech stack (NodeJS, ReactJS), MySQL, PostgreSQL, Redis, MongoDB, Kafka, Spark, Trino, BI tools such as Looker and PowerBI, and data warehouses like BigQuery and Redshift is required. Experience with CI/CD pipelines, containerization (Docker/Kubernetes), and IaC (Terraform/CloudFormation) is also required.

Certifications such as AWS Certified Solutions Architect, Azure Solutions Architect Expert, or TOGAF, as well as experience with analytical pipelines, Python tools, data transformation tools, and data orchestration tools, are considered advantageous.

Join us at Affle, a global technology company that focuses on consumer engagement, acquisitions, and transactions through mobile advertising. Affle's platform offers contextual mobile ads to enhance marketing returns and reduce digital ad fraud. Learn more about us at www.affle.com. Explore Ultra, our BU that provides access to deals, coupons, and user acquisition services on a single platform for bottom-funnel optimization across various inventory sources. Visit https://www.ultraplatform.io/ for more information.

Posted 2 weeks ago

Apply

5.0 - 8.0 years

0 Lacs

India

On-site

When 5% of Indian households shop with us, it's important to build data-backed, resilient systems to manage millions of orders every day. We've done this - with zero downtime! ???? Sounds impossible Well, that's the kind of Engineering muscle that has helped Meesho become the e-commerce giant that it is today. We value speed over perfection, and see failures as opportunities to become better. We've taken steps to inculcate a strong Founder's Mindset across our engineering teams, making us grow and move fast. We place special emphasis on the continuous growth of each team member - and we do this with regular 1-1s and open communication. Tech Culture We have a unique tech culture where engineers are seen as problem solvers. The engineering org is divided into multiple pods and each pod is aligned to a particular business theme. It is a culture driven by logical debates & arguments rather than authority. At Meesho, you get to solve hard technical problems at scale as well as have a significant impact on the lives of millions of entrepreneurs. You are expected to contribute to the Solutioning of product problems as well as challenge existing solutions. Meesho's user base has grown 4x in the last 1 year and we have more than 50 million downloads of our app. Here are a few projects we have completed last year to scale oursystems for this growth: . We have developed API gateway aggregators using frameworks like Hystrix and spring-cloud-gateway for circuit breaking and parallel processing. . Our serving microservices handle more than 15K RPS on normal days and during saledays this can go to 30K RPS. Being a consumer app, these systems have SLAs of 10ms . Our distributed scheduler tracks more than 50 million shipments periodically fromdifferent partners and does async processing involving RDBMS. . We use an in-house video streaming platform to support a wide variety of devices and networks. What You'll Do Design and implement scalable and fault-tolerant data pipelines (batch and streaming) using frameworks like Apache Spark , Flink , and Kafka . Lead the design and development of data platforms and reusable frameworks that serve multiple teams and use cases. Build and optimize data models and schemas to support large-scale operational and analytical workloads. Deeply understand Apache Spark internals and be capable of modifying or extending the open-source Spark codebase as needed. Develop streaming solutions using tools like Apache Flink , Spark Structured Streaming . Drive initiatives that abstract infrastructure complexity , enabling ML, analytics, and product teams to build faster on the platform. Champion a platform-building mindset focused on reusability , extensibility , and developer self-service . Ensure data quality, consistency, and governance through validation frameworks, observability tooling, and access controls. Optimize infrastructure for cost, latency, performance , and scalability in modern cloud-native environments . Mentor and guide junior engineers , contribute to architecture reviews, and uphold high engineering standards. Collaborate cross-functionally with product, ML, and data teams to align technical solutions with business needs. What We're Looking For 5-8 years of professional experience in software/data engineering with a focus on distributed data systems . Strong programming skills in Java , Scala , or Python , and expertise in SQL . 
- At least 2 years of hands-on experience with big data systems including Apache Kafka, Apache Spark/EMR/Dataproc, Hive, Delta Lake, Presto/Trino, Airflow, and data lineage tools (e.g., DataHub, Marquez, OpenLineage).
- Experience implementing and tuning Spark/Delta Lake/Presto at terabyte scale or beyond.
- Strong understanding of Apache Spark internals (Catalyst, Tungsten, shuffle, etc.), with experience customizing or contributing to open-source code.
- Familiarity and hands-on experience with modern open-source and cloud-native data stack components such as: Apache Iceberg, Hudi, or Delta Lake; Trino/Presto, DuckDB, ClickHouse, Pinot, or Druid; Airflow, Dagster, or Prefect; dbt, Great Expectations, DataHub, or OpenMetadata; Kubernetes, Terraform, Docker.
- Strong analytical and problem-solving skills, with the ability to debug complex issues in large-scale systems.
- Exposure to data security, privacy, observability, and compliance frameworks is a plus.

Good to Have
- Contributions to open-source projects in the big data ecosystem (e.g., Spark, Kafka, Hive, Airflow).
- Hands-on data modeling experience and exposure to end-to-end data pipeline development.
- Familiarity with OLAP data cubes and BI/reporting tools such as Tableau, Power BI, Superset, or Looker.
- Working knowledge of tools and technologies like the ELK Stack (Elasticsearch, Logstash, Kibana), Redis, and MySQL.
- Exposure to backend technologies including RxJava, Spring Boot, and microservices architecture.

About Us
Welcome to Meesho, where every story begins with a spark of inspiration and a dash of entrepreneurial spirit. We're not just a platform; we're your partner in turning dreams into realities. Curious about life at Meesho? Explore our Glassdoor - our people have a lot to say, and they've helped us become a loved workplace in India.

Our Mission
Democratising internet commerce for everyone - Meesho (Meri Shop) started with a single idea in mind: to be an e-commerce destination for Indian consumers and to enable small businesses to succeed online. We provide our sellers with benefits such as zero commission and affordable shipping solutions in the market. Today, sellers nationwide are growing their businesses by tapping into Meesho's large and diverse customer base, state-of-the-art tech infrastructure, and pan-India logistics network through trusted third-party partners. Affordable, relatable merchandise that mirrors local markets has helped us connect with internet users and serve customers across urban, semi-urban, and rural India. Our unique business model and continuous innovation have established us as a part of India's e-commerce ecosystem.

Culture and Total Rewards
Our focus is on cultivating a dynamic workplace characterized by high impact and performance excellence. We prioritize a people-centric culture, dedicated to hiring and developing exceptional talent. Total rewards at Meesho comprise a comprehensive set of elements - monetary, non-monetary, tangible, and intangible. Our 9 guiding principles, or Mantras, are the backbone of how we operate, influencing everything from recognition and evaluation to growth discussions. Daily rituals and processes like Problem First Mindset, Listen or Die, our Internal Mobility Program, Talent Reviews, and Continuous Performance Management embody these principles. We offer competitive compensation - both cash and equity-based - tailored to job roles, individual experience, and skill, along with employee-centric benefits and a supportive work environment.
Our holistic wellness program, MeeCare, includes benefits across physical, mental, financial, and social wellness. This includes extensive medical insurance for employees and their families, wellness initiatives like telehealth, wellness events, and fitness-related perks. To support work-life balance, we offer generous leave policies, parental support, retirement benefits, and learning and development assistance. Through personalized recognition, gratitude for stretch work, and engaging activities, we promote employee delight at the workplace. Additional benefits such as salary advance support, relocation assistance, and flexible benefit plans further enrich the Meesho experience.
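For readers unfamiliar with the Kafka-to-Spark pattern this posting references, here is a minimal, illustrative PySpark Structured Streaming job - a sketch under assumed broker, topic, and schema names, not Meesho's actual code - that reads order events from Kafka and maintains per-minute counts:

```python
# Minimal sketch of a Kafka -> Spark Structured Streaming job.
# Broker address, topic name, schema, sink, and checkpoint path are all
# illustrative assumptions. Running this requires the spark-sql-kafka
# connector package on the Spark classpath.
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col, window
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = (SparkSession.builder
         .appName("order-stream-aggregator")
         .getOrCreate())

order_schema = StructType([
    StructField("order_id", StringType()),
    StructField("status", StringType()),
    StructField("event_time", TimestampType()),
])

orders = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")  # assumed broker
          .option("subscribe", "orders")                     # assumed topic
          .load()
          .select(from_json(col("value").cast("string"), order_schema).alias("o"))
          .select("o.*"))

# Tumbling one-minute window of order counts per status; the watermark
# bounds how long state is kept for late-arriving events.
counts = (orders
          .withWatermark("event_time", "5 minutes")
          .groupBy(window(col("event_time"), "1 minute"), col("status"))
          .count())

query = (counts.writeStream
         .outputMode("update")
         .format("console")  # placeholder sink; a real job would write to a store
         .option("checkpointLocation", "/tmp/checkpoints/orders")
         .start())
query.awaitTermination()
```

The watermark is the key design choice here: it caps the state the job must hold, which is what keeps streaming aggregations predictable at the RPS figures quoted in the posting.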

Posted 2 weeks ago

Apply

3.0 - 8.0 years

20 - 35 Lacs

hyderabad, pune, bengaluru

Work from Office

Required Skills & Experience:
Primary Technologies: Python, JavaScript, Apache Iceberg, Apache Spark
Benchmarking: TPC-H and TPC-DS benchmark implementation and analysis
Performance Tools: Trino, PyIceberg, DataFusion, performance optimization
Testing Frameworks: performance testing tools, load testing, bottleneck analysis
Infrastructure: Kubernetes, distributed systems performance
Experience: 4+ years in performance engineering, big data, and query optimization
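As a rough illustration of what "TPC-H benchmark implementation and analysis" against Trino can look like in practice, the sketch below times TPC-H query 6 using the trino Python client and Trino's built-in tpch connector. The coordinator host and run count are assumptions, and a compliant benchmark run would follow the full TPC-H specification rather than this micro-harness:

```python
# Illustrative micro-harness for timing a TPC-H-style query on Trino.
# Host, port, user, and run count are assumptions; install the client
# with `pip install trino`.
import statistics
import time

import trino

conn = trino.dbapi.connect(
    host="trino.example.com",  # assumed coordinator host
    port=8080,
    user="benchmark",
    catalog="tpch",            # Trino's built-in tpch connector generates data on the fly
    schema="sf1",              # scale factor 1
)

# TPC-H Q6: a scan-heavy aggregate, commonly used as a smoke test.
QUERY = """
SELECT sum(l_extendedprice * l_discount) AS revenue
FROM lineitem
WHERE l_shipdate >= DATE '1994-01-01'
  AND l_shipdate < DATE '1995-01-01'
  AND l_discount BETWEEN 0.05 AND 0.07
  AND l_quantity < 24
"""

timings = []
for run in range(5):
    cur = conn.cursor()
    start = time.perf_counter()
    cur.execute(QUERY)
    cur.fetchall()  # drain results so the full query cost is measured
    timings.append(time.perf_counter() - start)

print(f"median {statistics.median(timings):.3f}s over {len(timings)} runs")
```

Reporting the median over repeated runs, with results fully drained, avoids the two most common benchmarking mistakes: timing only query submission, and letting a single warm-up or outlier run dominate the number.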

Posted 3 weeks ago

Apply

5.0 - 7.0 years

0 Lacs

bengaluru, karnataka, india

On-site

Hiring for Data QA Engineer at HCLTech
Primary skills: Python, PySpark, Postgres, Data Pipeline QA
Location: Bangalore, Noida, Chennai, Hyderabad, Pune
Notice Period: immediate joiners only
Experience: 5 to 7 years
Interested candidates, share your resume to [HIDDEN TEXT]
Develop and execute test strategies focused on data pipelines, including ingestion, transformation, storage, and retrieval. Perform data integrity validation by verifying schema consistency, data completeness, and correctness across distributed systems. Conduct Change Data Capture (CDC) testing to ensure seamless data updates and synchronization. Optimize and validate queries for performance and correctness using SQL, Trino, and Amazon Redshift. Implement automated data validation and regression tests to monitor for anomalies, data drift, and pipeline failures. Perform load testing and stress testing for large-scale data processing workflows. Identify bottlenecks and performance issues in ETL/ELT workflows and work with data engineers to optimize them. Ensure compliance with data governance standards, including security, retention, and access controls. Collaborate with engineers and reliability teams to define best practices for data quality and testing automation. Document test cases, defects, and resolutions, providing insights for continuous improvements.
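To make the validation responsibilities above concrete, here is a minimal PySpark sketch of the kinds of automated checks described: schema consistency, row-count reconciliation, and completeness. Table names, the tolerance threshold, and the key column are illustrative assumptions, not HCLTech's framework:

```python
# Sketch of automated pipeline validation checks: schema consistency,
# row-count reconciliation, and a simple completeness rule.
# Table names and the null-checked column are illustrative assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("pipeline-qa-checks").getOrCreate()

source = spark.table("raw.orders")      # assumed source table
target = spark.table("curated.orders")  # assumed target table

# 1. Schema consistency: the same columns and types must exist on both sides.
src_schema = {(f.name, f.dataType.simpleString()) for f in source.schema.fields}
tgt_schema = {(f.name, f.dataType.simpleString()) for f in target.schema.fields}
assert src_schema == tgt_schema, f"schema drift: {src_schema ^ tgt_schema}"

# 2. Row-count reconciliation, with a small tolerance for in-flight CDC rows.
src_count, tgt_count = source.count(), target.count()
assert abs(src_count - tgt_count) <= 0.001 * src_count, (
    f"row counts diverge: source={src_count}, target={tgt_count}")

# 3. Completeness: business keys must never be null after transformation.
null_keys = target.filter(target.order_id.isNull()).count()
assert null_keys == 0, f"{null_keys} rows with null order_id"

print("all validation checks passed")
```

In a real suite these assertions would live in a test framework and run on a schedule, so that drift and CDC lag surface as failing checks rather than downstream report discrepancies.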

Posted 3 weeks ago

Apply

0.0 years

0 Lacs

chennai, tamil nadu, india

On-site

Job Description
Role Overview
We are seeking an experienced Data Engineer with strong expertise in SQL, Python, PySpark, Airflow, Trino, and Hive to design, develop, and optimize our data pipelines. The role involves working with large flat-file datasets, orchestrating workflows using Airflow, performing data transformations using Spark, and loading the final layers into Snowflake for analytics and reporting.
Key Responsibilities
- Data Pipeline Development: Build, maintain, and optimize data pipelines using Airflow for orchestration and scheduling.
- Data Ingestion & Transformation: Work with flat files (CSV, JSON, mainframe-based files) and ensure accurate ingestion and transformation.
- Spark-based Processing: Use PySpark for large-scale data processing and implement custom UDFs (user-defined functions) where needed.
- SQL Development: Create, optimize, and maintain SQL scripts for data manipulation, reporting, and analytics.
- Snowflake Data Integration: Load and manage the final processed data layers in Snowflake.
- Data Quality & Metrics: Implement checks for file size limits, data consistency, and daily metric tracking.
- Collaboration & Requirements Gathering: Work with business and technical teams to understand requirements and deliver efficient data solutions.
Required Skills & Experience
- Proficiency in SQL (query optimization, joins, indexing).
- Strong Python programming skills, including writing reusable functions.
- Hands-on experience with PySpark (adding columns, transformations, cache usage, UDFs).
- Proficiency in Airflow for workflow orchestration.
- Familiarity with the Trino and Hive query engines.
- Experience with flat-file formats (CSV, JSON, mainframe-based files) and data parsing strategies.
- Understanding of data normalization, unique constraints, and caching strategies.
- Experience working with Snowflake or other cloud data warehouses.
Preferred Qualifications
- Knowledge of performance tuning in Spark and SQL.
- Understanding of data governance and security best practices.
- Experience with large-file processing (including max-size handling).
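As an illustration of the pipeline shape this role describes - Airflow orchestrating a PySpark transform over a flat file - the sketch below wires a file-size check ahead of a transform with a custom UDF. Paths, the schedule, and the normalization rule are assumptions; the Snowflake load is only indicated by a comment, and a production setup would typically submit the Spark job to a cluster rather than run it inside the worker:

```python
# Sketch of an Airflow DAG orchestrating a PySpark flat-file transform.
# Paths, size limit, schedule, and the UDF's rule are illustrative
# assumptions. The `schedule` argument assumes Airflow 2.4+.
import os
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

MAX_FILE_BYTES = 5 * 1024**3         # assumed 5 GB ingest limit
INPUT_PATH = "/data/in/orders.csv"   # assumed landing path


def check_file_size():
    """Fail fast if the incoming file breaches the agreed size limit."""
    size = os.path.getsize(INPUT_PATH)
    if size > MAX_FILE_BYTES:
        raise ValueError(f"{INPUT_PATH} is {size} bytes, over the limit")


def transform():
    """Normalise a column with a custom UDF and write the curated layer."""
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import udf
    from pyspark.sql.types import StringType

    spark = SparkSession.builder.appName("orders-transform").getOrCreate()

    # Custom UDF: normalise free-text status codes before loading.
    normalise = udf(lambda s: (s or "").strip().upper(), StringType())

    df = spark.read.option("header", True).csv(INPUT_PATH)
    df = df.withColumn("status", normalise(df["status"]))
    df.write.mode("overwrite").parquet("/data/out/orders/")
    # A Snowflake load step (e.g., COPY INTO from stage) would follow here.


with DAG(
    dag_id="orders_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    size_check = PythonOperator(task_id="check_file_size",
                                python_callable=check_file_size)
    run_transform = PythonOperator(task_id="transform",
                                   python_callable=transform)
    size_check >> run_transform
```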

Posted 4 weeks ago

Apply

12.0 - 15.0 years

12 - 15 Lacs

Bengaluru, Karnataka, India

On-site

Visa is seeking a Lead Data Engineer in the Data Platform department to act as one of the key technology leaders building and managing Visa's technology assets in the Platform as a Service organization. As a Lead Data Engineer, you will work on building a next-generation hybrid data platform based on open-source big data technologies. You will be responsible for defining the architecture and enhancing Visa's data platform, which handles hundreds of petabytes of data, and for contributing to the technical roadmap of the evolving platform. You will have the opportunity to lead, guide, and mentor senior technical staff on the design and development of new features. You will work closely with the open-source communities of various big data projects including, but not limited to, Trino, Spark, Hive, Pinot, HBase, and Iceberg. This position is based in Bangalore, KA and reports to the Sr. Director of Engineering. This is a hybrid position; the expectation of days in office will be confirmed by your hiring manager.
Qualifications
Basic Qualifications
- Bachelor's degree in Computer Science or a related technical discipline, with 12+ years of software development experience in building large-scale data processing platforms.
Preferred Qualifications
- Experience in building hybrid data platforms (on-prem and cloud) based on open-source technologies like Trino/Presto, Hive, Spark, Tez, HBase, Iceberg, Doris, Pinot, etc.
- Proficiency in platform and product architecture, engineering practices, and design patterns, and in writing high-quality code, with expertise in core Java and Python.
- Experience in building big data platforms using open-source technologies.
- Contributions to one or more big data open-source technologies of the Hadoop ecosystem (Trino/Hive/Spark/Tez/Iceberg/Hudi/HBase, etc.) will be a plus; a committer/PMC position in any of these open-source projects will be an added advantage.
- Experience working with cloud-native data services; hands-on experience building data services/solutions/applications on any of the public clouds (AWS/Azure/GCP).

Posted 1 month ago

Apply

7.0 - 12.0 years

7 - 12 Lacs

Bengaluru, Karnataka, India

On-site

Visa is seeking a Senior Data Engineer in the Data Platform department to act as one of the key technology leaders building and managing Visa's technology assets in the Platform as a Service organization. As a Senior Data Engineer, you will work on enhancements to open-source platforms. You will have the opportunity to lead, participate in, guide, and mentor other engineers in the team on design and development. This position is based in Bangalore, KA and reports to the Director of Engineering. This is a hybrid position; the expectation of days in office will be confirmed by your hiring manager.
Qualifications
- 7 or more years of work experience with a Bachelor's Degree or an Advanced Degree (e.g., Masters, MBA, JD, MD), or up to 3 years of relevant experience with a PhD.
- Bachelor's degree in Computer Science or a related technical discipline, with 4+ years of software development experience in building large-scale data processing platforms.
- Proficiency in engineering practices and writing high-quality code, with expertise in either Java or Python.
- Experience in building platforms on top of big data open-source platforms like Trino, Presto, Hive, Spark, or Tez.
- Contributions to one or more big data open-source technologies (Trino/Hive/Spark/Tez, etc.) will be a plus.
- Experience in building cloud data technologies and in building platforms on top of cloud data stacks.

Posted 1 month ago

Apply

10.0 - 14.0 years

0 Lacs

karnataka

On-site

As a software developer at Salesforce, you will have the opportunity to contribute lines of code that have a significant and measurable positive impact on users, the company's bottom line, and the industry. Working alongside a team of world-class engineers, you will play a key role in building breakthrough features that our customers will love, adopt, and use, all while ensuring the stability and scalability of our trusted CRM platform. Your responsibilities will include architecture, design, implementation, and testing to ensure that our products are built correctly and released with high quality. You will also have the opportunity to engage in code review, mentor junior engineers, and provide technical guidance to the team, depending on your seniority level. At Salesforce, we take pride in writing high-quality, maintainable code that enhances product stability and streamlines our processes. As a Lead Engineer, you will build new and innovative components in a rapidly evolving market technology landscape to enhance scale and efficiency. Your role will involve developing high-quality, production-ready code used by millions of users, making design decisions based on performance, scalability, and future expansion, and contributing to all phases of the software development life cycle. Additionally, you will work within a Hybrid Engineering model and collaborate on building efficient components in a microservices multi-tenant SaaS cloud environment. To excel in this role, you should possess mastery of multiple programming languages and platforms, have at least 10 years of software development experience, demonstrate proficiency in object-oriented programming and scripting languages such as Java, Python, Scala, C#, Go, Node.js, and C++, and exhibit strong SQL skills with experience in relational and non-relational databases. Experience with developing SaaS applications on public cloud infrastructure such as AWS, Azure, and GCP, as well as competency in queues, locks, scheduling, event-driven architecture, workload distribution, and software development best practices, are also essential. Salesforce offers a comprehensive benefits package, including well-being reimbursement, generous parental leave, adoption assistance, fertility benefits, and more. You will have access to world-class enablement and on-demand training through Trailhead.com, exposure to executive thought leaders, and regular coaching sessions with leadership. Additionally, you will have the opportunity to participate in volunteer activities and contribute to the community through Salesforce's 1:1:1 model. For further information regarding benefits and perks, please visit https://www.salesforcebenefits.com/.

Posted 1 month ago

Apply

10.0 - 14.0 years

0 Lacs

pune, maharashtra

On-site

About the Team
As a part of the DoorDash organization, you will be joining a data-driven team that values timely, accurate, and reliable data to make informed business and product decisions. Data serves as the foundation of DoorDash's success, and the Data Engineering team is responsible for building database solutions tailored to various use cases such as reporting, product analytics, marketing optimization, and financial reporting. By implementing robust data structures and data warehouse architecture, this team plays a crucial role in facilitating decision-making processes at DoorDash. Additionally, the team focuses on enhancing the developer experience by developing tools that support the organization's high-velocity demands.
About the Role
DoorDash is seeking a dedicated Data Engineering Manager to lead the development of enterprise-scale data solutions. In this role, you will serve as a technical expert on all aspects of data architecture, empowering data engineers, data scientists, and DoorDash partners. Your responsibilities will include fostering a culture of engineering excellence, enabling engineers to deliver reliable and flexible solutions at scale. Furthermore, you will be instrumental in building and nurturing a high-performing team, driving innovation and success in a dynamic and fast-paced environment.
In this role, you will:
- Lead and manage a team of data engineers, focusing on hiring, building, growing, and nurturing impactful business-focused data teams.
- Drive the technical and strategic vision for embedded pods and foundational enablers to meet current and future scalability and interoperability needs.
- Strive for continuous improvement of data architecture and development processes.
- Balance quick wins with long-term strategy and engineering excellence, breaking down large systems into user-friendly data assets and reusable components.
- Collaborate cross-functionally with stakeholders, external partners, and peer data leaders.
- Utilize effective planning and execution tools to ensure short-term and long-term team and stakeholder success.
- Prioritize reliability and quality as essential components of data solutions.
Qualifications:
- Bachelor's, Master's, or Ph.D. in Computer Science or an equivalent field.
- Over 10 years of experience in data engineering, data platform, or related domains.
- Minimum of 2 years of hands-on management experience.
- Strong communication and leadership skills, with a track record of hiring and growing teams in a fast-paced environment.
- Proficiency in programming languages such as Python, Kotlin, and SQL.
- Prior experience with technologies like Snowflake, Databricks, Spark, Trino, and Pinot.
- Familiarity with the AWS ecosystem and large-scale batch/real-time ETL orchestration using tools like Airflow, Kafka, and Spark Streaming.
- Knowledge of data lake file formats including Delta Lake, Apache Iceberg, Glue Catalog, and S3.
- Proficiency in system design and experience with AI solutions in the data space.
At DoorDash, we are dedicated to fostering a diverse and inclusive community within our company and beyond. We believe that innovation thrives in an environment where individuals from diverse backgrounds, experiences, and perspectives come together. We are committed to providing equal opportunities for all and creating an inclusive workplace where everyone can excel and contribute to our collective success.

Posted 1 month ago

Apply
