12.0 - 18.0 years
0 Lacs
karnataka
On-site
Role Overview:
As a Senior Data Integration Engineer, you will lead and manage large-scale data integration initiatives across multiple cloud platforms. Your expertise in designing and implementing modern data pipelines, integrating heterogeneous data sources, and enabling scalable data storage solutions will be crucial. The role involves working with Azure, AWS, GCP, cloud storage systems, and Apache Iceberg.

Key Responsibilities:
- Design, build, and manage data integration solutions across hybrid and multi-cloud environments.
- Develop and optimize data pipelines for scalability, performance, and reliability.
- Implement and manage cloud storage solutions such as Cloud Object Store.
- Leverage Apache Iceberg for scalable data lakehouse implementations (see the sketch below).
- Collaborate with data architects, analysts, and engineering teams to ensure seamless integration of structured and unstructured data sources.
- Ensure data security, governance, and compliance across platforms.
- Provide technical leadership, mentoring, and guidance to junior engineers.
- Stay current with emerging trends in data engineering, cloud technologies, and data integration frameworks.

Qualifications Required:
- 12-18 years of strong experience in data engineering and integration.
- Hands-on expertise with the Azure, AWS, and GCP cloud platforms.
- Proficiency in cloud storage technologies such as Cloud Object Store.
- Deep experience with Apache Iceberg for modern data lake and lakehouse implementations.
- Strong knowledge of ETL/ELT pipelines, data ingestion, and transformation frameworks.
- Proven track record of designing and deploying large-scale, enterprise-grade data integration solutions.
- Excellent problem-solving, analytical, and troubleshooting skills.
- Strong communication and leadership abilities, with experience mentoring teams.
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
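To make the lakehouse requirement concrete, here is a minimal sketch of a PySpark batch write into an Apache Iceberg table. It assumes the Iceberg Spark runtime JAR is on the classpath; the catalog name, bucket paths, and table name are hypothetical.

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("iceberg-ingest-sketch")
    # Register an Iceberg catalog backed by an object-store warehouse path.
    .config("spark.sql.catalog.demo", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.demo.type", "hadoop")
    .config("spark.sql.catalog.demo.warehouse", "s3://example-bucket/warehouse")
    .getOrCreate()
)

# Ingest a heterogeneous source (here, JSON files landed in object storage).
orders = spark.read.json("s3://example-bucket/landing/orders/")

# Create the Iceberg table on the first run; subsequent runs could .append().
orders.writeTo("demo.sales.orders").createOrReplace()
```

Because Iceberg tracks snapshots and schema evolution in table metadata, the same table can then be read consistently from Spark, Trino, or other engines.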
Posted 16 hours ago
5.0 - 9.0 years
0 Lacs
kochi, kerala
On-site
You are being sought for the role of Senior Data Engineer, leading the development of a scalable data ingestion framework. You will ensure high data quality and validation, design and implement robust APIs for seamless data integration, and apply your expertise in building and managing big data pipelines with modern AWS-based technologies, keeping quality and efficiency at the forefront of data processing systems.

Key Responsibilities:
- **Data Ingestion Framework**:
  - Design & Development: Architect, develop, and maintain an end-to-end data ingestion framework that efficiently extracts, transforms, and loads data from diverse sources.
  - Framework Optimization: Use AWS services such as AWS Glue, Lambda, EMR, ECS, EC2, and Step Functions to build highly scalable, resilient, and automated data pipelines.
- **Data Quality & Validation**:
  - Validation Processes: Develop and implement automated data quality checks, validation routines, and error-handling mechanisms to ensure the accuracy and integrity of incoming data (a minimal sketch of such a routine follows this listing).
  - Monitoring & Reporting: Establish comprehensive monitoring, logging, and alerting systems to proactively identify and resolve data quality issues.
- **API Development**:
  - Design & Implementation: Architect and develop secure, high-performance APIs that integrate data services with external applications and internal systems.
  - Documentation & Best Practices: Create thorough API documentation and establish standards for API security, versioning, and performance optimization.
- **Collaboration & Agile Practices**:
  - Cross-Functional Communication: Work closely with business stakeholders, data scientists, and operations teams to understand requirements and translate them into technical solutions.
  - Agile Development: Participate in sprint planning, code reviews, and agile ceremonies, and contribute to continuous improvement initiatives and CI/CD pipeline development.

Required Qualifications:
- **Experience & Technical Skills**:
  - Professional Background: At least 5 years of relevant data engineering experience, with a strong emphasis on analytical platform development.
  - Programming Skills: Proficiency in Python and/or PySpark, plus SQL, for developing ETL processes and handling large-scale data manipulation.
  - AWS Expertise: Extensive experience with AWS services including AWS Glue, Lambda, Step Functions, and S3 for building and managing data ingestion frameworks.
  - Data Platforms: Familiarity with big data systems (e.g., AWS EMR, Apache Spark, Apache Iceberg) and databases such as DynamoDB, Aurora, Postgres, or Redshift.
  - API Development: Proven experience designing and implementing RESTful APIs and integrating them with external and internal systems.
  - CI/CD & Agile: Hands-on experience with CI/CD pipelines (preferably GitLab) and Agile development methodologies.
- **Soft Skills**:
  - Strong problem-solving abilities and attention to detail.
  - Excellent communication and interpersonal skills, with the ability to work independently and collaboratively.
  - Capacity to quickly learn and adapt to new technologies and evolving business requirements.

Preferred Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- Experience with additional AWS services such as Kinesis, Firehose, and SQS.
- Familiarity with data lakehouse architectures and modern data quality frameworks.
- Prior experience in a role that required proactive data quality management and API-driven integrations in complex, multi-cluster environments.

Please note that the job is based in Kochi and Thiruvananthapuram, and only local candidates are eligible to apply. This is a full-time position that requires in-person work.

Experience Required:
- AWS: 7 years
- Python: 7 years
- PySpark: 7 years
- ETL: 7 years
- CI/CD: 7 years

Location: Kochi, Kerala
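As a hedged illustration of the automated data-quality checks described above, here is a small PySpark validation routine. The column names and rules are assumptions for the sketch; a production framework would also persist metrics and raise alerts.

```python
from pyspark.sql import DataFrame, functions as F

def validate_orders(df: DataFrame) -> list[str]:
    """Return human-readable violations; an empty list means the batch passes."""
    errors = []
    total = df.count()
    if total == 0:
        errors.append("batch is empty")
        return errors
    # Completeness: required keys must not be null.
    null_ids = df.filter(F.col("order_id").isNull()).count()
    if null_ids:
        errors.append(f"{null_ids} rows missing order_id")
    # Uniqueness: the primary key must not repeat.
    distinct_ids = df.select("order_id").distinct().count()
    if distinct_ids != total - null_ids:
        errors.append("duplicate order_id values detected")
    # Validity: amounts must be non-negative.
    bad_amounts = df.filter(F.col("amount") < 0).count()
    if bad_amounts:
        errors.append(f"{bad_amounts} rows with negative amount")
    return errors
```

A pipeline stage could call `validate_orders` on each incoming batch and route failing batches to a quarantine location instead of loading them.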
Posted 4 days ago
2.0 - 6.0 years
0 Lacs
chennai, tamil nadu
On-site
We are looking for a highly motivated and enthusiastic Intermediate Software Developer to join our growing engineering team. This role suits an individual with 3-5 years of experience who is eager to learn and excel in a fast-paced environment. You will work on exciting projects involving large-scale data processing, analytics, and software development, using technologies such as Java, Apache Spark, Python, and Apache Iceberg. The position offers hands-on experience with cutting-edge data lake technologies and a vital role in enhancing critical data infrastructure.

As an Intermediate Software Developer, you will collaborate with senior developers and data engineers to design, develop, test, and deploy scalable data processing pipelines and applications. You will write clean, efficient, and well-documented code in Java and Python for data ingestion, transformation, and analysis tasks, and use Apache Spark for distributed data processing with a focus on performance optimization and resource management. A key part of the role is working with Apache Iceberg tables to manage large, evolving datasets in our data lake while ensuring data consistency and reliability (see the sketch after this listing). You will also troubleshoot, debug, and resolve issues in existing data pipelines and applications, participate in code reviews to maintain a high standard of code quality and best practices, adapt to new technologies and methodologies as project requirements evolve, and contribute to the documentation of technical designs, processes, and operational procedures.

To qualify for this role, you should have 2-5 years of relevant experience and hold a Bachelor's degree in Computer Science, Software Engineering, Data Science, or a related technical field. Strong foundational knowledge of object-oriented programming principles, proficiency in Java or Python, and a basic understanding of data structures, algorithms, and the software development lifecycle are essential, along with familiarity with version control systems like Git. Eagerness to learn, a strong passion for software development and data technologies, excellent problem-solving skills, attention to detail, and good communication and teamwork abilities round out the key qualifications. Education requirements include a Bachelor's degree or equivalent experience.

Familiarity with distributed computing concepts, an understanding of Apache Spark or experience with data processing frameworks, exposure to cloud platforms like AWS, Azure, or GCP, knowledge of SQL and database concepts, and any experience or coursework related to data lakes, data warehousing, or Apache Iceberg would be advantageous. Please note that this job description provides an overview of the work performed, and additional job-related duties may be assigned as required.
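Because Iceberg tables carry their own snapshot history, the "large, evolving datasets" mentioned above can be audited and re-read as of an earlier state. The sketch below assumes a SparkSession named `spark` with an Iceberg catalog called `demo` already configured; the table name and timestamp are hypothetical.

```python
# Read the current state of the table.
current = spark.read.table("demo.db.events")

# Inspect the snapshot log that Iceberg maintains in table metadata.
spark.sql(
    "SELECT snapshot_id, committed_at FROM demo.db.events.snapshots"
).show()

# Time travel: re-read the table exactly as it was at a past point in time
# (the option takes epoch milliseconds).
as_of = (
    spark.read.option("as-of-timestamp", "1700000000000")
    .table("demo.db.events")
)
```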
Posted 1 week ago
0.0 years
0 Lacs
bengaluru, karnataka, india
On-site
About Us
At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world's largest networks, powering millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies. Cloudflare protects and accelerates any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare have their web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvements in performance and a decrease in spam and other attacks. Cloudflare was named to Entrepreneur Magazine's Top Company Cultures list and ranked among the World's Most Innovative Companies by Fast Company.

We realize people do not fit into neat boxes. We are looking for curious and empathetic individuals who are committed to developing themselves and learning new skills, and we are ready to help you do that. We cannot complete our mission without building a diverse and inclusive team. We hire the best people based on an evaluation of their potential and support them throughout their time at Cloudflare. Come join us!

Available Locations: Bengaluru

About The Department
The Growth Engineering team is responsible for building world-class experiences that help the millions of Cloudflare self-service customers get what they need faster, from acquisition and onboarding all the way through to adoption and scale-up. Our team is focused on high-velocity experimentation and thoughtful optimization of that experience on Cloudflare's properties. The team has a dual mandate, also focusing on evolving our current marketing attribution, customer event ingress, and experimentation capabilities that process billions of events across those properties to drive data-driven decision making. As an engineer on the team responsible for Data Capture and Experimentation, your job will be to deliver on those growth-driven features and experiences while evolving our current marketing attribution, consumer event ingress, and experimentation setup across these experiences, and to partner with many teams on implementations.

About The Role
We are looking for experienced full-stack engineers to join the Experimentation and Data Capture team. The ideal candidate will have experience working with large-scale applications, familiarity with event-driven data capture, and a strong understanding of system design. You must care deeply not only about the quality of your and the team's code, but also about the customer experience and developer experience. We have a great opportunity to evolve our current data capture and experimentation systems to better serve our customers. We are also strong believers in dog-fooding our own products. From cache configuration to Cloudflare Access, Cloudflare Workers, and Zaraz, these are all tools in our engineers' tool belt, so it is a plus if you have been a customer of ours, even as a free user.
What You'll Do
The Experimentation and Data Capture Engineering Team will be responsible for the following:
- Technical delivery of Experimentation and Data Capture capabilities intended for all of our customer-facing UI properties, driving user acquisition, engagement, and retention through data-driven strategies and technical implementations
- Collaborate with product, design, and stakeholders to establish outcome measurements, roadmaps, and key deliverables
- Own and lead execution of engineering projects in the area of web data acquisition and experimentation
- Work across the entire product lifecycle, from conceptualization through production
- Build features end-to-end: front-end, back-end, IaC, system design, debugging, and testing, engaging with feature teams and data processing teams
- Inspire and mentor less experienced engineers
- Work closely with the trust and safety team to handle any compliance or data privacy-related matters

Examples Of Desirable Skills, Knowledge And Experience
- Comfort with building reusable SDKs and UI components in TypeScript/JavaScript is required; comfort with other languages (Go/Rust/Python) is a plus
- Experience building with high-scale serverless systems like Cloudflare Workers, AWS Lambda, Azure Functions, etc.
- Design and execution of A/B tests and experiments to optimize for business KPIs, including user onboarding, feature adoption, and overall product experience (a toy bucketing sketch follows this section)
- Creating reusable components for other developers to leverage
- Experience publishing to and querying from data lake/warehouse products like ClickHouse or Apache Iceberg to evaluate experiments; familiarity with commercial analytics systems (Adobe Analytics, Google BigQuery, etc.) is a plus
- Implementing tracking and attribution systems to understand user behavior and measure the effectiveness of growth initiatives
- Familiarity with event-driven architectures, high-scale data processing, the issues that can occur, and how to protect against them
- Familiarity with global data privacy requirements governed by laws like GDPR/CCPA, and the implications for data capture, modeling, and analysis
- Desire to work in a very fast-paced environment

What Makes Cloudflare Special
We're not just a highly ambitious, large-scale technology company. We're a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.

Project Galileo: Since 2014, we've equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare's enterprise customers, at no cost.

Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project began, we've provided services to more than 425 local government election websites in 33 states.

1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure, and privacy-centric public DNS resolver. It is available publicly for everyone to use; it is the first consumer-focused service Cloudflare has ever released. Here's the deal: we don't store client IP addresses. Never, ever. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.
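As a toy illustration of experiment assignment (not Cloudflare's actual implementation), the sketch below shows deterministic bucketing: hashing the user and experiment IDs so each user sees a stable variant. The function and variant names are hypothetical.

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "treatment")) -> str:
    """Hash user + experiment so a user always lands in the same bucket."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % len(variants)
    return variants[bucket]

# A user keeps the same assignment across sessions and devices:
assert assign_variant("user-42", "onboarding-v2") == \
       assign_variant("user-42", "onboarding-v2")
```

Real systems layer exposure logging, targeting rules, and mutually exclusive experiment layers on top of this primitive.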
Sound like something you'd like to be a part of? We'd love to hear from you!

This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.

Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person's, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law. We are an AA/Veterans/Disabled Employer.

Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. If you require a reasonable accommodation to apply for a job, please contact us via e-mail at [HIDDEN TEXT] or via mail at 101 Townsend St. San Francisco, CA 94107.
Posted 1 week ago
8.0 - 10.0 years
0 Lacs
gurgaon, haryana, india
On-site
Expedia Group brands power global travel for everyone, everywhere. We design cutting-edge tech to make travel smoother and more memorable, and we create groundbreaking solutions for our partners. Our diverse, vibrant, and welcoming community is essential in driving our success.

Why Join Us
To shape the future of travel, people must come first. Guided by our Values and Leadership Agreements, we foster an open culture where everyone belongs, differences are celebrated, and we know that when one of us wins, we all win. We provide a full benefits package, including exciting travel perks, generous time off, parental leave, a flexible work model (with some pretty cool offices), and career development resources, all to fuel our employees' passion for travel and ensure a rewarding career journey. We're building a more open world. Join us.

Introduction to team
Our Corporate Functions are made up of teams that support Expedia Group, including Employee Communications, Finance, Traveler and Partner Service Platform, Legal, People Team, Inclusion and Diversity, and Global Social Impact and Sustainability. The Global Financial Technology organization is a strategic partner for all finance functions and a service delivery organization that ensures finance initiatives and dependencies are completed on time and within budget. As part of the Financial Planning and Reporting team, you will engineer the financial data that forms the backbone of our core FP&A processes for Expedia Group.

In This Role, You Will
- Lead a team of Data Engineers to design and build robust, scalable data solutions, datasets, and data platforms.
- Translate business requirements into efficient, well-supported data solutions that seamlessly integrate with the overall system architecture, enabling customers and stakeholders to address customer needs through data-driven insights.
- Participate in the full development cycle, end-to-end, from design, implementation, and testing to documentation, delivery, and maintenance.
- Produce comprehensive, user-friendly architectures for Data Warehouse solutions that seamlessly integrate with the organization's broader data ecosystem.
- Design, create, manage, and utilize large-scale data sets.
- Be accountable for leading the design and delivery of scalable ETL (Extract, Transform, Load) processes within the data lake platform.
- Own roadmap development: partner with product managers on the roadmap, capacity planning, and feature rollout.
- Drive business alignment: tie technical work to financial outcomes (cost savings, efficiency, accuracy).
- Technical Expertise: Strong in multiple programming languages and data technologies; responsible for architectural decisions, system design, and domain ownership.
- Operational Excellence: Advocate for and implement best practices in testing, monitoring, alerting, data validation, and performance optimization.
- Strategic & Business Alignment: Partner with product and business teams to align technology initiatives with business outcomes, including cost optimization and roadmap planning.
- Cross-Team Collaboration & Influence: Work across teams and senior leadership to drive process improvements, share domain knowledge, and influence technical direction.
- Mentorship & Talent Development: Develop team culture, support professional growth, and build a diverse talent pipeline through coaching and structured feedback.
- Domain Knowledge & Innovation: Apply deep industry knowledge to improve systems, recommend frameworks, and stay ahead of trends in data engineering and cloud technologies.
- Define team goals and align them with business outcomes.
- Act as a bridge between technical and non-technical stakeholders, ensuring clarity.
- Drive continuous improvement by anticipating bottlenecks and removing blockers.

Experience And Qualifications
- 8+ years (Bachelor's) or 6+ years (Master's) in data engineering.
- Strong in multiple technologies, data platforms, and cloud services.
- Proven experience in mentoring, project leadership, and team enablement.
- Skilled in data modeling, streaming, validation, and performance tuning.
- Excellent communication, documentation, and stakeholder management.
- Experience managing distributed teams and large-scale projects.
- Passionate about building diverse, high-performing teams and culture.
- Experience with resource allocation, capacity planning, and balancing FTE vs. contingent staff.
- Skilled at evaluating and addressing team skill gaps.
- Leadership & People Management: 3+ years managing teams of 6-10+ data engineers, including hiring, performance management, and mentoring across multiple global locations.
- Project & Program Delivery: Led 3+ multi-quarter data engineering projects, overseeing execution, cross-functional collaboration, and alignment with product roadmaps.
- Technology: Must have experience with Apache Spark, Java/Scala, Hive/RDBMS, code deployments on AWS/Kubernetes, Git version control, and SQL. Good to have: experience with Airflow, CI/CD, Apache Iceberg, Kubernetes/Docker, and Apache Kafka/Flink (a minimal Airflow sketch follows this listing).

Accommodation requests
If you need assistance with any part of the application or recruiting process due to a disability, or other physical or mental health conditions, please reach out to our Recruiting Accommodations Team through the Accommodation Request. We are proud to be named a Best Place to Work on Glassdoor in 2024 and to be recognized for award-winning culture by organizations like Forbes, TIME, Disability:IN, and others.

Expedia Group's family of brands includes: Brand Expedia, Hotels.com, Expedia Partner Solutions, Vrbo, trivago, Orbitz, Travelocity, Hotwire, Wotif, ebookers, CheapTickets, Expedia Group Media Solutions, Expedia Local Expert, CarRentals.com, and Expedia Cruises. 2024 Expedia, Inc. All rights reserved. Trademarks and logos are the property of their respective owners. CST: 2029030-50

Employment opportunities and job offers at Expedia Group will always come from Expedia Group's Talent Acquisition and hiring teams. Never provide sensitive, personal information to someone unless you're confident who the recipient is. Expedia Group does not extend job offers via email or any other messaging tools to individuals with whom we have not made prior contact. Our email domain is @expediagroup.com. The official website to find and apply for job openings at Expedia Group is careers.expediagroup.com/jobs. Expedia is committed to creating an inclusive work environment with a diverse workforce. All qualified applicants will receive consideration for employment without regard to race, religion, gender, sexual orientation, national origin, disability or age.
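For orientation, here is a hedged sketch of the kind of orchestrated ETL a team like this might run in Apache Airflow (listed above as good to have). The DAG ID, schedule, and the extract/load callables are hypothetical placeholders, assuming a recent Airflow 2.x release.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    ...  # pull source data (placeholder)

def load():
    ...  # write results to the warehouse (placeholder)

with DAG(
    dag_id="fpa_daily_load",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    # Run the load only after the extract succeeds.
    extract_task >> load_task
```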
Posted 1 week ago
10.0 - 15.0 years
0 Lacs
karnataka
On-site
As a skilled engineer, you will play a crucial role in designing robust and scalable data platforms and microservices. Your primary responsibility will be ensuring that architectural and coding best practices are followed to deliver secure, reliable, and maintainable systems. You will actively engage in debugging complex system issues and work with cross-functional teams to define project scope, goals, and deliverables.

In this role, you will manage priorities, allocate resources effectively, and ensure timely delivery of projects. You will also lead, mentor, and grow a team of 10-15 engineers, fostering a collaborative and high-performance culture. Conducting regular one-on-one sessions, providing career development support, and managing performance evaluations are also part of your responsibilities.

Driving innovation is a key aspect of this position: you will identify new technologies and methodologies to enhance systems and processes, and define and track Key Performance Indicators (KPIs) to measure engineering efficiency, system performance, and team productivity. You will collaborate closely with Product Managers, Data Scientists, Customer Success engineers, and other stakeholders to align efforts with business goals, and partner with other engineering teams to deliver cross-cutting features.

Requirements:
- 10-15 years of relevant experience.
- Excellent leadership, communication, and interpersonal skills to effectively manage a diverse, geographically distributed team.
- Hands-on experience building and scaling data platforms and microservices-based products.
- Proficiency in programming languages commonly used for backend and data engineering, such as Java, Python, and Go.
- Operational experience with tools like Kafka/Kafka Streams, Spark, Databricks, Apache Iceberg, Apache Druid, and ClickHouse.
- Familiarity with relational and NoSQL databases.

If you are looking for a challenging yet rewarding opportunity to contribute to the growth and success of Traceable AI, this position could be the perfect fit for you.
Posted 1 week ago
10.0 - 15.0 years
0 Lacs
karnataka
On-site
As a Data Platform Engineering Manager at Traceable AI, you will design robust and scalable data platforms and microservices, ensuring that architectural and coding best practices are followed to deliver secure, reliable, and maintainable systems. You will actively participate in debugging complex system issues and work with cross-functional teams to define project scope, goals, and deliverables. Managing priorities, allocating resources effectively, and ensuring timely project delivery will be crucial aspects of your role.

In this position, you will lead, mentor, and grow a team of 10-15 engineers, fostering a collaborative and high-performance culture. Conducting regular one-on-ones, providing career development support, and managing performance evaluations will be part of your responsibilities. You will drive innovation by identifying new technologies and methodologies to improve systems and processes, and define and track KPIs to measure engineering efficiency, system performance, and team productivity. Collaboration with Product Managers, Data Scientists, Customer Success engineers, and other stakeholders to define requirements and align efforts with business goals will be essential, as will partnering with fellow engineering teams to deliver cross-cutting features.

To be successful in this role, you should have 10-15 years of experience and excellent leadership, communication, and interpersonal skills, with the ability to manage a diverse, geographically distributed team. Hands-on experience building and scaling data platforms and microservices-based products is required, along with proficiency in programming languages commonly used for backend and data engineering such as Java, Python, and Go. Operational experience with technologies like Kafka, Spark, Databricks, Apache Iceberg, Apache Druid, and ClickHouse, as well as relational and NoSQL databases, is also necessary.

Join Traceable AI and be part of a dynamic team that is at the forefront of innovation in data platform engineering.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
Are you ready to take your engineering career to the next level? As a Mid Lead Engineer, you will contribute to building state-of-the-art data platforms in AWS by leveraging Python and Spark. Join a dynamic team focused on driving innovation and scalability for data solutions in a supportive, hybrid work environment.

In this role, you will design, implement, and optimize ETL workflows using Python and Spark, playing a key part in building a robust data lakehouse architecture on AWS. Success in this position requires technical expertise, strong problem-solving skills, and the ability to collaborate effectively within an agile team.

Your responsibilities will include:
- Designing, building, and maintaining robust, scalable, and efficient ETL pipelines using Python and Spark.
- Developing workflows that leverage AWS services such as EMR Serverless, Glue, Glue Data Catalog, Lambda, and S3 (a minimal Glue job sketch follows this listing).
- Implementing data quality frameworks and governance practices to ensure reliable data processing.
- Collaborating with cross-functional teams to gather requirements, provide technical insights, and deliver high-quality solutions.
- Optimizing existing workflows and driving migration to a modern data lakehouse architecture integrating Apache Iceberg.
- Enforcing coding standards, design patterns, and system architecture best practices.
- Monitoring system performance and ensuring data reliability through proactive optimizations.
- Contributing to technical discussions, mentoring junior team members, and fostering a culture of learning and innovation.

To excel in this role, you must have:
- Expertise in Python and Spark, with hands-on experience with AWS EMR Serverless, Glue, Glue Data Catalog, Lambda, S3, and EMR.
- A strong understanding of data quality frameworks, governance practices, and scalable architectures.
- Practical knowledge of Apache Iceberg within data lakehouse solutions.
- Problem-solving skills and experience optimizing workflows for performance and cost-efficiency.
- Experience with Agile methodology, including sprint planning and retrospectives.
- Excellent communication skills for articulating technical solutions to diverse stakeholders.

Familiarity with additional programming languages such as Java, experience with serverless computing paradigms, and knowledge of visualization or reporting tools are considered desirable skills.

Join us at LSEG, a leading global financial markets infrastructure and data provider, where our purpose is driving financial stability, empowering economies, and enabling customers to create sustainable growth. If you are looking to be part of a collaborative and creative culture that values individuality and encourages new ideas, LSEG offers a dynamic environment where you can bring your true self to work and help enrich our diverse workforce. We are committed to sustainability across our global business and aim to re-engineer the financial ecosystem to support and drive sustainable economic growth.
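As a hedged sketch of the Glue-based workflows mentioned above, the following is a minimal AWS Glue PySpark job skeleton: read a table registered in the Glue Data Catalog, deduplicate with Spark, and write curated Parquet to S3. The database, table, key column, and bucket names are hypothetical.

```python
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job bootstrap: resolve arguments and initialize the job.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a source table registered in the Glue Data Catalog.
src = glue_context.create_dynamic_frame.from_catalog(
    database="raw", table_name="trades"
)

# Transform with plain Spark, then write curated Parquet back to S3.
df = src.toDF().dropDuplicates(["trade_id"])
df.write.mode("overwrite").parquet("s3://example-bucket/curated/trades/")

job.commit()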
Posted 2 weeks ago
4.0 - 9.0 years
10 - 20 Lacs
chennai
Work from Office
JD:
• Good experience with Apache Iceberg, Apache Spark, and Trino
• Proficiency in SQL and data modeling
• Experience with an open Data Lakehouse using Apache Iceberg
• Experience with Data Lakehouse architecture with Apache Iceberg and Trino
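To ground the stack named above, here is a hedged sketch of querying an Iceberg table through Trino from Python using the `trino` client library. The host, catalog, schema, and table names are assumptions.

```python
import trino

# Connect to a Trino coordinator exposing an Iceberg catalog.
conn = trino.dbapi.connect(
    host="trino.example.internal",
    port=8080,
    user="analyst",
    catalog="iceberg",
    schema="sales",
)
cur = conn.cursor()
cur.execute("SELECT region, count(*) AS orders FROM orders GROUP BY region")
for region, orders in cur.fetchall():
    print(region, orders)
```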
Posted 2 weeks ago
4.0 - 9.0 years
10 - 20 Lacs
bengaluru
Work from Office
JD:
• Good experience with Apache Iceberg, Apache Spark, and Trino
• Proficiency in SQL and data modeling
• Experience with an open Data Lakehouse using Apache Iceberg
• Experience with Data Lakehouse architecture with Apache Iceberg and Trino
Posted 2 weeks ago
4.0 - 9.0 years
10 - 20 Lacs
hyderabad
Work from Office
JD:
• Good experience with Apache Iceberg, Apache Spark, and Trino
• Proficiency in SQL and data modeling
• Experience with an open Data Lakehouse using Apache Iceberg
• Experience with Data Lakehouse architecture with Apache Iceberg and Trino
Posted 2 weeks ago
9.0 - 11.0 years
0 Lacs
hyderabad, telangana, india
On-site
Job Description
Join our dynamic Digital Marketing Data Engineering team at Fanatics, where you'll play a critical role in shaping the big data ecosystem that powers our eCommerce and Digital Marketing platforms. As a full-time Staff Data Engineer, you'll design, build, and optimize scalable data pipelines and architectures, ensuring seamless data flow and effective collection across cross-functional teams. You will also leverage your backend engineering skills to support API integrations and real-time data exchanges.

What We're Looking For
- BTech/MTech/BS/MS in Computer Science or a related field, or equivalent practical experience.
- 9+ years of software engineering experience, with a strong track record of building data pipelines and big data solutions.
- At least 5 years of hands-on experience in Data Engineering roles.
- Proficiency in Big Data technologies such as Apache Spark, Apache Iceberg, Amazon Redshift, Athena, EMR, Apache Airflow, Apache Kafka, and AWS services (S3, Lambda); a toy Kafka consumer sketch follows this listing.
- Expertise in at least one programming language: Scala, Java, or Python.
- Strong background in designing and building data models, integrating data from multiple sources, and developing robust ETL/ELT pipelines.
- Expert-level SQL programming skills.
- Proven data analysis and data modeling expertise, with the ability to create data-driven insights and effective visualizations.
- Familiarity with data quality, lineage, and governance frameworks.
- Energetic, detail-oriented, and collaborative, with a passion for delivering high-quality solutions.

Bonus Points
- Experience in the e-commerce or retail domain.
- Knowledge of StarRocks or similar OLAP engines.
- Experience with Web Services, API integrations, third-party data exchanges, and streaming platforms.
- A passion for building scalable, high-quality analytics platforms and data products.

About Us
Fanatics is building a leading global digital sports platform. The company ignites the passions of global sports fans and maximizes the presence and reach of hundreds of sports partners globally by offering innovative products and services across Fanatics Commerce, Fanatics Collectibles, and Fanatics Betting & Gaming, allowing sports fans to Buy, Collect, and Bet. Through the Fanatics platform, sports fans can buy licensed fan gear, jerseys, lifestyle and streetwear products, headwear, and hardgoods; collect physical and digital trading cards, sports memorabilia, and other digital assets; and bet as the company builds its Sportsbook and iGaming platform. Fanatics has an established database of over 100 million global sports fans, a global partner network with over 900 sports properties, including major national and international professional sports leagues, teams, players associations, athletes, celebrities, colleges, and college conferences, and over 2,000 retail locations, including its Lids retail business stores.

As a market leader with more than 18,000 employees, and hundreds of partners, suppliers, and vendors worldwide, we take responsibility for driving toward more ethical and sustainable practices. We are committed to building an inclusive Fanatics community, reflecting and representing society at every level of the business, including our employees, vendors, partners, and fans. Fanatics is also dedicated to making a positive impact in the communities where we all live, work, and play through strategic philanthropic initiatives.
About The Team
Fanatics Commerce is a leading designer, manufacturer, and seller of licensed fan gear, jerseys, lifestyle and streetwear products, headwear, and hardgoods. It operates a vertically integrated platform of digital and physical capabilities for leading sports leagues, teams, colleges, and associations globally, as well as its flagship site, www.fanaccs.com.

Fanatics Commerce has a broad range of online, sports venue, and vertical apparel partnerships worldwide, including comprehensive partnerships with leading leagues, teams, colleges, and sports organizations across the world, including the NFL, NBA, MLB, NHL, MLS, Formula 1, and Australian Football League (AFL); the Dallas Cowboys, Golden State Warriors, Paris Saint-Germain, Manchester United, Chelsea FC, and Tokyo Giants; the University of Notre Dame, University of Alabama, and University of Texas; the International Olympic Committee (IOC), England Rugby, and the Union of European Football Association (UEFA).
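As a toy illustration of the streaming-platform experience listed above (not Fanatics' actual pipeline), here is a minimal Kafka consumer in Python using the confluent-kafka client. The broker address, topic, group ID, and the `process` handler are hypothetical.

```python
from confluent_kafka import Consumer

def process(payload: bytes) -> None:
    ...  # placeholder for the downstream pipeline stage

consumer = Consumer({
    "bootstrap.servers": "kafka.example.internal:9092",
    "group.id": "dm-event-loader",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["web-events"])

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue
        if msg.error():
            print("consumer error:", msg.error())
            continue
        # Hand the raw event bytes to the downstream stage.
        process(msg.value())
finally:
    consumer.close()
```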
Posted 2 weeks ago
8.0 - 10.0 years
20 - 22 Lacs
gurugram
Work from Office
About the Role
We are seeking a highly skilled Senior Data Engineer to design, build, and optimize scalable data platforms that empower data-driven decision-making. This role requires deep technical expertise in modern data engineering frameworks, architectural patterns, and cloud-native solutions on AWS. You will be a key contributor to our data strategy, ensuring data quality, governance, and reliability while mentoring other engineers on the team.

Key Responsibilities
- Design, develop, and own robust, scalable, and maintainable data pipelines (batch and real-time); a minimal streaming sketch follows this listing.
- Architect and implement Data Lake, Data Warehouse, and Lakehouse solutions using modern frameworks and architectural patterns.
- Ensure data quality, governance, and integrity across the entire data lifecycle.
- Monitor, troubleshoot, and optimize the performance of data pipelines.
- Contribute to and enforce best practices, design principles, and technical documentation.
- Partner with cross-functional teams to translate business requirements into effective technical solutions.
- Provide mentorship and guidance to junior data engineers, fostering continuous learning and growth.

Qualifications and Skills
- Bachelor's degree in Computer Science, Information Systems, or a related field (Master's degree preferred).
- 8+ years of experience as a Data Engineer, with a proven track record of building large-scale, production-grade pipelines.
- Expertise in AWS Data Services (S3, Glue, Athena, EMR, Kinesis, etc.).
- Strong proficiency in SQL and a deep understanding of file and table formats (Parquet, Delta Lake, Apache Iceberg, Hudi) and CDC patterns.
- Hands-on experience with stream processing frameworks (Apache Flink, Kafka Streams, or PySpark).
- Proficiency in Apache Airflow or similar workflow orchestration tools.
- Strong knowledge of database systems (relational and NoSQL) and data warehousing concepts.
- Experience with data integration tools and cloud-based data platforms.
- Excellent problem-solving skills and the ability to work independently in fast-paced environments.
- Strong communication and collaboration skills to work effectively with both technical and business stakeholders.
- Passion for emerging technologies and keeping pace with industry best practices.
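As a hedged sketch of the real-time side of this role, here is a minimal PySpark Structured Streaming pipeline: Kafka in, Parquet out, with checkpointing for recovery. It assumes the Spark Kafka connector is on the classpath; the broker, topic, and S3 paths are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("clickstream-sketch").getOrCreate()

# Read raw events from Kafka as an unbounded stream.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "kafka.example.internal:9092")
    .option("subscribe", "clickstream")
    .load()
    .select(
        F.col("value").cast("string").alias("payload"),
        F.col("timestamp"),
    )
)

# Land micro-batches as Parquet; the checkpoint lets the job resume safely.
query = (
    events.writeStream.format("parquet")
    .option("path", "s3://example-bucket/bronze/clickstream/")
    .option("checkpointLocation", "s3://example-bucket/checkpoints/clickstream/")
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()
```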
Posted 2 weeks ago
4.0 - 8.0 years
10 - 15 Lacs
bengaluru
Work from Office
About Role
We are looking for an experienced Backend Developer with strong expertise in Python, FastAPI, Socket.IO, and Kubernetes to join our team in Bangalore. The ideal candidate has a proven track record of building and scaling production-grade systems handling 10k+ users, with deep knowledge of microservices, asynchronous programming, and distributed architectures.

Responsibilities
- Design, develop, and maintain backend services and APIs using FastAPI and Python (a minimal sketch follows this listing).
- Build and optimize real-time applications leveraging WebSockets (Socket.IO) and asyncio.
- Architect scalable microservices and ensure high availability in Kubernetes (K8s) environments.
- Implement message queues (Celery, RabbitMQ, AWS SQS) for task scheduling and event-driven pipelines.
- Work with databases (Postgres, MongoDB, Databricks, Iceberg) for efficient data storage and retrieval.
- Ensure production-grade coding practices with a focus on performance, reliability, and security.
- Collaborate with DevOps to fine-tune deployment strategies and API scaling on Kubernetes.
- Monitor, troubleshoot, and optimize backend systems for high-traffic environments.

Requirements
- 4+ years of hands-on backend development experience.
- Strong expertise in Python (production-grade coding), FastAPI, and Socket.IO.
- Proven experience with asynchronous programming (asyncio).
- Hands-on experience with Celery, RabbitMQ, or AWS SQS.
- Strong database knowledge: Postgres, MongoDB, Databricks, Iceberg.
- Solid understanding of scalable microservices on Kubernetes (beyond just deployments).
- Experience in production deployments handling 10k+ users.

Good to Have
- Experience with CI/CD pipelines, Docker, and monitoring tools (Prometheus, Grafana).
- Exposure to distributed data processing.
- Contributions to open-source projects or technical blogs.
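As a minimal sketch of the FastAPI + asyncio style this role calls for, the endpoint below fans out two independent lookups concurrently instead of awaiting them one after another. The route path and the fetch helpers are illustrative assumptions.

```python
import asyncio
from fastapi import FastAPI

app = FastAPI()

async def fetch_score(user_id: str) -> float:
    await asyncio.sleep(0.01)  # stand-in for a DB or service call
    return 0.5

async def fetch_history(user_id: str) -> list:
    await asyncio.sleep(0.01)  # stand-in for another I/O-bound call
    return []

@app.get("/users/{user_id}/summary")
async def user_summary(user_id: str):
    # Run independent lookups concurrently; total latency is the max of
    # the two calls rather than their sum.
    score, history = await asyncio.gather(
        fetch_score(user_id),
        fetch_history(user_id),
    )
    return {"user_id": user_id, "score": score, "events": history}
```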
Posted 2 weeks ago
7.0 - 12.0 years
10 - 15 Lacs
bengaluru
Hybrid
Hiring an AWS Data Engineer for a 6-month hybrid contractual role based in Bellandur, Bengaluru. The ideal candidate will have 7+ years of experience in data engineering, with strong expertise in AWS services (S3, EC2, RDS, Lambda, EKS), PostgreSQL, Redis, Apache Iceberg, and Graph/Vector Databases. Proficiency in Python or Golang is essential. Responsibilities include designing and optimizing data pipelines on AWS, managing structured and in-memory data, implementing advanced analytics with vector/graph databases, and collaborating with cross-functional teams. Prior experience with CI/CD and containerization (Docker/Kubernetes) is a plus.
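To illustrate the "in-memory data" responsibility named above, here is a hedged sketch of a read-through cache in Redis in front of a slower primary store, using redis-py. The host, key scheme, and TTL are assumptions, and `load_from_postgres` is a placeholder for the real RDS lookup.

```python
import json
import redis

r = redis.Redis(host="redis.example.internal", port=6379, decode_responses=True)

def load_from_postgres(entity_id: str) -> dict:
    ...  # placeholder for the real RDS/PostgreSQL query

def get_entity(entity_id: str) -> dict:
    key = f"entity:{entity_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)  # cache hit: skip the database entirely
    row = load_from_postgres(entity_id)
    r.setex(key, 300, json.dumps(row))  # cache for 5 minutes
    return row
```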
Posted 2 weeks ago
4.0 - 6.0 years
4 - 8 Lacs
bengaluru
Hybrid
Hiring an AWS Data Engineer for a 6-month hybrid contractual role based in Bellandur, Bengaluru. The ideal candidate will have 4-6 years of experience in data engineering, with strong expertise in AWS services (S3, EC2, RDS, Lambda, EKS), PostgreSQL, Redis, Apache Iceberg, and Graph/Vector Databases. Proficiency in Python or Golang is essential. Responsibilities include designing and optimizing data pipelines on AWS, managing structured and in-memory data, implementing advanced analytics with vector/graph databases, and collaborating with cross-functional teams. Prior experience with CI/CD and containerization (Docker/Kubernetes) is a plus.
Posted 2 weeks ago
5.0 - 10.0 years
20 - 25 Lacs
bengaluru
Work from Office
The Platform Data Engineer will be responsible for designing and implementing robust data platform architectures, integrating diverse data technologies, and ensuring scalability, reliability, performance, and security across the platform. The role involves:
- Setting up and managing infrastructure for data pipelines, storage, and processing.
- Developing internal tools to enhance platform usability.
- Implementing monitoring and observability.
- Collaborating with software engineering teams for seamless integration.
- Driving capacity planning and cost optimization initiatives.
Posted 2 weeks ago
5.0 - 8.0 years
0 Lacs
india
On-site
When 5% of Indian households shop with us, it's important to build data-backed, resilient systems to manage millions of orders every day. We've done this with zero downtime! Sounds impossible? Well, that's the kind of engineering muscle that has helped Meesho become the e-commerce giant that it is today. We value speed over perfection and see failures as opportunities to become better. We've taken steps to inculcate a strong Founder's Mindset across our engineering teams, making us grow and move fast. We place special emphasis on the continuous growth of each team member, and we do this with regular 1-1s and open communication.

Tech Culture
We have a unique tech culture where engineers are seen as problem solvers. The engineering org is divided into multiple pods, and each pod is aligned to a particular business theme. It is a culture driven by logical debates and arguments rather than authority. At Meesho, you get to solve hard technical problems at scale as well as have a significant impact on the lives of millions of entrepreneurs. You are expected to contribute to the solutioning of product problems as well as challenge existing solutions. Meesho's user base has grown 4x in the last year, and we have more than 50 million downloads of our app. Here are a few projects we completed last year to scale our systems for this growth:
- We developed API gateway aggregators using frameworks like Hystrix and spring-cloud-gateway for circuit breaking and parallel processing.
- Our serving microservices handle more than 15K RPS on normal days, and on sale days this can go to 30K RPS. Being a consumer app, these systems have SLAs of 10ms.
- Our distributed scheduler tracks more than 50 million shipments periodically from different partners and does async processing involving RDBMS.
- We use an in-house video streaming platform to support a wide variety of devices and networks.

What You'll Do
- Design and implement scalable and fault-tolerant data pipelines (batch and streaming) using frameworks like Apache Spark, Flink, and Kafka.
- Lead the design and development of data platforms and reusable frameworks that serve multiple teams and use cases.
- Build and optimize data models and schemas to support large-scale operational and analytical workloads.
- Deeply understand Apache Spark internals and be capable of modifying or extending the open-source Spark codebase as needed (a small join-tuning sketch follows this listing).
- Develop streaming solutions using tools like Apache Flink and Spark Structured Streaming.
- Drive initiatives that abstract infrastructure complexity, enabling ML, analytics, and product teams to build faster on the platform.
- Champion a platform-building mindset focused on reusability, extensibility, and developer self-service.
- Ensure data quality, consistency, and governance through validation frameworks, observability tooling, and access controls.
- Optimize infrastructure for cost, latency, performance, and scalability in modern cloud-native environments.
- Mentor and guide junior engineers, contribute to architecture reviews, and uphold high engineering standards.
- Collaborate cross-functionally with product, ML, and data teams to align technical solutions with business needs.

What We're Looking For
- 5-8 years of professional experience in software/data engineering with a focus on distributed data systems.
- Strong programming skills in Java, Scala, or Python, and expertise in SQL.
- At least 2 years of hands-on experience with big data systems including Apache Kafka, Apache Spark/EMR/Dataproc, Hive, Delta Lake, Presto/Trino, Airflow, and data lineage tools (e.g., DataHub, Marquez, OpenLineage).
- Experience implementing and tuning Spark/Delta Lake/Presto at terabyte scale or beyond.
- Strong understanding of Apache Spark internals (Catalyst, Tungsten, shuffle, etc.), with experience customizing or contributing to open-source code.
- Familiarity and hands-on experience with modern open-source and cloud-native data stack components such as:
  - Apache Iceberg, Hudi, or Delta Lake
  - Trino/Presto, DuckDB, ClickHouse, Pinot, or Druid
  - Airflow, Dagster, or Prefect
  - DBT, Great Expectations, DataHub, or OpenMetadata
  - Kubernetes, Terraform, Docker
- Strong analytical and problem-solving skills, with the ability to debug complex issues in large-scale systems.
- Exposure to data security, privacy, observability, and compliance frameworks is a plus.

Good to Have
- Contributions to open-source projects in the big data ecosystem (e.g., Spark, Kafka, Hive, Airflow).
- Hands-on data modeling experience and exposure to end-to-end data pipeline development.
- Familiarity with OLAP data cubes and BI/reporting tools such as Tableau, Power BI, Superset, or Looker.
- Working knowledge of tools and technologies like the ELK Stack (Elasticsearch, Logstash, Kibana), Redis, and MySQL.
- Exposure to backend technologies including RxJava, Spring Boot, and microservices architecture.

About us
Welcome to Meesho, where every story begins with a spark of inspiration and a dash of entrepreneurial spirit. We're not just a platform; we're your partner in turning dreams into realities. Curious about life at Meesho? Explore our Glassdoor; our people have a lot to say, and they've helped us become a loved workplace in India.

Our Mission
Democratising internet commerce for everyone. Meesho (Meri Shop) started with a single idea in mind: to be an e-commerce destination for Indian consumers and to enable small businesses to succeed online. We provide our sellers with benefits such as zero commission and affordable shipping solutions in the market. Today, sellers nationwide are growing their businesses by tapping into Meesho's large and diverse customer base, state-of-the-art tech infrastructure, and pan-India logistics network through trusted third-party partners. Affordable, relatable merchandise that mirrors local markets has helped us connect with internet users and serve customers across urban, semi-urban, and rural India. Our unique business model and continuous innovation have established us as a part of India's e-commerce ecosystem.

Culture and Total Rewards
Our focus is on cultivating a dynamic workplace characterized by high impact and performance excellence. We prioritize a people-centric culture, dedicated to hiring and developing exceptional talent. Total rewards at Meesho comprise a comprehensive set of elements, monetary, non-monetary, tangible, and intangible. Our 9 guiding principles, or Mantras, are the backbone of how we operate, influencing everything from recognition and evaluation to growth discussions. Daily rituals and processes like Problem First Mindset, Listen or Die, our Internal Mobility Program, Talent Reviews, and Continuous Performance Management embody these principles. We offer competitive compensation, both cash and equity-based, tailored to job roles, individual experience, and skill, along with employee-centric benefits and a supportive work environment.
Our holistic wellness program, MeeCare, includes benefits across physical, mental, financial, and social wellness. This includes extensive medical insurance for employees and their families, and wellness initiatives like telehealth, wellness events, and fitness-related perks. To support work-life balance, we offer generous leave policies, parental support, retirement benefits, and learning and development assistance. Through personalized recognition, gratitude for stretched work, and engaging activities, we promote employee delight at the workplace. Additional benefits such as salary advance support, relocation assistance, and flexible benefit plans further enrich the Meesho experience. Know more about Meesho here:
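As a small illustration of the Spark tuning this role expects (not Meesho's actual code), the sketch below replaces a shuffle-heavy sort-merge join against a small dimension table with a broadcast join. The table paths and column names are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("join-tuning-sketch").getOrCreate()

orders = spark.read.parquet("s3://example-bucket/orders/")    # large fact table
sellers = spark.read.parquet("s3://example-bucket/sellers/")  # small dimension

# Broadcasting `sellers` ships the whole table to every executor, so the
# large `orders` side is joined in place with no shuffle.
enriched = orders.join(F.broadcast(sellers), on="seller_id", how="left")

enriched.explain()  # the plan should show BroadcastHashJoin, not SortMergeJoin
```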
Posted 2 weeks ago
8.0 - 12.0 years
0 Lacs
karnataka
On-site
As a Software Development Engineer at our company, you will work from either Hyderabad, Telangana, India or Bengaluru, Karnataka, India. You should hold a Bachelor's degree in Computer Science, a related technical field, or have equivalent practical experience. With at least 8 years of experience in software development using general-purpose programming languages, you will design and build solutions for data migrations and develop software for Data Warehouse migration. Your expertise in programming languages such as Java, C/C++, Python, or Go will be crucial for this role. Experience in systems engineering, designing and building distributed processing systems using languages like Java, Python, or Scala, will be highly beneficial. Knowledge of data warehouse design and developing enterprise data warehouse solutions is preferred. You should also have experience in data analytics and be able to leverage data systems to provide insights and support business decisions. Familiarity with modern open-source technologies in the big data ecosystem, including frameworks like Apache Spark, Apache Hive, and Apache Iceberg, will be an advantage.

In this role, you will design solutions that facilitate data migrations from petabytes to exabytes per day, accelerating customers' journey to Business Intelligence (BI) and Artificial Intelligence (AI) by delivering multiple products to external Google Cloud Platform (GCP) customers. Your responsibilities will include designing and developing software for Data Warehouse migration, collaborating with Product and Engineering Managers, providing technical direction and mentorship to the team, owning the end-to-end delivery of new features, and engaging with cross-functional partners to build and deliver integrated solutions (a hedged migration sketch follows this listing).

Join us in our mission to accelerate organizations' ability to digitally transform their business and industry by delivering enterprise-grade solutions that leverage cutting-edge technology and tools. As a trusted partner, we help customers around the world enable growth and solve their most critical business problems.
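One concrete step in a warehouse migration of this kind is converting existing Hive tables to an open table format. The sketch below uses Apache Iceberg's built-in Spark procedures; it assumes a SparkSession (`spark`) configured with the Iceberg runtime and SQL extensions, and the table names are hypothetical.

```python
# In-place conversion: `migrate` rewrites the Hive table's metadata so the
# same data files become an Iceberg table under the same name.
spark.sql("CALL spark_catalog.system.migrate('warehouse.events')")

# A cautious alternative: `snapshot` builds an Iceberg table over the same
# data files while leaving the original Hive table untouched for validation.
spark.sql("""
    CALL spark_catalog.system.snapshot(
        source_table => 'warehouse.events',
        table => 'warehouse.events_iceberg'
    )
""")
```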
Posted 3 weeks ago
8.0 - 12.0 years
15 - 20 Lacs
bengaluru
Hybrid
We are looking for a highly skilled Scala Data Engineer to design, build, and optimize large-scale data platforms. The ideal candidate will have deep expertise in Scala, Spark, and SQL, with proven experience delivering scalable, high-performance data solutions in cloud-native environments.

Key Responsibilities
- Design, build, and optimize scalable data pipelines using Apache Spark and Scala.
- Develop real-time streaming pipelines leveraging Kafka/Event Hubs.
- Own the design and architecture of data systems with a strong focus on performance, scalability, and reliability.
- Collaborate with cross-functional teams to deliver high-quality data products.
- Mentor junior engineers and enforce best practices in coding, testing, and data engineering standards.
- Implement and maintain data governance and lineage practices.

Mandatory Skills
- Strong programming expertise in Scala (preferred over Java).
- Advanced proficiency in SQL and Apache Spark.
- Strong understanding of data structures and algorithms.
- Hands-on experience in data engineering for large-scale systems.
- Exposure to cloud-native environments (Azure preferred).

Preferred Skills
- Big data ecosystem: Hadoop, Kafka, Structured Streaming/Event Hub.
- CI/CD tools: Git, Docker, Jenkins.
- Experience with Medallion Architecture, Parquet, and Apache Iceberg.
- Orchestration tools: Airflow, Oozie.
- Familiarity with NoSQL DBs (Cassandra, MongoDB, etc.).
- Experience with data governance tools: Alation, Collibra, lineage, and metadata management.

Location: Bangalore, Hybrid (3 days WFO/week)
Posted 3 weeks ago
0.0 years
0 Lacs
bengaluru, karnataka, india
On-site
ROLE PROFILE:
Are you ready to take your engineering career to the next level? Join us as a Mid Lead Engineer and contribute to building state-of-the-art data platforms in AWS, leveraging Python and Spark. Be part of a dynamic team driving innovation and scalability for data solutions in a supportive and hybrid work environment.

ROLE SUMMARY:
This role is ideal for an experienced data engineer looking to step into a leadership position while remaining hands-on with cutting-edge technologies. You will design, implement, and optimize ETL workflows using Python and Spark, contributing to our robust data lakehouse architecture on AWS. Success in this role requires technical expertise, strong problem-solving skills, and the ability to collaborate effectively within an agile team.

WHAT YOU'LL BE DOING:
- Design, build, and maintain robust, scalable, and efficient ETL pipelines using Python and Spark.
- Develop workflows leveraging AWS services such as EMR Serverless, Glue, Glue Data Catalog, Lambda, and S3.
- Implement data quality frameworks and governance practices to ensure reliable data processing.
- Collaborate with cross-functional teams to gather requirements, provide technical insights, and deliver high-quality solutions.
- Optimize existing workflows and drive migration to a modern data lakehouse architecture, integrating Apache Iceberg.
- Enforce coding standards, design patterns, and system architecture best practices.
- Monitor system performance and ensure data reliability through proactive optimizations.
- Contribute to technical discussions, mentor junior team members, and foster a culture of learning and innovation.

WHAT YOU'LL BRING:
Essential Skills
- Expertise in Python and Spark, with proven experience designing and implementing data workflows.
- Hands-on experience with AWS EMR Serverless, Glue, Glue Data Catalog, Lambda, S3, and EMR.
- Strong understanding of data quality frameworks, governance practices, and scalable architectures.
- Practical knowledge of Apache Iceberg within data lakehouse solutions.
- Problem-solving skills and experience optimizing workflows for performance and cost-efficiency.
- Agile methodology experience, including sprint planning and retrospectives.
- Excellent communication skills for articulating technical solutions to diverse stakeholders.

Desirable Skills
- Familiarity with additional programming languages such as Java.
- Experience with serverless computing paradigms.
- Knowledge of visualization or reporting tools.

LSEG is a leading global financial markets infrastructure and data provider. Our purpose is driving financial stability, empowering economies, and enabling customers to create sustainable growth. Our purpose is the foundation on which our culture is built. Our values of Integrity, Partnership, Excellence, and Change underpin our purpose and set the standard for everything we do, every day. They go to the heart of who we are and guide our decision making and everyday actions.

Working with us means that you will be part of a dynamic organisation of 25,000 people across 65 countries. However, we will value your individuality and enable you to bring your true self to work so you can help enrich our diverse workforce. You will be part of a collaborative and creative culture where we encourage new ideas and are committed to sustainability across our global business. You will experience the critical role we have in helping to re-engineer the financial ecosystem to support and drive sustainable economic growth.
Together, we are aiming to achieve this growth by accelerating the just transition to net zero, enabling growth of the green economy, and creating inclusive economic opportunity. LSEG offers a range of tailored benefits and support, including healthcare, retirement planning, paid volunteering days, and wellbeing initiatives. We are proud to be an equal opportunities employer. This means that we do not discriminate on the basis of anyone's race, religion, colour, national origin, gender, sexual orientation, gender identity, gender expression, age, marital status, veteran status, pregnancy or disability, or any other basis protected under applicable law. Conforming with applicable law, we can reasonably accommodate applicants' and employees' religious practices and beliefs, as well as mental health or physical disability needs. Please take a moment to read this privacy notice carefully, as it describes what personal information London Stock Exchange Group (LSEG) ("we") may hold about you, what it's used for, how it's obtained, your rights, and how to contact us as a data subject. If you are submitting as a Recruitment Agency Partner, it is essential and your responsibility to ensure that candidates applying to LSEG are aware of this privacy notice.
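To illustrate the kind of pipeline work this role describes, here is a minimal, hypothetical PySpark sketch of a batch ETL job that writes to an Apache Iceberg table registered in the AWS Glue Data Catalog. The bucket, database, and table names are invented for illustration, and it assumes the Iceberg Spark runtime and AWS bundle jars are available on the cluster (as they are on EMR when Iceberg is enabled):

```python
# Minimal PySpark sketch: batch ETL into an Iceberg table registered in the
# AWS Glue Data Catalog. All names below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder.appName("orders-etl")
    # Register an Iceberg catalog backed by the Glue Data Catalog.
    .config("spark.sql.catalog.glue", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.glue.catalog-impl",
            "org.apache.iceberg.aws.glue.GlueCatalog")
    .config("spark.sql.catalog.glue.warehouse", "s3://example-lakehouse/warehouse/")
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .getOrCreate()
)

# Extract: read raw JSON landed in S3.
raw = spark.read.json("s3://example-lakehouse/raw/orders/")

# Transform: basic cleansing plus a simple data-quality gate.
clean = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("amount") > 0)
       .withColumn("order_date", F.to_date("order_ts"))
)

# Load: append into an Iceberg table (created beforehand) in the Glue catalog.
clean.writeTo("glue.analytics.orders").append()

spark.stop()
```

On EMR Serverless, a script like this would typically be submitted as a job run, with the Iceberg catalog settings supplied as Spark properties rather than hard-coded in the application.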
Posted 3 weeks ago
5.0 - 7.0 years
0 Lacs
bengaluru, karnataka, india
On-site
We have an opportunity to impact your career and provide an adventure where you can push the limits of what's possible. As a Lead Software Engineer at JPMorgan Chase within Asset and Wealth Management, you play a crucial role in an agile team dedicated to improving, developing, and providing reliable, cutting-edge technology solutions that are secure, stable, and scalable. As a key technical contributor, you are tasked with implementing essential technology solutions across diverse technical domains within various business functions to support the firm's strategic goals.

Job responsibilities:
- Executes creative software solutions, design, development, and technical troubleshooting with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems
- Develops secure, high-quality production code, and reviews and debugs code written by others
- Identifies opportunities to eliminate or automate remediation of recurring issues to improve the overall operational stability of software applications and systems
- Leads evaluation sessions with external vendors, startups, and internal teams to drive outcomes-oriented probing of architectural designs, technical credentials, and applicability for use within existing systems and information architecture
- Leads communities of practice across Software Engineering to drive awareness and use of new and leading-edge technologies
- Adds to team culture of diversity, opportunity, inclusion, and respect

Required qualifications, capabilities, and skills:
- Formal training or certification on software engineering concepts and 5+ years of applied experience
- Demonstrated hands-on experience with Java, Spring/Spring Boot, Python, Postgres, and SQL-related technologies
- Hands-on experience with AWS data technologies such as ECS, EMR, Glue, Step Functions, Lambda, DynamoDB, Athena, or SNS/SQS (see the sketch following this listing)
- Hands-on practical experience delivering system design, application development, testing, and operational stability
- Advanced proficiency in one or more programming languages
- Proficiency in automation and continuous delivery methods
- Proficient in all aspects of the Software Development Life Cycle
- Advanced understanding of agile practices such as CI/CD, Application Resiliency, and Security
- Demonstrated proficiency in software applications and technical processes within a technical discipline (e.g., cloud, artificial intelligence, machine learning, mobile, etc.)

Preferred qualifications, capabilities, and skills:
- Exposure to Data Lakes, Data Warehouses, and Data Mesh
- Hands-on exposure to Databricks/Snowflake/Starburst, Apache Spark, Apache Kafka, Apache Iceberg, or an equivalent open table format
- Exposure to Terraform
- In-depth knowledge of the financial services industry and its IT systems
- Practical cloud-native experience
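As a rough illustration of the AWS data-service work listed above, the following hedged boto3 sketch submits an Athena query and polls until it completes. The database, query, and S3 output location are hypothetical:

```python
# Minimal boto3 sketch: run an Athena query and poll for a terminal state.
# Database, table, and output-bucket names are hypothetical.
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

resp = athena.start_query_execution(
    QueryString="SELECT account_id, SUM(balance) FROM positions GROUP BY account_id",
    QueryExecutionContext={"Database": "wealth_analytics"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = resp["QueryExecutionId"]

# Poll until the query succeeds, fails, or is cancelled.
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    print(f"Fetched {len(rows) - 1} data rows")  # first row is the header
else:
    raise RuntimeError(f"Athena query ended in state {state}")
```

In a production pipeline the polling loop would more likely live behind a Step Functions wait state or an EventBridge rule, but the sketch shows the core API calls.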
Posted 3 weeks ago
10.0 - 16.0 years
0 Lacs
gautam buddha nagar, uttar pradesh
On-site
We have a fantastic opportunity for a DAML Presales Engineer with 10-16 years of experience. In this role, you will serve as a trusted advisor to our customers, guiding them towards effective DAML-based solutions integrated seamlessly with modern AWS data architectures. Your expertise in AWS data analytics platforms will be instrumental in solving intricate business challenges. Your main responsibilities will include collaborating closely with sales teams and AWS stakeholders to understand client issues and craft innovative DAML-based smart contract solutions that work harmoniously with AWS Data Lake and analytics services. You will deliver impactful demonstrations, Proof of Concepts (PoCs), and technical presentations that showcase DAML solutions and your proficiency with AWS data platforms such as Glue, EMR, and Redshift. You will promote the strategic use of DAML smart contracts alongside data architectures such as Data Lakes, Lakehouses, and Data Warehouses, balancing performance, cost-effectiveness, and specific requirements. Your expertise in leveraging open table formats, preferably Apache Iceberg, for data modeling and solution design will be highly valued (a brief example follows this listing). Strong communication skills are essential, as you will engage with internal sales and marketing teams, AWS partners, and customers to articulate the business value of DAML and AWS data analytics solutions. You will also support large Request for Proposal (RFP) responses by providing detailed technical architecture and data roadmap proposals during the presales phase. Building strong relationships with AWS Partner Solution Architects and other stakeholders to uncover and drive ongoing business opportunities will be a critical part of this role. To excel in this position, you must have expert knowledge of AWS data lake components such as Glue, EMR, and Redshift. Staying current on DAML, AWS analytics, and data platform trends is essential to keeping our solutions innovative and competitive. This position is based in Noida, Uttar Pradesh, India.
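For a purely illustrative sense of what leveraging an open table format like Apache Iceberg on AWS can look like, the sketch below creates an Iceberg table through Athena with boto3 — for example, to land smart-contract event data in the lakehouse. The scenario, database, table, and bucket names are all hypothetical:

```python
# Hypothetical sketch: create an Iceberg table via Athena DDL so downstream
# analytics can query event data through one open table format.
import boto3

ddl = """
CREATE TABLE lakehouse.contract_events (
    contract_id string,
    event_type  string,
    event_ts    timestamp,
    payload     string
)
PARTITIONED BY (day(event_ts))
LOCATION 's3://example-lakehouse/contract_events/'
TBLPROPERTIES ('table_type' = 'ICEBERG')
"""

athena = boto3.client("athena", region_name="us-east-1")
athena.start_query_execution(
    QueryString=ddl,
    QueryExecutionContext={"Database": "lakehouse"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
```

The `day(event_ts)` clause uses Iceberg's hidden partitioning, so queries filter on the timestamp column directly without a separate partition column.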
Posted 1 month ago
10.0 - 14.0 years
0 Lacs
chennai, tamil nadu
On-site
You will be responsible for developing, deploying, monitoring, and maintaining ETL jobs, along with all data engineering and pipeline activities. The role requires a good understanding of database operations and the ability to support database solutions, together with proven expertise in writing SQL queries. Your key responsibilities will include designing and constructing enterprise procedure constructs using an ETL tool, preferably Pentaho DI; providing accurate work estimates and managing efforts across multiple lines of work; designing and developing exception-handling and data cleansing/standardization procedures; gathering requirements from stakeholders for ETL automation; and designing and creating data extraction, transformation, and load functions. You will also model complex, large data sets, conduct tests, validate data flows, and prepare ETL processes that incorporate all business requirements into the design specifications. For qualifications and experience, you should hold a B.E./B.Tech/MCA degree with at least 10 years of experience designing and developing large-scale enterprise ETL solutions, prior experience with an ETL tool (primarily Pentaho DI), and a good understanding of databases along with expertise in writing SQL queries. In terms of skills and knowledge, you should have experience in full-lifecycle software development and production support for DWH systems; data analysis, modeling, and design specific to a DWH/BI environment; developing ETL packages and jobs using Spoon and scheduling Pentaho ETL jobs via crontab (see the example after this listing); and familiarity with Hadoop, Hive, Pig, SQL scripting, data-loading tools such as Flume and Sqoop, workflow schedulers such as Oozie, and migrating existing data flows onto Big Data platforms. Experience with open-source BI tools and databases is an advantage. Joining us will provide you with impactful work, where you will play a pivotal role in safeguarding Tanla's assets, data, and reputation in the industry. You will have tremendous growth opportunities as part of a rapidly growing company in the telecom and CPaaS space, with opportunities for professional development. You will also work in an innovative environment alongside a world-class team, where innovation is celebrated. Tanla is an equal opportunity employer that champions diversity and is committed to creating an inclusive environment for all employees.
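As a small illustration of the crontab-based scheduling mentioned above, the following hypothetical entry runs a Pentaho DI job nightly through kitchen.sh, PDI's command-line job runner. The installation path, job file, and log location are invented for the example:

```
# Hypothetical crontab entry: run a Pentaho DI job at 01:30 every night via
# kitchen.sh, appending both stdout and stderr to a log file.
30 1 * * * /opt/pentaho/data-integration/kitchen.sh -file=/etc/etl/jobs/nightly_load.kjb -level=Basic >> /var/log/etl/nightly_load.log 2>&1
```

Because crontab entries are single lines, longer invocations are usually wrapped in a small shell script that cron calls instead.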
Posted 1 month ago
10.0 - 16.0 years
0 Lacs
noida, uttar pradesh
On-site
As a DAML Presales Engineer with 10-16 years of experience, your primary role will be to act as a trusted advisor to customers, advocating for DAML-based solutions integrated with modern AWS data architectures. Your responsibilities will include collaborating with sales teams and AWS stakeholders to design smart contract solutions using DAML together with AWS Data Lake and analytics services. You will be expected to deliver compelling demos, PoCs, and technical presentations showcasing DAML solutions and AWS data platforms such as Glue, EMR, and Redshift. Your expertise in AWS data lake components such as Glue, EMR, and Redshift will be crucial in designing solutions that balance performance, cost, and requirements. You should also have a practical understanding of open table formats, preferably Apache Iceberg, and be able to leverage them in data modeling and solution design. Strong communication skills are essential, as you will be required to articulate the business value of DAML and AWS data analytics solutions to internal teams, AWS partners, and customers. In addition, you will play a key role in supporting large RFP responses by providing detailed technical architecture and data roadmap proposals during the presales stage. Building and maintaining strong relationships with AWS Partner Solution Architects and other stakeholders will be critical to identifying and driving ongoing business opportunities, and staying current on DAML, AWS analytics, and data platform trends will keep solutions innovative and competitive. To excel in this role, you must have a passion for data analytics and the ability to advocate for trends and solutions with technical and non-technical audiences. Strong collaboration skills, experience in presales activities, and working knowledge of data visualization tools such as QuickSight, Tableau, and Power BI will be advantageous. If you are looking for a challenging opportunity to work at the intersection of DAML and AWS data analytics in Noida, this role might be the perfect fit for you.
Posted 1 month ago