
6030 Scala Jobs - Page 7

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 - 10.0 years

0 Lacs

Karnataka

On-site

Looking for a DBT Developer with 5 to 10 years of experience. We invite applications for the role of Lead Consultant, DBT Data Engineer! As a DBT Data Engineer, you will be responsible for providing technical direction and leading a group of one or more developers to achieve a common goal. Your responsibilities will include designing, developing, and automating ETL processes using DBT and AWS. You will be tasked with building robust data pipelines to transfer data from various sources to data warehouses or data lakes. Collaborating with cross-functional teams is crucial to ensure data accuracy, completeness, and consistency. Data cleansing, validation, and transformation are essential to maintain data quality and integrity. Optimizing database and query performance will be part of your responsibilities to ensure efficient data processing. Working closely with data analysts and data scientists, you will provide clean and reliable data for analysis and modeling. Your role will involve writing SQL queries against Snowflake and developing scripts for Extract, Load, and Transform operations. Hands-on experience with Snowflake utilities such as SnowSQL, Snowpipe, Tasks, Streams, Time Travel, Cloning, Optimizer, Metadata Manager, data sharing, stored procedures, and UDFs is required. Proficiency with the Snowflake cloud data warehouse and AWS S3 buckets or Azure Blob Storage containers for data integration is necessary. Additionally, you should have solid experience in Python/PySpark integration with Snowflake and cloud services like AWS/Azure. A sound understanding of ETL tools and data integration techniques is vital for this role. You will collaborate with business stakeholders to grasp data requirements and develop ETL solutions accordingly. Strong programming skills in languages like Python, Java, and/or Scala are expected. Experience with big data technologies such as Kafka and cloud computing platforms like AWS is advantageous.
Familiarity with database technologies such as SQL, NoSQL, and/or graph databases is beneficial. Your experience in requirement gathering, analysis, design, development, and deployment will be valuable. Building data ingestion pipelines, deploying using CI/CD tools like Azure Boards and GitHub, and writing automated test cases are desirable skills. Client-facing project experience and knowledge of Snowflake best practices will be beneficial in this role. If you are a skilled DBT Data Engineer with a passion for data management and analytics, we encourage you to apply for this exciting opportunity!
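The cleansing and validation responsibilities described above typically reduce to small, testable transforms. As a hedged illustration (the record shape, field names, and rules below are hypothetical, not from the posting), a pure-Python sketch of such a step might look like:

```python
def clean_records(rows, required=("id", "amount")):
    """Drop incomplete rows, normalise strings, and coerce amounts (illustrative)."""
    cleaned = []
    for row in rows:
        # Reject rows missing required fields to protect downstream integrity.
        if any(row.get(k) in (None, "") for k in required):
            continue
        cleaned.append({
            "id": str(row["id"]).strip(),
            "amount": round(float(row["amount"]), 2),
            "region": str(row.get("region", "unknown")).strip().lower(),
        })
    return cleaned
```

In a DBT project this logic would usually live in SQL models instead, but the same keep/normalise/coerce pattern applies.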

Posted 3 days ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Project Role: Application Developer Project Role Description: Design, build, and configure applications to meet business process and application requirements. Must have skills: Databricks Unified Data Analytics Platform Good to have skills: NA Minimum 3 year(s) of experience is required Educational Qualification: 15 years full-time education Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with team members to understand project needs, developing innovative solutions, and ensuring that applications are aligned with business objectives. You will engage in problem-solving discussions, contribute to the overall project strategy, and continuously refine your skills to enhance application performance and user experience. Roles & Responsibilities: - Expected to perform independently and become an SME. - Active participation/contribution in team discussions is required. - Contribute to providing solutions to work-related problems. - Assist in the documentation of application processes and workflows. - Engage in code reviews to ensure quality and adherence to best practices. Professional & Technical Skills: - Must-Have Skills: Proficiency in Databricks Unified Data Analytics Platform. - Strong understanding of data integration techniques and ETL processes. - Experience with cloud computing platforms and services. - Familiarity with programming languages such as Python or Scala. - Knowledge of data governance and security best practices. Additional Information: - The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform. - This position is based at our Chennai office. - A 15-year full-time education is required.

Posted 3 days ago

Apply

2.0 years

0 Lacs

India

On-site

At H1, we believe access to the best healthcare information is a basic human right. Our mission is to provide a platform that can optimally inform every doctor interaction globally. This promotes health equity and builds needed trust in healthcare systems. To accomplish this, our teams harness the power of data and AI technology to unlock groundbreaking medical insights and convert those insights into actions that result in optimal patient outcomes and accelerate an equitable and inclusive drug development lifecycle. Visit h1.co to learn more about us. As a Software Engineer on the Search Engineering team, you will support and develop the search infrastructure of the company. This involves working with terabytes of data and the indexing, ranking, and retrieval of medical data that power search in the backend infrastructure. What You'll Do At H1 The Search Engineering team is responsible for developing and maintaining the company's core search infrastructure. Our objective is to enable fast, accurate, and scalable search across terabytes of medical data. This involves building systems for efficient data ingestion, indexing, ranking, and retrieval that power key product features and user experiences. As a Software Engineer on the Search Engineering team, your day typically includes: Working with our search infrastructure: writing and maintaining code that ingests large-scale data into Elasticsearch. Designing and implementing high-performance APIs that serve search use cases with low latency. Building and maintaining end-to-end features using Node.js and GraphQL, ensuring scalability and maintainability. Collaborating with cross-functional teams, including product managers and data engineers, to align on technical direction and deliver impactful features to our users. Taking ownership of the search codebase: proactively debugging, troubleshooting, and resolving issues quickly to ensure stability and performance.
Consistently produce simple, elegant designs and write high-quality, maintainable code that can be easily understood and reused by teammates. Demonstrate a strong focus on performance optimization, ensuring systems are fast, efficient, and scalable. Communicate effectively and collaborate across teams in a fast-paced, dynamic environment. Stay up to date with the latest advancements in AI and search technologies, identifying opportunities to integrate cutting-edge capabilities into our platforms. About You You bring strong hands-on technical skills and experience in building robust backend APIs. You thrive on solving complex challenges with innovative, scalable solutions and take pride in maintaining high code quality through thorough testing. You are able to align your work with broader organizational goals and actively contribute to strategic initiatives. You proactively identify risks and propose solutions early in the project lifecycle to avoid downstream issues. You are curious, eager to learn, and excited to grow in a collaborative, high-performing engineering team environment. Requirements 1–2 years of professional experience. Strong programming skills in TypeScript, Node.js, and Python (mandatory). Practical experience with Docker and Kubernetes. Good to have: big data technologies (e.g., Scala, Hadoop, PySpark), Golang, GraphQL, Elasticsearch, and LLMs. Not meeting all the requirements but still feel like you’d be a great fit? Tell us how you can contribute to our team in a cover letter! H1 OFFERS Full suite of health insurance options, in addition to generous paid time off Pre-planned company-wide wellness holidays Retirement options Health & charitable donation stipends Impactful Business Resource Groups Flexible work hours & the opportunity to work from anywhere The opportunity to work with leading biotech and life sciences companies in an innovative industry with a mission to improve healthcare around the globe
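The indexing, ranking, and retrieval work described for this role can be illustrated at toy scale. The sketch below is not H1's actual stack (Elasticsearch handles this at production scale); the document contents and the match-count scoring are illustrative only:

```python
from collections import defaultdict

def build_index(docs):
    """Inverted index: map each term to the ids of documents containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index, query):
    """Rank documents by how many query terms they match (ties broken by id)."""
    scores = defaultdict(int)
    for term in query.lower().split():
        for doc_id in index.get(term, ()):
            scores[doc_id] += 1
    return sorted(scores, key=lambda d: (-scores[d], d))
```

Real search engines replace the match count with scoring functions like BM25, but the index-then-rank shape is the same.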

Posted 3 days ago

Apply

12.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Position Summary... What you'll do... About team: Walmart’s Enterprise Business Services (EBS) is a powerhouse of several exceptional teams delivering world-class technology solutions and services making a profound impact at every level of Walmart. As a key part of Walmart Global Tech, our teams set the bar for operational excellence and leverage emerging technology to support millions of customers, associates, and stakeholders worldwide. Each time an associate turns on their laptop, a customer makes a purchase, a new supplier is onboarded, the company closes the books, physical and legal risk is avoided, and when we pay our associates consistently and accurately, that is EBS. Joining EBS means embarking on a journey of limitless growth, relentless innovation, and the chance to set new industry standards that shape the future of Walmart. What you'll do: Manage a high performing team of 10-12 engineers who work across multiple technology stacks including Java and Mainframe Drive design, development, implementation and documentation Establish best engineering and operational excellence practices based on product, engineering and scrum metrics Interact with Walmart engineering teams across geographies to leverage expertise and contribute to the tech community. Engage with Product and Business stakeholders to drive the agenda, set the priorities and deliver scalable and resilient products. Work closely with the Architects and cross functional teams and follow established practices for the delivery of solutions meeting QCD (Quality, Cost & Delivery) within the established architectural guidelines. Work with senior leadership to chart out the future roadmap of the products Participate in hiring, mentoring and building high performing agile teams. Participating in organizational events like hackathons, demo days etc. 
and be the catalyst towards the success of those events. Interact closely on requirements with business owners and technical teams both within India and across the globe. What you'll bring: Bachelor's/Master’s degree in Computer Science, engineering, or a related field, with a minimum of 12 years of experience in software development and at least 5 years of experience in managing engineering teams. Prior experience in managing high-performing agile technology teams. Hands-on experience building Java/Scala/Spark-based backend systems is a must, and experience working on cloud-based solutions is desirable. Proficiency in JavaScript, NodeJS, ReactJS, and NextJS is desirable. A good understanding of CS fundamentals, microservices, data structures, algorithms, and problem solving. Should have exposure to CI/CD development environments/tools including, but not limited to, Git, Maven, and Jenkins. Strong in writing modular and testable code and test cases (unit, functional, and integration) using frameworks like JUnit, Mockito, and MockMvc. Should be experienced in microservices architecture. Possesses a good understanding of distributed concepts, common design principles, design patterns, and cloud-native development concepts. Hands-on experience in Spring Boot, concurrency, garbage collection, RESTful services, data caching services, and ORM tools. Experience working with relational databases and writing complex OLAP, OLTP, and SQL queries. Experience in working with NoSQL databases like Cosmos DB. Experience in working with caching technology like Redis, Memcached, or other related systems. Good knowledge of pub-sub systems like Kafka. Experience utilizing monitoring and alerting tools like Prometheus, Splunk, and other related systems, and excellent debugging and troubleshooting skills. Exposure to containerization tools like Docker, Helm, and Kubernetes. Knowledge of public cloud platforms like Azure, GCP, etc. will be an added advantage.
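The "complex OLAP, OLTP and SQL queries" requirement above can be illustrated with a small self-contained example. The schema and data are hypothetical, and SQLite merely stands in for whatever relational database the team actually uses; the point is the group-by rollup shape of an analytical query:

```python
import sqlite3

# Hypothetical fact table: order amounts per region, rolled up OLAP-style.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("south", 10.0), ("south", 15.0), ("north", 7.5)])
rows = conn.execute(
    "SELECT region, SUM(amount) AS total FROM orders "
    "GROUP BY region ORDER BY total DESC"
).fetchall()
conn.close()
```

OLTP queries, by contrast, touch a few rows by key; the OLAP/OLTP distinction in the posting is about scan-and-aggregate versus point lookups.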
About Walmart Global Tech Imagine working in an environment where one line of code can make life easier for hundreds of millions of people. That’s what we do at Walmart Global Tech. We’re a team of software engineers, data scientists, cybersecurity experts, and service professionals within the world’s leading retailer who make an epic impact and are at the forefront of the next retail disruption. People are why we innovate, and people power our innovations. We are people-led and tech-empowered. We train our team in the skillsets of the future and bring in experts like you to help us grow. We have roles for those chasing their first opportunity as well as those looking for the opportunity that will define their career. Here, you can kickstart a great career in tech, gain new skills and experience for virtually every industry, or leverage your expertise to innovate at scale, impact millions, and reimagine the future of retail. Flexible, hybrid work We use a hybrid way of working, with primary in-office presence coupled with an optimal mix of virtual presence. We use our campuses to collaborate and be together in person, as business needs require and for development and networking opportunities. This approach helps us make quicker decisions, remove location barriers across our global team, and be more flexible in our personal lives. Benefits Beyond our great compensation package, you can receive incentive awards for your performance. Other great perks include a host of best-in-class benefits: maternity and parental leave, PTO, health benefits, and much more. Belonging We aim to create a culture where every associate feels valued for who they are, rooted in respect for the individual. Our goal is to foster a sense of belonging, to create opportunities for all our associates, customers, and suppliers, and to be a Walmart for everyone. At Walmart, our vision is "everyone included." By fostering a workplace culture where everyone is, and feels, included, everyone wins.
Our associates and customers reflect the makeup of all 19 countries where we operate. By making Walmart a welcoming place where all people feel like they belong, we’re able to engage associates, strengthen our business, improve our ability to serve customers, and support the communities where we operate. Equal Opportunity Employer Walmart, Inc. is an Equal Opportunities Employer - By Choice. We believe we are best equipped to help our associates, customers, and the communities we serve live better when we really know them. That means understanding, respecting, and valuing unique styles, experiences, identities, ideas, and opinions, while being inclusive of all people. Minimum Qualifications... Outlined below are the required minimum qualifications for this position. If none are listed, there are no minimum qualifications. Minimum Qualifications: Option 1: Bachelor's degree in computer science, computer engineering, computer information systems, software engineering, or related area and 5 years’ experience in software engineering or related area. Option 2: 7 years’ experience in software engineering or related area. 2 years’ supervisory experience. Preferred Qualifications... Outlined below are the optional preferred qualifications for this position. If none are listed, there are no preferred qualifications. Master’s degree in computer science, computer engineering, computer information systems, software engineering, or related area and 3 years' experience in software engineering or related area. Primary Location... Rmz Millenia Business Park, No 143, Campus 1B (1st-6th Floor), Dr. MGR Road (North Veeranam Salai), Perungudi, India R-2244602

Posted 3 days ago

Apply

8.0 years

20 - 40 Lacs

India

On-site

Role: Senior Graph Data Engineer (Neo4j & AI Knowledge Graphs) Experience: 8+ years Type: Contract We’re hiring a Graph Data Engineer to design and implement advanced Neo4j-powered knowledge graph systems for our next-gen AI platform. You'll work at the intersection of data engineering, AI/ML, and financial services, helping build the graph infrastructure that powers semantic search, investment intelligence, and automated compliance for venture capital and private equity clients. This role is ideal for engineers who are passionate about graph data modeling, Neo4j performance, and enabling AI-enhanced analytics through structured relationships. What You'll Do Design Knowledge Graphs: Build and maintain Neo4j graph schemas modeling complex fund administration relationships: investors, funds, companies, transactions, legal docs, etc. Graph-AI Integration: Work with GenAI teams to power RAG systems, semantic search, and graph-enhanced NLP pipelines. ETL & Data Pipelines: Develop scalable ingestion pipelines from sources like FundPanel.io, legal documents, and external market feeds using Python, Spark, or Kafka. Optimize Graph Performance: Craft high-performance Cypher queries, leverage APOC procedures, and tune for real-time analytics. Graph Algorithms & Analytics: Implement algorithms for fraud detection, relationship scoring, compliance, and investment pattern analysis. Secure & Scalable Deployment: Implement clustering, backups, and role-based access on Neo4j Aura or containerized environments. Collaborate Deeply: Partner with AI/ML, DevOps, data architects, and business stakeholders to translate use cases into scalable graph solutions. What You Bring 7+ years in software/data engineering; 2+ years in Neo4j and Cypher. Strong experience in graph modeling, knowledge graphs, and ontologies. Proficiency in Python, Java, or Scala for graph integrations. Experience with graph algorithms (PageRank, community detection, etc.).
Hands-on with ETL pipelines, Kafka/Spark, and real-time data ingestion. Cloud-native experience (Neo4j Aura, Azure, Docker/K8s). Familiarity with fund structures, LP/GP models, or financial/legal data a plus. Strong understanding of AI/ML pipelines, especially graph-RAG and embeddings. Use Cases You'll Help Build AI Semantic Search over fund documents and investment entities. Investment Network Analysis for GPs, LPs, and portfolio companies. Compliance Graphs modeling fund terms and regulatory checks. Document Graphs linking LPAs, contracts, and agreements. Predictive Investment Models enhanced by graph relationships. Skills: Java, machine learning, Spark, Apache Spark, Neo4j Aura, AI, Azure, cloud-native technologies, data, AI/ML pipelines, Scala, Python, Cypher, graphs, AI knowledge graphs, graph data modeling, APOC procedures, semantic search, ETL pipelines, data engineering, Neo4j, ETL, Cypher queries, pipelines, graph schemas, Kafka, Kafka Streams, graph algorithms
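PageRank, listed above among the expected graph algorithms, is compact enough to sketch in pure Python. In a Neo4j deployment this would run via the Graph Data Science library rather than hand-rolled code; the edge list and parameters below are illustrative:

```python
def pagerank(edges, damping=0.85, iterations=50):
    """Iterative PageRank over a directed edge list [(src, dst), ...]."""
    nodes = {n for edge in edges for n in edge}
    out_links = {n: [dst for src, dst in edges if src == n] for n in nodes}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        # Every node keeps a baseline share, plus damped contributions from in-links.
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for n in nodes:
            targets = out_links[n]
            if targets:
                share = damping * rank[n] / len(targets)
                for t in targets:
                    new[t] += share
            else:
                # Dangling node: redistribute its rank evenly.
                for t in nodes:
                    new[t] += damping * rank[n] / len(nodes)
        rank = new
    return rank
```

Nodes with more (or better-ranked) in-links accumulate higher scores, which is why the posting pairs PageRank with relationship scoring and fraud detection.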

Posted 3 days ago

Apply

2.0 - 4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Summary As a Data Analyst, you will be responsible for designing, developing, and maintaining efficient and scalable data pipelines for data ingestion, transformation, and storage. About The Role Location: Hyderabad (Hybrid) About The Role: As a Data Analyst, you will be responsible for designing, developing, and maintaining efficient and scalable data pipelines for data ingestion, transformation, and storage. Key Responsibilities: Design, develop, and maintain efficient and scalable data pipelines for data ingestion, transformation, and storage. Collaborate with cross-functional teams, including data analysts, business analysts, and BI, to understand data requirements and design appropriate solutions. Build and maintain data infrastructure in the cloud, ensuring high availability, scalability, and security. Write clean, efficient, and reusable code in scripting languages, such as Python or Scala, to automate data workflows and ETL processes. Implement real-time and batch data processing solutions using streaming technologies like Apache Kafka, Apache Flink, or Apache Spark. Perform data quality checks and ensure data integrity across different data sources and systems. Optimize data pipelines for performance and efficiency, identifying and resolving bottlenecks and performance issues. Collaborate with DevOps teams to deploy, automate, and maintain data platforms and tools. Stay up to date with industry trends, best practices, and emerging technologies in data engineering, scripting, streaming data, and cloud technologies. Essential Requirements: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field, with overall experience of 2-4 years. Proven experience as a Data Engineer or similar role, with a focus on scripting, streaming data pipelines, and cloud technologies like AWS, GCP, or Azure. Strong programming and scripting skills in languages like Python, Scala, or SQL.
Experience with cloud-based data technologies, such as AWS, Azure, or Google Cloud Platform. Hands-on experience with streaming technologies, such as AWS Streamsets, Apache Kafka, Apache Flink, or Apache Spark Streaming. Strong experience with Snowflake (Required) Proficiency in working with big data frameworks and tools, such as Hadoop, Hive, or HBase. Knowledge of SQL and experience with relational and NoSQL databases. Familiarity with data modelling and schema design principles. Strong problem-solving skills and the ability to work in a fast-paced, collaborative environment. Excellent communication and teamwork skills. Commitment To Diversity And Inclusion: Novartis is committed to building an outstanding, inclusive work environment and diverse teams' representative of the patients and communities we serve. Accessibility And Accommodation: Novartis is committed to working with and providing reasonable accommodation to individuals with disabilities. If, because of a medical condition or disability, you need a reasonable accommodation for any part of the recruitment process, or in order to perform the essential functions of a position, please send an e-mail to diversityandincl.india@novartis.com and let us know the nature of your request and your contact information. Please include the job requisition number in your message Why Novartis: Helping people with disease and their families takes more than innovative science. It takes a community of smart, passionate people like you. Collaborating, supporting and inspiring each other. Combining to achieve breakthroughs that change patients’ lives. Ready to create a brighter future together? https://www.novartis.com/about/strategy/people-and-culture Join our Novartis Network: Not the right Novartis role for you? 
Sign up to our talent community to stay connected and learn about suitable career opportunities as soon as they come up: https://talentnetwork.novartis.com/network Benefits and Rewards: Read our handbook to learn about all the ways we’ll help you thrive personally and professionally: https://www.novartis.com/careers/benefits-rewards
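The real-time processing responsibilities in this posting (Kafka, Flink, Spark Streaming) all center on windowed aggregation. A minimal pure-Python sketch of a tumbling-window count, with illustrative event data, conveys the core idea those engines implement at scale:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_secs):
    """Group (timestamp, key) events into fixed-size windows and count per key."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        # Integer division aligns each event to the start of its window.
        windows[ts // window_secs * window_secs][key] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}
```

Production engines add what this sketch omits: out-of-order events, watermarks, and fault-tolerant state.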

Posted 3 days ago

Apply

15.0 years

0 Lacs

India

On-site

Job Summary As part of the data leadership team, the Capability Lead – Databricks will be responsible for building, scaling, and delivering Databricks-based data and AI capabilities across the organization. This leadership role involves technical vision, solution architecture, team building, partnership development, and delivery excellence using Databricks Unified Analytics Platform across industries. The individual will collaborate with clients, alliance partners (Databricks, Azure, AWS), internal stakeholders, and sales teams to drive adoption of lakehouse architectures, data engineering best practices, and AI/ML modernization. Areas of Responsibility 1. Offering and Capability Development: Develop and enhance Snowflake-based data platform offerings and accelerators Define best practices, architectural standards, and reusable frameworks for Snowflake Collaborate with alliance teams to strengthen partnership with Snowflake 2. Technical Leadership: Provide architectural guidance for Snowflake solution design and implementation Lead solutioning efforts for proposals, RFIs, and RFPs involving Snowflake Conduct technical reviews and ensure adherence to design standards. Act as a technical escalation point for complex project challenges 3. Delivery Oversight: Support delivery teams with technical expertise across Snowflake projects Drive quality assurance, performance optimization, and project risk mitigation. Review project artifacts and ensure alignment with Snowflake best practices Foster a culture of continuous improvement and delivery excellence 4. Talent Development: Build and grow a high-performing Snowflake capability team. Define skill development pathways and certification goals for team members. Mentor architects, developers, and consultants on Snowflake technologies Drive community of practice initiatives to share knowledge and innovations 5. 
Business Development Support: Engage with sales and pre-sales teams to position Snowflake capabilities Contribute to account growth by identifying new Snowflake opportunities Participate in client presentations, workshops, and technical discussions 6. Thought Leadership and Innovation Build thought leadership through whitepapers, blogs, and webinars Stay updated with Snowflake product enhancements and industry trends This role is highly collaborative and will work extremely closely with cross functional teams to fulfill the above responsibilities. Job Requirements: 12–15 years of experience in data engineering, analytics, and AI/ML 3–5 years of strong hands-on experience with Databricks (on Azure, AWS, or GCP) Expertise in Spark (PySpark/Scala), Delta Lake, Unity Catalog, MLflow, and Databricks notebooks Experience designing and implementing Lakehouse architectures at scale Familiarity with data governance, security, and compliance frameworks (GDPR, HIPAA, etc.) Experience with real-time and batch data pipelines (Structured Streaming, Auto Loader, Kafka, etc.) Strong understanding of MLOps and AI/ML lifecycle management Certifications in Databricks (e.g., Databricks Certified Data Engineer Professional, ML Engineer Associate) are preferred Experience with hyperscaler ecosystems (Azure Data Lake, AWS S3, GCP GCS, ADF, Glue, etc.) Experience managing large, distributed teams and working with CXO-level stakeholders Strong problem-solving, analytical, and decision-making skills Excellent verbal, written, and client-facing communication

Posted 3 days ago

Apply

4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Notice Period: 30 days Interview and Relocation Details Interview Process: Final interview rounds will be conducted in-person in Chennai only. Work Location: This position is based in Bangalore. Relocation: While the final interviews are in Chennai, candidates, including those currently residing in Chennai, must be willing to relocate to Bangalore within 3-6 months of their start date, or as required by business needs. Important Note for Applicants Please apply only if you are currently based in or around Chennai and are able to attend an in-person interview in Chennai. Additionally, all selected candidates must be willing to relocate to Bangalore within 3-6 months of their start date, or as required. What you will do ● Create beautiful software experiences for our clients using design thinking, lean, and agile methodology. ● Work on software products designed from scratch using the latest cutting-edge technologies, platforms, and languages such as Java, Python, JavaScript, GoLang, and Scala. ● Work in a dynamic, collaborative, transparent, non-hierarchical culture. ● Work in collaborative, fast-paced, and value-driven teams to build innovative customer experiences for our clients. ● Help to grow the next generation of developers and have a positive impact on the industry. Basic Qualifications ● Experience: 4+ years. ● Hands-on development experience with a broad mix of languages such as Java, Python, JavaScript, etc. ● Server-side development experience, mainly in Java (Python and NodeJS can be considered). ● UI development experience in ReactJS, AngularJS, PolymerJS, EmberJS, or jQuery, etc., is good to have. ● Passion for software engineering and following the best coding concepts. ● Good problem-solving and communication skills. Nice to have Qualifications ● Product and customer-centric mindset. ● Great OO skills, including design patterns. ● Experience with DevOps, continuous integration & deployment.
● Exposure to big data technologies, Machine Learning and NLP will be a plus. Benefits ● Competitive salary. ● Work from anywhere. ● Learning and gaining experience rapidly. ● Reimbursement for basic working setup at home. Location Bengaluru - Hybrid / WFO

Posted 3 days ago

Apply

0 years

0 Lacs

India

On-site

Job Summary: We are looking for a skilled Senior Data Engineer with strong expertise in Spark and Scala on the AWS platform. The ideal candidate should possess excellent problem-solving skills and hands-on experience in Spark-based data processing within a cloud-based ecosystem. This role offers the opportunity to independently execute diverse and complex engineering tasks, demonstrate a solid understanding of the end-to-end software development lifecycle, and collaborate effectively with stakeholders to deliver high-quality technical solutions. Key Responsibilities: Develop, analyze, debug, and enhance Spark-Scala programs. Work on Spark batch processing jobs, with the ability to analyze/debug using Spark UI and logs. Optimize performance of Spark applications and ensure scalability and reliability. Manage data processing tasks using AWS S3, AWS EMR clusters, and other AWS services. Leverage Hadoop ecosystem tools including HDFS, HBase, Hive, and MapReduce. Write efficient and optimized SQL queries; experience with PostgreSQL and Couchbase or similar databases is preferred. Utilize orchestration tools such as Kafka, NiFi, and Oozie. Work with monitoring tools like Dynatrace and CloudWatch. Contribute to the creation of High-Level Design (HLD) and Low-Level Design (LLD) documents and participate in reviews with architects. Support development and lower environments setup, including local IDE configuration. Follow defined coding standards, best practices, and quality processes. Collaborate using Agile methodologies for development, review, and delivery. Use supplementary programming languages like Python as needed. Required Skills: Mandatory: Apache Spark Scala Big Data Hadoop Ecosystem Spark SQL Additional Preferred Skills: Spring Core Framework Core Java, Hibernate, Multithreading AWS EMR, S3, CloudWatch HDFS, HBase, Hive, MapReduce PostgreSQL, Couchbase Kafka, NiFi, Oozie Dynatrace or other monitoring tools Python (as supplementary language) Agile Methodology
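The Spark batch work described in this posting is built around key-based aggregation (Spark's reduceByKey). A pure-Python analogue, with illustrative input lines, shows the shape of the computation without requiring a Spark cluster; in the actual role this would be written in Scala against RDDs or DataFrames:

```python
from functools import reduce
from itertools import groupby

def reduce_by_key(pairs, fn):
    """Pure-Python analogue of Spark's reduceByKey: group (key, value) pairs, then fold."""
    ordered = sorted(pairs, key=lambda kv: kv[0])  # stands in for the shuffle
    return {key: reduce(fn, (value for _, value in group))
            for key, group in groupby(ordered, key=lambda kv: kv[0])}

# Word count, the canonical Spark batch example (input lines are illustrative).
lines = ["spark scala spark", "aws emr spark"]
pairs = [(word, 1) for line in lines for word in line.split()]
counts = reduce_by_key(pairs, lambda a, b: a + b)
```

The sort here plays the role of Spark's shuffle; debugging skew in that shuffle via the Spark UI is exactly the analysis work the posting describes.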

Posted 3 days ago

Apply

5.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Expedia Group brands power global travel for everyone, everywhere. We design cutting-edge tech to make travel smoother and more memorable, and we create groundbreaking solutions for our partners. Our diverse, vibrant, and welcoming community is essential in driving our success. Why Join Us? To shape the future of travel, people must come first. Guided by our Values and Leadership Agreements, we foster an open culture where everyone belongs, differences are celebrated, and we know that when one of us wins, we all win. We provide a full benefits package, including exciting travel perks, generous time-off, parental leave, a flexible work model (with some pretty cool offices), and career development resources, all to fuel our employees' passion for travel and ensure a rewarding career journey. We’re building a more open world. Join us. Introduction to team Our Expedia Product & Technology division builds innovative products, services, and tools to deliver high-quality experiences for travellers, partners, and our employees. A unified, singular technology platform powered by data and machine learning provides secure, differentiated, and personalised experiences for the traveller and our partners that drive loyalty and customer satisfaction. The goal of the Media Solutions (MeSo) Tech team is to spearhead advertising products and services across many Expedia brands, including BEX, Hotels.com, Portfolio Brands (COMET & Hotwire.com), Expedia Partner Sites (EPS), and Vrbo. We help our advertisers identify travellers on EG sites, target specific traveller criteria, and then deliver the most relevant products. As a Senior Software Development Engineer, you will propose, design, and implement various initiatives. As a member of the team, you will work in alliance with global teams, providing the technical expertise needed to overcome hard problems. We value rigor and innovative thinking in our development process and believe in the power of a motivated & agile development team.
What You’ll Do

Join a high-performing team and have a unique opportunity to make a highly visible impact
Learn best practices and how to constantly raise the bar in terms of engineering excellence
Identify inefficiencies in code or systems operation and offer suggestions for improvements
Expand your skills in developing high-quality, distributed, and scalable software
Share new skills and knowledge with the team to increase efficiency
Write code that is clean, maintainable, and optimized, with good naming conventions
Develop fast, scalable, and highly available services
Participate in code reviews and pull requests

Who You Are

5+ years of software development work experience using modern languages, e.g. Java/Kotlin
Bachelor’s in computer science or a related technical field, or equivalent related professional experience
Experience with Java, Scala, Kotlin, AWS, Kafka, S3, Lambda, Docker, Datadog
Problem solver with a good understanding of algorithms, data structures, and distributed applications
Solid understanding of Object-Oriented Programming concepts and test-driven development
Solid understanding of load balancing, caching, and database partitioning to improve application scalability
Demonstrated ability to develop and support large, internet-scale software systems
Experience with AWS services
Knowledge of NoSQL databases and cloud computing concepts
Sound understanding of client-side optimization best practices
Ability to pick up new technologies and languages quickly with ease
Working knowledge of Agile software development methodologies
Strong verbal and written communication skills
Passionate about quality of work

Accommodation requests

If you need assistance with any part of the application or recruiting process due to a disability, or other physical or mental health conditions, please reach out to our Recruiting Accommodations Team through the Accommodation Request.
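The scalability topics listed above (load balancing, caching, database partitioning) are often demonstrated with consistent hashing, which keeps key-to-shard assignments mostly stable when shards are added or removed. The sketch below is a generic, hypothetical Python example with made-up shard names, not any company's actual infrastructure:

```python
import hashlib
from bisect import bisect_right

class ConsistentHashRing:
    """Maps keys to shards so that adding a shard remaps only ~1/N of keys."""

    def __init__(self, shards, vnodes=100):
        # Place `vnodes` virtual points per shard on a sorted hash ring,
        # which smooths out the key distribution across shards.
        self._ring = []
        for shard in shards:
            for i in range(vnodes):
                self._ring.append((self._hash(f"{shard}#{i}"), shard))
        self._ring.sort()

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def shard_for(self, key):
        # Walk clockwise to the first ring point at or after the key's hash.
        idx = bisect_right(self._ring, (self._hash(key), "")) % len(self._ring)
        return self._ring[idx][1]

ring = ConsistentHashRing(["db-0", "db-1", "db-2"])
assignments = {k: ring.shard_for(k) for k in ["user:1", "user:2", "user:3"]}
```

The same idea underlies partition routing in many caches and distributed databases; the shard names and vnode count here are purely illustrative.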
We are proud to be named as a Best Place to Work on Glassdoor in 2024 and be recognized for award-winning culture by organizations like Forbes, TIME, Disability:IN, and others. Expedia Group's family of brands includes: Brand Expedia®, Hotels.com®, Expedia® Partner Solutions, Vrbo®, trivago®, Orbitz®, Travelocity®, Hotwire®, Wotif®, ebookers®, CheapTickets®, Expedia Group™ Media Solutions, Expedia Local Expert®, CarRentals.com™, and Expedia Cruises™. © 2024 Expedia, Inc. All rights reserved. Trademarks and logos are the property of their respective owners. CST: 2029030-50 Employment opportunities and job offers at Expedia Group will always come from Expedia Group’s Talent Acquisition and hiring teams. Never provide sensitive, personal information to someone unless you’re confident who the recipient is. Expedia Group does not extend job offers via email or any other messaging tools to individuals with whom we have not made prior contact. Our email domain is @expediagroup.com. The official website to find and apply for job openings at Expedia Group is careers.expediagroup.com/jobs. Expedia is committed to creating an inclusive work environment with a diverse workforce. All qualified applicants will receive consideration for employment without regard to race, religion, gender, sexual orientation, national origin, disability or age.

Posted 3 days ago

Apply

7.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Position Summary:
This role is accountable for running day-to-day operations of the Data Platform in Azure / AWS Databricks. The role involves designing and implementing data ingestion pipelines from multiple sources using Azure Databricks, ensuring seamless and efficient pipeline executions, and adhering to security, regulatory, and audit control guidelines.

Key Responsibilities:
● Design and implement data ingestion pipelines from multiple sources using Azure Databricks.
● Ensure data pipelines run smoothly and efficiently with minimal downtime.
● Develop scalable and reusable frameworks for ingesting large and complex datasets.
● Integrate end-to-end data pipelines, ensuring quality and consistency from source systems to target repositories.
● Work with event-based and streaming technologies to ingest and process data in real-time.
● Collaborate with other project team members to deliver additional components such as API interfaces and search functionalities.
● Evaluate the performance and applicability of various tools against customer requirements and provide recommendations.
● Provide technical advice to the team and assist in issue resolution, leveraging strong Cloud and Databricks knowledge.
● Provide on-call, after-hours, and weekend support as needed to maintain platform stability.
● Fulfil service requests related to the Data Analytics platform efficiently.
● Lead and drive optimisation and continuous improvement initiatives within the team.
● Conduct technical reviews of changes as part of release management, acting as a gatekeeper for production deployments.
● Adhere to data security standards and implement required controls within the platform.
● Lead the design, development, and deployment of advanced data pipelines and analytical workflows on the Databricks Lakehouse platform.
● Collaborate with data scientists, engineers, and business stakeholders to build and scale end-to-end data solutions.
● Own architectural decisions to ensure alignment with data governance, security, and compliance requirements.
● Mentor and guide a team of data engineers, providing technical leadership and supporting career development.
● Implement CI/CD practices for data engineering pipelines using tools like Azure DevOps, GitHub Actions, or Jenkins.

Qualifications and Experience:
● Bachelor’s degree in IT, Computer Science, Software Engineering, Business Analytics, or equivalent.
● Minimum of 7+ years of experience in the data analytics field.
● Proven experience with Azure/AWS Databricks in building and optimising data pipelines, architectures, and datasets.
● Strong expertise in Scala or Python, PySpark, and SQL for data engineering tasks.
● Ability to troubleshoot and optimise complex queries on the Spark platform.
● Knowledge of structured and unstructured data design, modelling, access, and storage techniques.
● Experience designing and deploying data applications on cloud platforms such as Azure or AWS.
● Hands-on experience in performance tuning and optimising code running in a Databricks environment.
● Strong analytical and problem-solving skills, particularly within Big Data environments.
● Experience with Big Data management tools and technologies, including Cloudera, Python, Hive, Scala, Data Warehouse, Data Lake, AWS, Azure.

Technical and Professional Skills:
Must Have:
● Excellent communication skills with the ability to interact directly with customers.
● Azure/AWS Databricks.
● Python / Scala / Spark / PySpark.
● Strong SQL and RDBMS expertise.
● HIVE / HBase / Impala / Parquet.
● Sqoop, Kafka, Flume.
● Airflow.
● Jenkins or Bamboo.
● Github or Bitbucket.
● Nexus.
Good to Have:
● Relevant accredited certifications for Azure, AWS, Cloud Engineering, and/or Databricks.
● Knowledge of Delta Live Tables (DLT).
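As a rough illustration of the ingest-validate-transform shape such pipelines take, here is a minimal plain-Python sketch. It is hypothetical: the `Record` fields and validation rules are invented for the example, and a real Databricks pipeline would express the same steps in PySpark or Spark SQL over much larger data:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Record:
    source: str
    amount: float
    ts: str  # ISO-8601 timestamp, normalised to UTC

def validate(raw: dict) -> bool:
    # Reject rows with missing fields or non-positive amounts.
    return {"source", "amount", "ts"} <= raw.keys() and raw["amount"] > 0

def transform(raw: dict) -> Record:
    # Normalise the source-system timestamp to UTC and lowercase the source name.
    ts = datetime.fromisoformat(raw["ts"]).astimezone(timezone.utc).isoformat()
    return Record(raw["source"].lower(), float(raw["amount"]), ts)

def run_pipeline(rows):
    # Keep valid, transformed rows; count rejects for data-quality monitoring.
    good = [transform(r) for r in rows if validate(r)]
    rejected = sum(1 for r in rows if not validate(r))
    return good, rejected

rows = [
    {"source": "SAP", "amount": 10.0, "ts": "2024-01-01T05:30:00+05:30"},
    {"source": "SAP", "amount": -1.0, "ts": "2024-01-01T00:00:00+00:00"},
]
good, rejected = run_pipeline(rows)
```

Tracking the rejected-row count alongside the output is a common way to surface the data-quality guarantees the listing asks for.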

Posted 3 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

We are seeking a highly skilled and motivated Python, AWS, and Big Data Engineer to join our data engineering team. The ideal candidate will have hands-on experience with the Hadoop ecosystem and Apache Spark, and programming expertise in Python (PySpark), Scala, and Java. You will be responsible for designing, developing, and optimizing scalable data pipelines and big data solutions to support analytics and business intelligence initiatives.
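The Spark programming model this role relies on reduces to map and reduce stages over distributed collections. A toy, single-process Python sketch of the classic word count shows the shape of what Spark's `flatMap`/`reduceByKey` would do across a cluster (the input lines are invented for illustration):

```python
from collections import Counter

lines = ["spark makes big data simple", "big data big insights"]

# Map stage: each line becomes (word, 1) pairs, like flatMap + map in Spark.
mapped = [(word, 1) for line in lines for word in line.split()]

# Reduce stage: sum counts per key, like reduceByKey in Spark.
def reduce_by_key(pairs):
    counts = Counter()
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

word_counts = reduce_by_key(mapped)
```

In PySpark the same logic would be a few chained RDD or DataFrame operations; the point of the sketch is only the two-stage structure.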

Posted 3 days ago

Apply

10.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

At EY, we’re all in to shape your future with confidence. We’ll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world.

EY GDS – Data and Analytics (D&A) – Cloud Architect - Manager

As part of our EY-GDS D&A (Data and Analytics) team, we help our clients solve complex business challenges with the help of data and technology. We dive deep into data to extract the greatest value and discover opportunities in key businesses and functions like Banking, Insurance, Healthcare, Retail, Manufacturing and Auto, Supply Chain, and Finance.

The opportunity

We’re looking for Senior Managers (GTM + Cloud/Big Data Architects) with strong technology and data understanding and proven capability in delivery and pre-sales. This is a fantastic opportunity to be part of a leading firm as well as a growing Data and Analytics team.

Your Key Responsibilities

Have proven experience [10-15 years] in driving Analytics GTM/pre-sales by collaborating with senior stakeholders in the client and partner organization in BCM, WAM, and Insurance. Activities will include pipeline building, RFP responses, creating new solutions and offerings, and conducting workshops, as well as managing in-flight projects focused on cloud and big data.
Work with clients to convert business problems/challenges into technical solutions, considering security, performance, scalability, etc.
Understand current and future state enterprise architecture.
Contribute to various technical streams during implementation of the project.
Provide product and design level technical best practices.
Interact with senior client technology leaders, understand their business goals, and create, architect, propose, develop, and deliver technology solutions.
Define and develop client-specific best practices around data management within a Hadoop or cloud environment.
Recommend design alternatives for data ingestion, processing, and provisioning layers.
Design and develop data ingestion programs to process large data sets in batch mode using HIVE, Pig, Sqoop, and Spark.
Develop data ingestion programs to ingest real-time data from live sources using Apache Kafka, Spark Streaming, and related technologies.

Skills And Attributes For Success

Architect experience designing highly scalable solutions on Azure, AWS, and GCP.
Strong understanding of and familiarity with all Azure/AWS/GCP/Big Data ecosystem components.
Strong understanding of underlying Azure/AWS/GCP architectural concepts and distributed computing paradigms.
Hands-on programming experience in Apache Spark using Python/Scala and Spark Streaming.
Hands-on experience with major components like cloud ETLs, Spark, and Databricks.
Experience working with NoSQL in at least one of the data stores - HBase, Cassandra, MongoDB.
Knowledge of Spark and Kafka integration, with multiple Spark jobs consuming messages from multiple Kafka partitions.
Solid understanding of ETL methodologies in a multi-tiered stack, integrating with Big Data systems like Cloudera and Databricks.
Strong understanding of underlying Hadoop architectural concepts and distributed computing paradigms.
Good knowledge of Apache Kafka and Apache Flume.
Experience in enterprise-grade solution implementations.
Experience in performance benchmarking of enterprise applications.
Experience in data security [on the move, at rest].
Strong UNIX operating system concepts and shell scripting knowledge.

To qualify for the role, you must have

Flexible and proactive/self-motivated working style with strong personal ownership of problem resolution.
Excellent communicator (written and verbal, formal and informal).
Ability to multi-task under pressure and work independently with minimal supervision.
Must be a team player who enjoys working in a cooperative and collaborative team environment.
Adaptable to new technologies and standards.
Participation in all aspects of the Big Data solution delivery life cycle, including analysis, design, development, testing, production deployment, and support.
Responsibility for the evaluation of technical risks and mapping out mitigation strategies.
Working knowledge of any of the cloud platforms: AWS, Azure, or GCP.
Excellent business communication, consulting, and quality process skills.
Excellence in leading solution architecture, design, build, and execution for leading clients in the Banking, Wealth & Asset Management, or Insurance domains.
Minimum 7 years of hands-on experience in one or more of the above areas.
Minimum 10 years of industry experience.

Ideally, you’ll also have

Strong project management skills
Client management skills
Solutioning skills

What We Look For

People with technical experience and enthusiasm to learn new things in this fast-moving environment.

What Working At EY Offers

At EY, we’re dedicated to helping our clients, from start-ups to Fortune 500 companies, and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer:

Support, coaching and feedback from some of the most engaging colleagues around
Opportunities to develop new skills and progress your career
The freedom and flexibility to handle your role in a way that’s right for you

EY | Building a better working world

EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets. Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow. EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.
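Real-time ingestion with Kafka and Spark Streaming, as described in the responsibilities above, typically aggregates events into event-time windows. Below is a minimal, hypothetical pure-Python sketch of a tumbling-window count; a production job would use Spark Structured Streaming's windowing over a Kafka source, and the event names here are invented:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds=60):
    """Group (epoch_seconds, key) events into fixed, non-overlapping windows,
    mimicking a streaming engine's event-time window aggregation."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        # Align each event to the start of its window.
        window_start = ts - (ts % window_seconds)
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}

events = [(0, "click"), (30, "view"), (61, "click"), (65, "click")]
per_window = tumbling_window_counts(events)
```

A real streaming job would additionally handle late data with watermarks, which this batch-style sketch deliberately omits.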

Posted 3 days ago

Apply

0 years

0 - 1 Lacs

Thiruvananthapuram

On-site

Data Science and AI Developer

**Job Description:**

We are seeking a highly skilled and motivated Data Science and AI Developer to join our dynamic team. As a Data Science and AI Developer, you will be responsible for leveraging cutting-edge technologies to develop innovative solutions that drive business insights and enhance decision-making processes.

**Key Responsibilities:**

1. Develop and deploy machine learning models for predictive analytics, classification, clustering, and anomaly detection.
2. Design and implement algorithms for data mining, pattern recognition, and natural language processing.
3. Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
4. Utilize advanced statistical techniques to analyze complex datasets and extract actionable insights.
5. Implement scalable data pipelines for data ingestion, preprocessing, feature engineering, and model training.
6. Stay updated with the latest advancements in data science, machine learning, and artificial intelligence research.
7. Optimize model performance and scalability through experimentation and iteration.
8. Communicate findings and results to stakeholders through reports, presentations, and visualizations.
9. Ensure compliance with data privacy regulations and best practices in data handling and security.
10. Mentor junior team members and provide technical guidance and support.

**Requirements:**

1. Bachelor’s or Master’s degree in Computer Science, Data Science, Statistics, or a related field.
2. Proven experience in developing and deploying machine learning models in production environments.
3. Proficiency in programming languages such as Python, R, or Scala, with strong software engineering skills.
4. Hands-on experience with machine learning libraries/frameworks such as TensorFlow, PyTorch, Scikit-learn, or Spark MLlib.
5. Solid understanding of data structures, algorithms, and computer science fundamentals.
6. Excellent problem-solving skills and the ability to think creatively to overcome challenges.
7. Strong communication and interpersonal skills, with the ability to work effectively in a collaborative team environment.
8. Certification in Data Science, Machine Learning, or Artificial Intelligence (e.g., Coursera, edX, Udacity, etc.).
9. Experience with cloud platforms such as AWS, Azure, or Google Cloud is a plus.
10. Familiarity with big data technologies (e.g., Hadoop, Spark, Kafka) is an advantage.

Data Manipulation and Analysis: NumPy, Pandas
Data Visualization: Matplotlib, Seaborn, Power BI
Machine Learning Libraries: Scikit-learn, TensorFlow, Keras
Statistical Analysis: SciPy
Web Scraping: Scrapy
IDE: PyCharm, Google Colab

HTML/CSS/JavaScript/React JS: Proficiency in these core web development technologies is a must.
Python Django Expertise: In-depth knowledge of e-commerce functionalities or deep Python Django knowledge.
Theming: Proven experience in designing and implementing custom themes for Python websites.
Responsive Design: Strong understanding of responsive design principles and the ability to create visually appealing and user-friendly interfaces for various devices.
Problem Solving: Excellent problem-solving skills with the ability to troubleshoot and resolve issues independently.
Collaboration: Ability to work closely with cross-functional teams, including marketing and design, to bring creative visions to life.
Interns must also know how to connect the front end with the data science layer.

**Benefits:**

- Competitive salary package
- Flexible working hours
- Opportunities for career growth and professional development
- Dynamic and innovative work environment

Job Type: Full-time
Pay: ₹8,000.00 - ₹12,000.00 per month
Ability to commute/relocate: Thiruvananthapuram, Kerala: Reliably commute or planning to relocate before starting work (Preferred)
Work Location: In person
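One of the model types listed above, anomaly detection, can be sketched in a few lines with a simple z-score rule using only the Python standard library. The threshold and readings below are illustrative inventions, not part of the job description:

```python
import statistics

def zscore_anomalies(values, threshold=3.0):
    """Flag points further than `threshold` standard deviations from the mean."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # All values identical: nothing can be anomalous.
    return [v for v in values if abs(v - mean) / stdev > threshold]

readings = [10, 11, 9, 10, 12, 10, 11, 95]
outliers = zscore_anomalies(readings, threshold=2.0)
```

Production systems would typically use robust statistics (median/MAD) or a learned model such as an isolation forest, but the z-score rule conveys the core idea of distance-from-normal.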

Posted 3 days ago

Apply

12.0 years

6 - 8 Lacs

Hyderābād

On-site

Welcome to Warner Bros. Discovery… the stuff dreams are made of.

Who We Are…

When we say, “the stuff dreams are made of,” we’re not just referring to the world of wizards, dragons and superheroes, or even to the wonders of Planet Earth. Behind WBD’s vast portfolio of iconic content and beloved brands, are the storytellers bringing our characters to life, the creators bringing them to your living rooms and the dreamers creating what’s next… From brilliant creatives, to technology trailblazers, across the globe, WBD offers career defining opportunities, thoughtfully curated benefits, and the tools to explore and grow into your best selves. Here you are supported, here you are celebrated, here you can thrive.

Your New Role:

As the Director ML Engineering & Automation, you will lead a dynamic team at the forefront of transforming the Media & Entertainment industry. In this high-impact role, you will be responsible for driving the strategic direction and execution of ML and AI solutions to enhance content personalization, automate workflows, and provide innovative, data-driven experiences. You will work closely with cross-functional teams, including Data Scientists, Software Engineers, and Product Managers, to deliver cutting-edge solutions that fuel business growth and deliver the next generation of entertainment experiences. This is an exciting opportunity to shape the future of entertainment through AI-driven innovation. In this leadership role, you will have the opportunity to build and scale a world-class ML engineering team, while leveraging your expertise to solve some of the most complex and exciting challenges in the industry. This role combines technical excellence, strategic vision, and team leadership to make a lasting impact on the company's mission.

Your Role Accountabilities:

1. Leadership & Team Building
Lead, mentor, and grow a high-performing ML engineering team, fostering a culture of innovation and continuous improvement.
Define and execute the roadmap for building scalable, high-impact ML solutions that support the company's core business and strategic objectives.
Collaborate closely with leadership across departments, including data science, product management, and IT, to ensure alignment of ML initiatives with business needs.
Establish clear performance metrics and regularly assess the effectiveness of the team and individual contributors.
Promote knowledge-sharing, best practices, and a growth mindset within the team to enhance technical depth and execution efficiency.

2. End-to-End ML Solution Development
Oversee the design, development, deployment, and maintenance of machine learning models and systems that power content recommendation engines, personalization, automation, and other key business areas.
Drive the adoption of best practices in model development, testing, monitoring, and optimization across the team.
Build scalable, production-ready ML pipelines that process vast amounts of data and generate real-time insights.
Ensure that solutions are optimized for performance, cost-efficiency, and maintainability in a cloud-native, microservices environment.
Lead efforts to continuously improve model performance, incorporating user feedback and business metrics.

3. Innovation & Strategy
Identify and evaluate emerging technologies, algorithms, and methodologies in the AI/ML space, integrating them into the company's tech stack to maintain a competitive edge.
Work closely with senior leadership to define AI/ML strategies and influence decision-making on product and business development.
Evaluate and implement advanced techniques in natural language processing (NLP), computer vision, deep learning, reinforcement learning, and other domains as relevant to the media industry.
Provide thought leadership on the application of AI/ML in media and entertainment, positioning the company as a leader in AI-driven innovation.

4. Collaboration & Cross-Functional Engagement
Partner with product management, engineering, and business teams to translate complex business problems into technical ML solutions.
Collaborate on the integration of ML models into products and workflows, ensuring smooth end-to-end delivery from prototype to production.
Act as a trusted advisor to executives and stakeholders on ML capabilities, project status, risks, and business impact.
Drive the development and implementation of data governance, privacy, and security practices to ensure compliance with regulatory requirements.
Facilitate the sharing of ML insights with broader company teams, providing transparency and fostering a data-driven culture.

5. Performance Monitoring & Reporting
Define and track key performance indicators (KPIs) to measure the success of ML initiatives and models.
Oversee the collection and analysis of model performance data, providing regular updates to leadership and stakeholders.
Ensure that deployed models are continuously monitored, maintained, and updated to meet evolving business needs.
Lead post-mortem analyses of model failures and actively drive improvements based on lessons learned.
Utilize data to iterate and refine models to increase their accuracy and efficiency.

Qualifications & Experiences:

Master’s or Ph.D. degree in Computer Science, Engineering, Data Science, Machine Learning, or a related field from a reputed institution.
12+ years of experience in the field of machine learning and AI, with at least 5 years in a leadership or managerial role.
Proven track record of successfully leading and scaling ML engineering teams and delivering large-scale ML projects in a fast-paced environment.
Experience working in the Media & Entertainment industry or related sectors, with knowledge of data-driven content recommendations, personalization, and automation.
Expertise in designing, building, and deploying production-grade machine learning systems at scale.
Experience in leading cross-functional teams to deliver end-to-end machine learning solutions, from conceptualization to deployment and optimization.
Demonstrated ability to influence senior stakeholders and executives, translating technical concepts into business impact.
Strong expertise in machine learning algorithms, deep learning, reinforcement learning, and statistical modeling techniques.
In-depth knowledge of data structures, software engineering principles, and system design.
Experience with distributed computing and cloud technologies (AWS, GCP, Azure) and containerization (Docker, Kubernetes).
Proficiency in programming languages such as Python, Java, or Scala, and familiarity with ML frameworks like TensorFlow, PyTorch, or Keras.

How We Get Things Done…

This last bit is probably the most important! Here at WBD, our guiding principles are the core values by which we operate and are central to how we get things done. You can find them at www.wbd.com/guiding-principles/ along with some insights from the team on what they mean and how they show up in their day to day. We hope they resonate with you and look forward to discussing them during your interview.

Championing Inclusion at WBD

Warner Bros. Discovery embraces the opportunity to build a workforce that reflects a wide array of perspectives, backgrounds and experiences. Being an equal opportunity employer means that we take seriously our responsibility to consider qualified candidates on the basis of merit, regardless of sex, gender identity, ethnicity, age, sexual orientation, religion or belief, marital status, pregnancy, parenthood, disability or any other category protected by law. If you’re a qualified candidate with a disability and you require adjustments or accommodations during the job application and/or recruitment process, please visit our accessibility page for instructions to submit your request.

Posted 3 days ago

Apply

2.0 years

8 - 9 Lacs

Hyderābād

On-site

JOB DESCRIPTION

We have an exciting and rewarding opportunity for you to take your software engineering career to the next level. As a Software Engineer II at JPMorgan Chase within Consumer and Community Banking - Data Technology, you serve as a seasoned member of an agile team to design and deliver trusted, market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm’s business objectives.

Job responsibilities

Oversee all aspects of data strategy, governance, data risk management, reporting, and analytics.
Work with product owners, data owners, and customers to evaluate data requirements and identify the right technology solutions and implementation.
Design, develop, code, test, debug, and deploy scalable and extensible applications.
Produce high-quality code utilizing Test Driven Development techniques.
Participate in retrospectives to drive continuous improvement within the feature team.
Participate in code reviews and ensure that all solutions are aligned to pre-defined architectural specifications.
Implement automation: Continuous Integration and Continuous Delivery.
Manage cloud development and deployment: support development and deployment of applications into AWS public clouds.

Required qualifications, capabilities, and skills

Formal training or certification on software engineering concepts and 2+ years of applied experience.
Advanced knowledge of architecture, design, and business processes.
Full Software Development Life Cycle experience within an Agile framework.
Experience with Java, AWS, database technologies, Python, Scala, Spark, and Snowflake.
Experience with the development and decomposition of complex SQL (RDBMS platforms) and experience with leading cloud providers (AWS/Azure/GCP).
Experience with Data Warehousing concepts (including Star Schema).
Practical experience in delivering projects in Data and Analytics, Big Data, Data Warehousing, and Business Intelligence.
Familiarity with relevant technological solutions and industry best practices.
Good understanding of data engineering challenges and proven experience with data platform engineering (batch and streaming; ingestion, storage, processing, management, integration, consumption).
Awareness of various Data & Analytics tools and techniques (e.g. Python, data mining, predictive analytics, machine learning, data modelling, etc.).

Preferred qualifications, capabilities, and skills

Ability to ramp up quickly on new technologies and strategies.
Ability to work collaboratively in teams and develop meaningful relationships to achieve common goals.
Appreciation of controls and compliance processes for applications and data.
In-depth understanding of data technologies and solutions.
Drive process improvements and implement process changes as necessary.
Knowledge of industry-wide Big Data technology trends and best practices.
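The Star Schema concept mentioned in the qualifications separates a central fact table from dimension tables joined by surrogate keys. A tiny, hypothetical in-memory sketch in Python conveys the join-then-aggregate pattern; in practice this would live in SQL on Snowflake or a similar warehouse, and all table contents below are invented:

```python
# Dimension tables, keyed by surrogate key.
dim_customer = {
    1: {"name": "Acme", "segment": "retail"},
    2: {"name": "Globex", "segment": "wholesale"},
}
dim_date = {20240101: {"year": 2024, "quarter": "Q1"}}

# Fact table rows reference dimensions only by their keys.
fact_sales = [
    {"customer_key": 1, "date_key": 20240101, "amount": 120.0},
    {"customer_key": 2, "date_key": 20240101, "amount": 340.0},
    {"customer_key": 1, "date_key": 20240101, "amount": 60.0},
]

# "Join" facts to the customer dimension and aggregate amount by segment,
# the equivalent of a fact-to-dimension JOIN with GROUP BY in SQL.
totals = {}
for row in fact_sales:
    segment = dim_customer[row["customer_key"]]["segment"]
    totals[segment] = totals.get(segment, 0.0) + row["amount"]
```

Keeping descriptive attributes only in dimensions keeps the fact table narrow, which is the main storage and query-performance benefit of the schema.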

Posted 3 days ago

Apply

4.0 years

10 Lacs

Gurgaon

On-site

Expedia Group brands power global travel for everyone, everywhere. We design cutting-edge tech to make travel smoother and more memorable, and we create groundbreaking solutions for our partners. Our diverse, vibrant, and welcoming community is essential in driving our success.

Why Join Us?

To shape the future of travel, people must come first. Guided by our Values and Leadership Agreements, we foster an open culture where everyone belongs, differences are celebrated, and we know that when one of us wins, we all win. We provide a full benefits package, including exciting travel perks, generous time-off, parental leave, a flexible work model (with some pretty cool offices), and career development resources, all to fuel our employees' passion for travel and ensure a rewarding career journey. We’re building a more open world. Join us.

Introduction to team:

Are you fascinated by data and building robust data pipelines which process massive amounts of data at scale and speed to provide crucial insights to the end customer? This is exactly what we, the Supply and Partner Data Engineering (SPDE) group in Expedia, do. Our mission is 'transforming Expedia’s lodging data assets into Data Products that deliver intelligence and real-time insights for our customers'. We work on building data assets and products to support a variety of applications which are used by 1000+ market managers, analysts, and external hotel partners. We believe in being Different. We seek new ideas, different ways of thinking, diverse backgrounds and approaches, because averages can lie and sameness is dangerous.

In this role, you will:

Work with a team of backend and data engineers to design and code large-scale real-time data pipelines, microservices, and APIs on the AWS platform.
Be accountable for individual tasks and assignments as well as your team's overall productivity.
Define, develop, and maintain artifacts like technical design or user documentation, and look for continuous improvement in software and the development process within an agile development team.
Communicate and work effectively with geographically distributed multi-functional teams.

Experience and qualifications:

Bachelor's or master's degree in a related technical field, or equivalent related professional experience.
4+ years of meaningful work experience in distributed computing and server-side projects.
Comfortable programming in Scala or Java, with hands-on experience in OOAD, design patterns, and SQL.
Passionate about learning, especially in the areas of microservices, APIs, and system architecture.
A couple of years of experience in crafting real-time streaming applications, preferably in Spark, Kafka, or other streaming platforms.
Experience using cloud services (e.g. AWS).
Experience working with Agile/Scrum methodologies.
Familiarity with the e-commerce or travel industry.

Accommodation requests

If you need assistance with any part of the application or recruiting process due to a disability, or other physical or mental health conditions, please reach out to our Recruiting Accommodations Team through the Accommodation Request.

We are proud to be named as a Best Place to Work on Glassdoor in 2024 and be recognized for award-winning culture by organizations like Forbes, TIME, Disability:IN, and others. Expedia Group's family of brands includes: Brand Expedia®, Hotels.com®, Expedia® Partner Solutions, Vrbo®, trivago®, Orbitz®, Travelocity®, Hotwire®, Wotif®, ebookers®, CheapTickets®, Expedia Group™ Media Solutions, Expedia Local Expert®, CarRentals.com™, and Expedia Cruises™. © 2024 Expedia, Inc. All rights reserved. Trademarks and logos are the property of their respective owners.
CST: 2029030-50 Employment opportunities and job offers at Expedia Group will always come from Expedia Group’s Talent Acquisition and hiring teams. Never provide sensitive, personal information to someone unless you’re confident who the recipient is. Expedia Group does not extend job offers via email or any other messaging tools to individuals with whom we have not made prior contact. Our email domain is @expediagroup.com. The official website to find and apply for job openings at Expedia Group is careers.expediagroup.com/jobs. Expedia is committed to creating an inclusive work environment with a diverse workforce. All qualified applicants will receive consideration for employment without regard to race, religion, gender, sexual orientation, national origin, disability or age.

Posted 3 days ago

Apply

5.0 years

4 - 10 Lacs

Gurgaon

On-site

Expedia Group brands power global travel for everyone, everywhere. We design cutting-edge tech to make travel smoother and more memorable, and we create groundbreaking solutions for our partners. Our diverse, vibrant, and welcoming community is essential in driving our success.

Why Join Us? To shape the future of travel, people must come first. Guided by our Values and Leadership Agreements, we foster an open culture where everyone belongs, differences are celebrated, and we know that when one of us wins, we all win. We provide a full benefits package, including exciting travel perks, generous time off, parental leave, a flexible work model (with some pretty cool offices), and career development resources, all to fuel our employees' passion for travel and ensure a rewarding career journey. We’re building a more open world. Join us.

Introduction to team: Our Expedia Product & Technology division builds innovative products, services, and tools to deliver high-quality experiences for travellers, partners, and our employees. A unified, singular technology platform powered by data and machine learning provides secure, differentiated, and personalised experiences for travelers and our partners that drive loyalty and customer satisfaction. The goal of the Media Solutions (MeSo) Tech team is to spearhead advertising products and services across many Expedia brands, including BEX, Hotels.com, Portfolio Brands (COMET and Hotwire.com), Expedia Partner Sites (EPS), and Vrbo. We help our advertisers identify travellers on EG sites, target specific traveller criteria, and then deliver the most relevant products. As a Senior Software Development Engineer, you will propose, design, and implement various initiatives. As a member of the team, you will work in alliance with global teams, providing the technical expertise needed to overcome hard problems. We value rigor and innovative thinking in our development process and believe in the power of a motivated and agile development team.
What you’ll do: Join a high-performing team and have a unique opportunity to make a highly visible impact. Learn best practices and how to constantly raise the bar in terms of engineering excellence. Identify inefficiencies in code or systems operation and offer suggestions for improvements. Expand your skills in developing high-quality, distributed, and scalable software. Share new skills and knowledge with the team to increase efficiency. Write code that is clean, maintainable, and optimized, with good naming conventions. Develop fast, scalable, and highly available services. Participate in code reviews and pull requests.

Who you are: 5+ years of software development work experience using modern languages, e.g. Java/Kotlin. Bachelor’s in computer science or a related technical field, or equivalent related professional experience. Experience in Java, Scala, Kotlin, AWS, Kafka, S3, Lambda, Docker, and Datadog. Problem solver with a good understanding of algorithms, data structures, and distributed applications. Solid understanding of object-oriented programming concepts and test-driven development. Solid understanding of load balancing, caching, and database partitioning to improve application scalability. Demonstrated ability to develop and support large internet-scale software systems. Experience with AWS services. Knowledge of NoSQL databases and cloud computing concepts. Sound understanding of client-side optimization best practices. Ability to quickly pick up new technologies and languages with ease. Working knowledge of Agile software development methodologies. Strong verbal and written communication skills. Passionate about quality of work.

Accommodation requests: If you need assistance with any part of the application or recruiting process due to a disability, or other physical or mental health conditions, please reach out to our Recruiting Accommodations Team through the Accommodation Request.

Posted 3 days ago

Apply

0 years

0 Lacs

India

Remote

Role: NiFi Developer
Notice period: notice-serving candidates or immediate joiners preferred
Client: Marriott
Payroll: Dminds
Work Mode: Remote
Interview Mode: Virtual

We’re looking for someone who has built, deployed, and maintained NiFi clusters.

Roles & Responsibilities:
- Implement solutions utilizing advanced AWS components (EMR, EC2, etc.) integrated with Big Data/Hadoop distribution frameworks: Zookeeper, YARN, Spark, Scala, NiFi, etc.
- Design and implement Spark jobs to be deployed and run on existing active clusters.
- Configure Postgres databases on EC2 instances, keep the application up and running, and troubleshoot issues to reach the desired application state.
- Create and configure secure VPCs, subnets, and security groups across private and public networks.
- Create alarms, alerts, and notifications for Spark jobs that send job status and logs to email, Slack group messages, and CloudWatch.
- Build NiFi data pipelines to process large sets of data, and configure lookups for data validation and integrity.
- Generate large sets of test data with data integrity using Java, for use in the development and QA phases.
- Improve the performance of, and optimize, existing Spark Scala applications running on the EMR cluster.
- Build Spark jobs to convert CSV data to custom HL7/FHIR objects using FHIR APIs.
- Deploy SNS, SQS, Lambda functions, IAM roles, custom policies, and EMR with Spark and Hadoop setup, plus bootstrap scripts to install the additional software needed for the job in QA and production environments, using Terraform scripts.
- Build Spark jobs to perform Change Data Capture (CDC) on Postgres tables and update target tables using JDBC properties.
- Integrate a Kafka publisher into Spark jobs to capture errors from the Spark application and push them into a Postgres table.
- Work extensively on building NiFi data pipelines in a Docker container environment during the development phase.
- Work with the DevOps team to clusterize the NiFi pipeline on EC2 nodes, integrated with Spark, Kafka, and Postgres running on other instances using SSL handshakes, in QA and production environments.
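One of the responsibilities above is Change Data Capture (CDC) on Postgres tables. The merge logic at the heart of CDC can be sketched in plain Python (a minimal sketch: the table shapes, column names, and watermark scheme are illustrative assumptions, not from the posting; a real job would use Spark's JDBC reader against Postgres):

```python
# Minimal CDC sketch: upsert source rows that changed since the last
# watermark into a target table keyed by primary key.
# Column names (id, updated_at) are illustrative.

def apply_cdc(target, source_rows, last_watermark):
    """Upsert rows newer than last_watermark into target (a dict keyed by id).

    Returns the new watermark so the next run only picks up fresh changes.
    """
    new_watermark = last_watermark
    for row in source_rows:
        if row["updated_at"] > last_watermark:
            target[row["id"]] = row  # insert or update
            new_watermark = max(new_watermark, row["updated_at"])
    return new_watermark

target = {1: {"id": 1, "name": "old", "updated_at": 10}}
source = [
    {"id": 1, "name": "new", "updated_at": 20},    # changed row -> update
    {"id": 2, "name": "fresh", "updated_at": 15},  # new row -> insert
    {"id": 3, "name": "stale", "updated_at": 5},   # older than watermark -> skip
]
wm = apply_cdc(target, source, last_watermark=10)
print(wm, target[1]["name"], sorted(target))
```

In a Spark job the same pattern appears as a watermark-filtered JDBC read followed by a merge into the target table.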

Posted 3 days ago

Apply

0 years

0 Lacs

Ghaziabad, Uttar Pradesh, India

Remote

Job Description Position: Data Engineer Intern Location: Remote Duration: 2-6 months Company: Collegepur Type: Unpaid Internship About the Internship: We are seeking a skilled Data Engineer to join our team, with a focus on cloud data storage, ETL processes, and database/data warehouse management. If you are passionate about building robust data solutions and enabling data-driven decision-making, we want to hear from you! Key Responsibilities: 1. Design, develop, and maintain scalable data pipelines to process large datasets from multiple sources, both structured and unstructured. 2. Implement and optimize ETL (Extract, Transform, Load) processes to integrate, clean, and transform data for analytical use. 3. Manage and enhance cloud-based data storage solutions, including data lakes and data warehouses, using platforms such as AWS, Azure, or Google Cloud. 4. Ensure data security, privacy, and compliance with relevant standards and regulations. 5. Collaborate with data scientists, analysts, and software engineers to support data-driven projects and business processes. 6. Monitor and troubleshoot data pipelines to ensure efficient, real-time, and batch data processing. 7. Maintain comprehensive documentation and data mapping across multiple systems. Requirements: 1. Proven experience with cloud platforms (AWS, Azure, or Google Cloud). 2. Strong knowledge of database systems, data warehousing, and data modeling. 3. Proficiency in programming languages such as Python, Java, or Scala. 4. Experience with ETL tools and frameworks (e.g., Airflow, Informatica, Talend). 5. Familiarity with data security, compliance, and governance practices. 6. Excellent analytical, problem-solving, and communication skills. 7. Bachelor’s degree in Computer Science, Information Technology, or related field.
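The ETL responsibilities described above (extract, clean/transform, load) can be sketched end to end in a few lines of plain Python. This is a toy illustration: the field names, cleaning rules, and in-memory "warehouse" are assumptions for the example, not part of the posting.

```python
# Minimal ETL sketch: extract rows from CSV text, validate and transform
# them, and load into an in-memory "warehouse" list.
import csv
import io

RAW = """user_id,country,amount
1,IN,100.5
2,,40.0
3,US,not_a_number
"""

def extract(text):
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    clean = []
    for row in rows:
        try:
            amount = float(row["amount"])  # validate the numeric field
        except ValueError:
            continue                       # drop malformed records
        clean.append({
            "user_id": int(row["user_id"]),
            "country": row["country"] or "UNKNOWN",  # fill missing values
            "amount": amount,
        })
    return clean

def load(rows, warehouse):
    warehouse.extend(rows)
    return len(rows)

warehouse = []
loaded = load(transform(extract(RAW)), warehouse)
print(loaded, warehouse)
```

A production pipeline would swap the CSV string for cloud storage and the list for a warehouse table, but the extract/transform/load separation stays the same.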

Posted 3 days ago

Apply

15.0 years

0 Lacs

Calcutta

On-site

Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must-have skills: Apache Spark
Good-to-have skills: Java, Scala, PySpark
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years of full-time education

Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, encompassing the relevant data platform components. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models, while also engaging in discussions to refine and enhance the data architecture. You will be involved in analyzing requirements, proposing solutions, and ensuring that the data platform aligns with organizational goals and standards. Your role will require you to stay updated with industry trends and best practices to contribute effectively to the team.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Engage in continuous learning to stay abreast of emerging technologies and methodologies.
- Collaborate with cross-functional teams to gather requirements and translate them into technical specifications.

Professional & Technical Skills:
- Must-have skills: proficiency in Apache Spark.
- Good-to-have skills: experience with Java, Scala, PySpark.
- Strong understanding of data processing frameworks and distributed computing.
- Experience with data integration tools and techniques.
- Familiarity with cloud platforms and services related to data engineering.
Additional Information:
- The candidate should have a minimum of 3 years of experience in Apache Spark.
- This position is based at our Kolkata office.
- 15 years of full-time education is required.

Posted 3 days ago

Apply

4.0 years

0 Lacs

Kochi, Kerala, India

On-site

Introduction: In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role And Responsibilities: As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the Azure Cloud Data Platform. You are experienced in building data pipelines to ingest, process, and transform data from files, streams, and databases, and in processing that data with Spark, Python, PySpark, and Hive, HBase, or other NoSQL databases on the Azure Cloud Data Platform or HDFS. You are experienced in developing efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and Big Data technologies built on the platform, and in developing streaming pipelines. You have experience working with Hadoop/Azure ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data and cloud technologies such as Apache Spark and Kafka.

Preferred Education: Master's Degree

Required Technical And Professional Expertise: Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark with Python or Scala; minimum 3 years of experience on cloud data platforms on Azure; experience in Databricks / Azure HDInsight / Azure Data Factory, Synapse, SQL Server DB; good to excellent SQL skills.

Preferred Technical And Professional Experience: Certification in Azure, and Databricks- or Cloudera Spark-certified developers; experience in Databricks / Azure HDInsight / Azure Data Factory, Synapse, SQL Server DB; knowledge or experience of Snowflake will be an added advantage.
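The streaming-pipeline skills above come down to micro-batch processing: consume events in small batches and maintain keyed aggregates in a store. A toy Python sketch of that model (a dict stands in for HBase/Cassandra, and the batch loop stands in for what Spark Structured Streaming does per trigger; event names are made up):

```python
# Micro-batch sketch: each batch of (key, value) events updates keyed
# running aggregates, the conceptual core of a stateful streaming job.
from collections import defaultdict

def process_batch(events, store):
    """Fold one micro-batch of (key, value) events into the keyed store."""
    for key, value in events:
        store[key] += value

store = defaultdict(int)
batches = [
    [("clicks", 3), ("views", 10)],
    [("clicks", 2), ("views", 5), ("orders", 1)],
]
for batch in batches:
    process_batch(batch, store)
print(dict(store))
```

In a real pipeline the batches arrive from Kafka or files and the store is a fault-tolerant state backend, but the fold-per-batch structure is the same.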

Posted 3 days ago

Apply

10.0 years

0 Lacs

Kanayannur, Kerala, India

On-site

At EY, we’re all in to shape your future with confidence. We’ll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world.

EY GDS – Data and Analytics (D&A) – Cloud Architect - Manager

As part of our EY-GDS D&A (Data and Analytics) team, we help our clients solve complex business challenges with the help of data and technology. We dive deep into data to extract the greatest value and discover opportunities in key business functions like Banking, Insurance, Healthcare, Retail, Manufacturing and Auto, Supply Chain, and Finance.

The opportunity: We’re looking for Senior Managers (GTM + Cloud/Big Data Architects) with strong technology and data understanding and proven capability in delivery and pre-sales. This is a fantastic opportunity to be part of a leading firm as well as a growing Data and Analytics team.

Your Key Responsibilities: Proven experience in driving Analytics GTM/pre-sales by collaborating with senior stakeholders in client and partner organizations in BCM, WAM, and Insurance. Activities will include pipeline building, RFP responses, creating new solutions and offerings, and conducting workshops, as well as managing in-flight projects focused on cloud and big data. You will need to work with clients to convert business problems and challenges into technical solutions, considering security, performance, scalability, etc. [10-15 years]. You will need to understand current and future state enterprise architecture and contribute to various technical streams during project implementation.
Provide product- and design-level technical best practices. Interact with senior client technology leaders, understand their business goals, and create, architect, propose, develop, and deliver technology solutions. Define and develop client-specific best practices around data management within a Hadoop or cloud environment. Recommend design alternatives for data ingestion, processing, and provisioning layers. Design and develop data ingestion programs to process large data sets in batch mode using Hive, Pig, Sqoop, and Spark. Develop data ingestion programs to ingest real-time data from live sources using Apache Kafka, Spark Streaming, and related technologies.

Skills And Attributes For Success: Architect experience designing highly scalable solutions on Azure, AWS, and GCP. Strong understanding of and familiarity with Azure/AWS/GCP and Big Data ecosystem components. Strong understanding of underlying Azure/AWS/GCP architectural concepts and distributed computing paradigms. Hands-on programming experience in Apache Spark using Python/Scala and Spark Streaming. Hands-on experience with major components like cloud ETLs, Spark, and Databricks. Experience working with NoSQL in at least one of the data stores: HBase, Cassandra, MongoDB. Knowledge of Spark and Kafka integration, with multiple Spark jobs consuming messages from multiple Kafka partitions. Solid understanding of ETL methodologies in a multi-tiered stack, integrating with Big Data systems like Cloudera and Databricks. Strong understanding of underlying Hadoop architectural concepts and distributed computing paradigms. Good knowledge of Apache Kafka and Apache Flume. Experience in enterprise-grade solution implementations.
Experience in performance benchmarking of enterprise applications. Experience in data security [on the move, at rest]. Strong UNIX operating system concepts and shell scripting knowledge.

To qualify for the role, you must have: A flexible and proactive, self-motivated working style with strong personal ownership of problem resolution. Excellent communication skills (written and verbal, formal and informal). Ability to multi-task under pressure and work independently with minimal supervision. Must be a team player and enjoy working in a cooperative and collaborative team environment. Adaptable to new technologies and standards. Participation in all aspects of the Big Data solution delivery life cycle, including analysis, design, development, testing, production deployment, and support. Responsibility for the evaluation of technical risks and mapping out mitigation strategies. Working knowledge of any of the cloud platforms: AWS, Azure, or GCP. Excellent business communication, consulting, and quality process skills. Excellence in leading solution architecture, design, build, and execution for leading clients in the Banking, Wealth and Asset Management, or Insurance domains. Minimum 7 years of hands-on experience in one or more of the above areas. Minimum 10 years of industry experience.

Ideally, you’ll also have: Strong project management skills. Client management skills. Solutioning skills.

What We Look For: People with technical experience and enthusiasm to learn new things in this fast-moving environment.

What Working At EY Offers: At EY, we’re dedicated to helping our clients, from start-ups to Fortune 500 companies, and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development.
We value our employees and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer: Support, coaching and feedback from some of the most engaging colleagues around Opportunities to develop new skills and progress your career The freedom and flexibility to handle your role in a way that’s right for you EY | Building a better working world EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets. Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow. EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.
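The posting above asks for knowledge of Spark jobs consuming messages from multiple Kafka partitions. The underlying idea, that messages are routed to partitions by key hash so per-key ordering is preserved and consumers can share disjoint partition sets, can be illustrated in plain Python. This is an in-memory sketch, not the Kafka client API; the byte-sum hash and the topic keys are stand-in assumptions (Kafka itself uses murmur2).

```python
# Kafka-style partitioning sketch: route each message to a partition by
# hashing its key, so all messages for one key stay ordered in one partition.

NUM_PARTITIONS = 4

def partition_for(key):
    # Stand-in for Kafka's murmur2 key hash.
    return sum(key.encode()) % NUM_PARTITIONS

partitions = {p: [] for p in range(NUM_PARTITIONS)}
for key, value in [("hotel-1", "e1"), ("hotel-2", "e2"), ("hotel-1", "e3")]:
    partitions[partition_for(key)].append((key, value))

# Same key always lands in the same partition, preserving per-key ordering;
# each consumer (or Spark task) can then own a disjoint set of partitions.
p = partition_for("hotel-1")
print(p, [v for _, v in partitions[p]])
```

Parallelism in a consuming Spark job is then bounded by the partition count, which is why partitioning strategy comes up alongside Spark/Kafka integration.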

Posted 3 days ago

Apply

10.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

At EY, we’re all in to shape your future with confidence. We’ll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world.

EY GDS – Data and Analytics (D&A) – Cloud Architect - Manager

As part of our EY-GDS D&A (Data and Analytics) team, we help our clients solve complex business challenges with the help of data and technology. We dive deep into data to extract the greatest value and discover opportunities in key business functions like Banking, Insurance, Healthcare, Retail, Manufacturing and Auto, Supply Chain, and Finance.

The opportunity: We’re looking for Senior Managers (GTM + Cloud/Big Data Architects) with strong technology and data understanding and proven capability in delivery and pre-sales. This is a fantastic opportunity to be part of a leading firm as well as a growing Data and Analytics team.

Your Key Responsibilities: Proven experience in driving Analytics GTM/pre-sales by collaborating with senior stakeholders in client and partner organizations in BCM, WAM, and Insurance. Activities will include pipeline building, RFP responses, creating new solutions and offerings, and conducting workshops, as well as managing in-flight projects focused on cloud and big data. You will need to work with clients to convert business problems and challenges into technical solutions, considering security, performance, scalability, etc. [10-15 years]. You will need to understand current and future state enterprise architecture and contribute to various technical streams during project implementation.
Provide product- and design-level technical best practices. Interact with senior client technology leaders, understand their business goals, and create, architect, propose, develop, and deliver technology solutions. Define and develop client-specific best practices around data management within a Hadoop or cloud environment. Recommend design alternatives for data ingestion, processing, and provisioning layers. Design and develop data ingestion programs to process large data sets in batch mode using Hive, Pig, Sqoop, and Spark. Develop data ingestion programs to ingest real-time data from live sources using Apache Kafka, Spark Streaming, and related technologies.

Skills And Attributes For Success: Architect experience designing highly scalable solutions on Azure, AWS, and GCP. Strong understanding of and familiarity with Azure/AWS/GCP and Big Data ecosystem components. Strong understanding of underlying Azure/AWS/GCP architectural concepts and distributed computing paradigms. Hands-on programming experience in Apache Spark using Python/Scala and Spark Streaming. Hands-on experience with major components like cloud ETLs, Spark, and Databricks. Experience working with NoSQL in at least one of the data stores: HBase, Cassandra, MongoDB. Knowledge of Spark and Kafka integration, with multiple Spark jobs consuming messages from multiple Kafka partitions. Solid understanding of ETL methodologies in a multi-tiered stack, integrating with Big Data systems like Cloudera and Databricks. Strong understanding of underlying Hadoop architectural concepts and distributed computing paradigms. Good knowledge of Apache Kafka and Apache Flume. Experience in enterprise-grade solution implementations.
Experience in performance benchmarking of enterprise applications. Experience in data security [on the move, at rest]. Strong UNIX operating system concepts and shell scripting knowledge.

To qualify for the role, you must have: A flexible and proactive, self-motivated working style with strong personal ownership of problem resolution. Excellent communication skills (written and verbal, formal and informal). Ability to multi-task under pressure and work independently with minimal supervision. Must be a team player and enjoy working in a cooperative and collaborative team environment. Adaptable to new technologies and standards. Participation in all aspects of the Big Data solution delivery life cycle, including analysis, design, development, testing, production deployment, and support. Responsibility for the evaluation of technical risks and mapping out mitigation strategies. Working knowledge of any of the cloud platforms: AWS, Azure, or GCP. Excellent business communication, consulting, and quality process skills. Excellence in leading solution architecture, design, build, and execution for leading clients in the Banking, Wealth and Asset Management, or Insurance domains. Minimum 7 years of hands-on experience in one or more of the above areas. Minimum 10 years of industry experience.

Ideally, you’ll also have: Strong project management skills. Client management skills. Solutioning skills.

What We Look For: People with technical experience and enthusiasm to learn new things in this fast-moving environment.

What Working At EY Offers: At EY, we’re dedicated to helping our clients, from start-ups to Fortune 500 companies, and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development.
We value our employees and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer: Support, coaching and feedback from some of the most engaging colleagues around Opportunities to develop new skills and progress your career The freedom and flexibility to handle your role in a way that’s right for you EY | Building a better working world EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets. Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow. EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.

Posted 3 days ago

Apply

0 years

0 Lacs

India

On-site

Job Description: We are seeking a highly skilled Azure Data Engineer (4+ years) to design, develop, and optimize data pipelines and data integration solutions in a cloud-based environment. The ideal candidate will have strong technical expertise in Azure, data engineering tools, and advanced ETL design, along with excellent communication and problem-solving skills.

Key Responsibilities: Design and develop advanced ETL pipelines for data ingestion and egress for batch data. Build scalable data solutions using Azure Data Factory (ADF), Databricks, Spark (PySpark and Scala Spark), and other Azure services. Troubleshoot data jobs, identify issues, and implement effective root-cause solutions. Collaborate with stakeholders to gather requirements and propose efficient solution designs. Ensure data quality, reliability, and adherence to best practices in data engineering. Maintain detailed documentation of problem definitions, solutions, and architecture. Work independently with minimal supervision while ensuring project deadlines are met.

Required Skills & Qualifications: Microsoft Certified: Azure Fundamentals (preferred). Microsoft Certified: Azure Data Engineer Associate (preferred). Proficiency in SQL, Python, and Scala. Strong knowledge of Azure cloud services, ADF, and Databricks. Hands-on experience with Apache Spark (PySpark and Scala Spark). Expertise in designing and implementing complex ETL pipelines for batch data. Strong troubleshooting skills with the ability to perform root cause analysis.

Soft Skills: Excellent verbal and written communication skills. Strong documentation skills for drafting problem definitions and solutions. Ability to effectively gather requirements and propose solution designs. Self-driven, with the ability to work independently with minimal supervision.
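Troubleshooting data jobs and root cause analysis, as the posting above asks for, usually start with retries that record every attempt so failures can be traced from logs. A minimal Python sketch of that pattern (job and error names are illustrative; backoff delays are computed rather than slept so the example runs instantly):

```python
# Retry-with-backoff sketch: rerun a flaky batch job, doubling the backoff
# delay each time, and keep a per-attempt log for root cause analysis.

def run_with_retries(job, max_attempts=4, base_delay=1.0):
    attempts = []  # (attempt number, outcome, backoff delay) for later triage
    delay = base_delay
    for attempt in range(1, max_attempts + 1):
        try:
            result = job()
            attempts.append((attempt, "ok", 0.0))
            return result, attempts
        except RuntimeError as exc:
            attempts.append((attempt, str(exc), delay))
            delay *= 2  # exponential backoff between attempts
    raise RuntimeError(f"job failed after {max_attempts} attempts")

calls = {"n": 0}
def flaky_job():
    """Simulated batch job that fails twice with a transient error."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient connection error")
    return "loaded 1000 rows"

result, attempts = run_with_retries(flaky_job)
print(result, attempts)
```

The attempt log is what turns a flaky failure into a diagnosable one: it shows whether the error is transient (succeeds on retry) or persistent (exhausts all attempts).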

Posted 3 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies