
8236 Hadoop Jobs - Page 6

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

0 years

3 - 6 Lacs

Gurgaon

On-site

Our Purpose

Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Title and Summary

Consultant, Advisors & Consulting Services, Performance Analytics

Consultant – Performance Analytics, Advisors & Consulting Services

Services within Mastercard is responsible for acquiring, engaging, and retaining customers by managing fraud and risk, enhancing cybersecurity, and improving the digital payments experience. We provide value-added services and leverage expertise, data-driven insights, and execution. Our Advisors & Consulting Services team combines traditional management consulting with Mastercard’s rich data assets, proprietary platforms, and technologies to provide clients with powerful strategic insights and recommendations. Our teams work with a diverse global customer base across industries, from banking and payments to retail and restaurants.

The Advisors & Consulting Services group has five specializations: Strategy & Transformation, Performance Analytics, Business Experimentation, Marketing, and Program Management. Our Performance Analytics consultants translate data into insights by leveraging Mastercard and customer data to design, implement, and scale analytical solutions for customers. They use qualitative and quantitative analytical techniques and enterprise applications to synthesize analyses into clear recommendations and impactful narratives. Positions for different specializations and levels are available in separate job postings.
Please review our consulting specializations to learn more about all opportunities and apply for the position that is best suited to your background and experience: https://careers.mastercard.com/us/en/consulting-specializations-at-mastercard

Roles and Responsibilities

Client Impact
● Provide creative input on projects across a range of industries and problem statements
● Contribute to the development of analytics strategies and programs for regional and global clients by leveraging data and technology solutions to unlock client value
● Collaborate with the Mastercard team to understand clients’ needs, agenda, and risks
● Develop working relationships with client analysts and managers, and act as a trusted and reliable partner

Team Collaboration & Culture
● Collaborate with senior project delivery consultants to identify key findings, prepare effective presentations, and deliver recommendations to clients
● Independently identify trends, patterns, issues, and anomalies in a defined area of analysis, and structure and synthesize your own analysis to highlight relevant findings
● Lead internal and client meetings, and contribute to project management
● Contribute to the firm's intellectual capital
● Receive mentorship from performance analytics leaders for professional growth and development

Qualifications

Basic qualifications
● Undergraduate degree with data and analytics experience in business intelligence and/or descriptive, predictive, or prescriptive analytics
● Experience managing clients or internal stakeholders
● Ability to analyze large datasets and synthesize key findings
● Proficiency using data analytics software (e.g., Python, R, SQL, SAS)
● Advanced Word, Excel, and PowerPoint skills
● Ability to perform multiple tasks with multiple clients in a fast-paced, deadline-driven environment
● Ability to communicate effectively in English and the local office language (if applicable)
● Eligibility to work in the country where you are applying, as well as to apply for travel visas as required by travel needs

Preferred qualifications
● Additional data and analytics experience in building, managing, and maintaining database structures, working with data visualization tools (e.g., Tableau, Power BI), or working with the Hadoop framework and coding using Impala, Hive, or PySpark (see the sketch below)
● Ability to analyze large datasets and synthesize key findings to provide recommendations via descriptive analytics and business intelligence
● Experience managing tasks or workstreams in a collaborative team environment
● Ability to identify problems, brainstorm and analyze answers, and implement the best solutions
● Relevant industry expertise

Corporate Security Responsibility

All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization; therefore, every person working for, or on behalf of, Mastercard is responsible for information security and must:
● Abide by Mastercard’s security policies and practices;
● Ensure the confidentiality and integrity of the information being accessed;
● Report any suspected information security violation or breach; and
● Complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines.
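
As a rough illustration of the preferred PySpark skills above, here is a minimal descriptive-analytics sketch; the dataset, column names, and aggregation are invented examples, not Mastercard data or methodology.

# Minimal PySpark sketch of the descriptive-analytics work described above.
# The "transactions" data and its columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("performance-analytics-sketch").getOrCreate()

transactions = spark.createDataFrame(
    [("grocery", 42.50), ("travel", 310.00), ("grocery", 18.75), ("dining", 64.20)],
    ["merchant_category", "spend_amount"],
)

# Summarize spend by category -- the "analyze large datasets and synthesize
# key findings" step that precedes the client narrative.
summary = (
    transactions.groupBy("merchant_category")
    .agg(
        F.count("*").alias("txn_count"),
        F.round(F.sum("spend_amount"), 2).alias("total_spend"),
    )
    .orderBy(F.desc("total_spend"))
)
summary.show()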

Posted 2 days ago

Apply

10.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Reference #: 321622BR
Job Type: Full Time

Your role

Are you an analytical thinker with experience in big data? Do you excel at developing innovative solutions? We are looking for a Data Developer with practical knowledge of Python and expertise in Semantic Web technologies. You will:
● Design, prototype, build, and maintain new data pipeline features on our data platform, as well as support existing ones through debugging and optimization.
● Implement quality assurance and data quality checks to ensure the completeness, validity, consistency, and integrity of data as it flows through the pipeline.
● Collaborate closely with a global team of researchers, engineers, and business analysts to build innovative data solutions.

Your team

You will be part of a nimble, multi-disciplinary Data Architecture team within Group CTO, collaborating closely with specialists across various areas of Group Technology. Our team provides the foundation for data-driven technology management, facilitating processes from strategic and architecture planning to demand management, development, and deployment. The team is globally distributed, with members primarily based in Switzerland, the UK, and the US.

Your expertise

You have:
● 10+ years of proven, hands-on experience in the development and design of data platforms, with a strong emphasis on data ingestion and integration.
● Interest in linked data and Semantic Web technologies as enablers for data science and machine learning (see the sketch below).
● Strong command of application, data, and infrastructure architecture disciplines.
● Experience working in agile, delivery-oriented teams.

Desired:
● University degree, preferably in a technical or quantitative field such as statistics, computer science, or mathematics.
● Strong command of Python; proficiency in other languages (e.g., C++, Java) is desirable.
● Strong understanding of data and databases (SQL, NoSQL, triplestores, Hadoop, etc.).
● Experience with efficient processing of large datasets in a production system.
● Understanding of data structures and data manipulation techniques, including classification, parsing, and pattern matching.
● Experience with Machine Learning and Artificial Intelligence is a plus.

You are:
● Willing to take full ownership of problems and code, with the ability to hit the ground running and deliver exceptional solutions.
● A strong problem solver who anticipates issues and resolves them proactively.
● Skilled in communicating effectively with both technical and non-technical audiences.

About Us

UBS is the world’s largest and the only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors. We have a presence in all major financial centers in more than 50 countries.

How We Hire

We may request you to complete one or more assessments during the application process. Learn more.

Join us

At UBS, we know that it's our people, with their diverse skills, experiences and backgrounds, who drive our ongoing success. We’re dedicated to our craft and passionate about putting our people first, with new challenges, a supportive team, opportunities to grow and flexible working options when possible. Our inclusive culture brings out the best in our employees, wherever they are on their career journey. We also recognize that great work is never done alone. That’s why collaboration is at the heart of everything we do.
Because together, we’re more than ourselves. We’re committed to disability inclusion, and if you need reasonable accommodations/adjustments throughout our recruitment process, you can always contact us.

Disclaimer / Policy Statements

UBS is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce.
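
Since the role pairs Python with Semantic Web technologies, here is a small, hedged sketch using rdflib, a common Python choice for linked data (its use at UBS is not stated in the posting); the namespace and triples are invented.

# Tiny illustration of linked-data handling in Python with rdflib.
# The graph contents below are invented examples, not UBS data.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.appServer1, RDF.type, EX.Application))
g.add((EX.appServer1, EX.ownedBy, Literal("Group CTO")))

# SPARQL query over the in-memory graph: find every application and its owner.
results = g.query(
    """
    SELECT ?app ?owner WHERE {
        ?app a <http://example.org/Application> ;
             <http://example.org/ownedBy> ?owner .
    }
    """
)
for app, owner in results:
    print(app, owner)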

Posted 2 days ago

Apply

6.0 years

6 - 9 Lacs

Chennai

On-site

Job Description:

About Us

At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. Responsible Growth is how we run our company and how we deliver for our clients, teammates, communities, and shareholders every day. One of the keys to driving Responsible Growth is being a great place to work for our teammates around the world. We’re devoted to being a diverse and inclusive workplace for everyone. We hire individuals with a broad range of backgrounds and experiences and invest heavily in our teammates and their families by offering competitive benefits to support their physical, emotional, and financial well-being. Bank of America believes both in the importance of working together and offering flexibility to our employees. We use a multi-faceted approach for flexibility, depending on the various roles in our organization. Working at Bank of America will give you a great career with opportunities to learn, grow and make an impact, along with the power to make a difference. Join us!

Global Business Services

Global Business Services delivers Technology and Operations capabilities to Lines of Business and Staff Support Functions of Bank of America through a centrally managed, globally integrated delivery model and globally resilient operations. Global Business Services is recognized for flawless execution, sound risk management, operational resiliency, operational excellence, and innovation. In India, we are present in five locations and operate as BA Continuum India Private Limited (BACI), a non-banking subsidiary of Bank of America Corporation and the operating company for India operations of Global Business Services.

Process Overview*

The GBS Apps team is part of CBWT & GBS Application Technology under Global Business Services. This team manages end-to-end application design, development, governance, and management. Key capabilities include portfolio management, application development and testing, quality assurance, reporting, application management and governance, and production support.

Job Description*

The full stack developer is responsible for designing, developing, and maintaining web applications across the entire stack, utilizing Angular for the frontend, .NET Core for the backend, and SQL Server for database management. This role involves collaborating with cross-functional teams to deliver robust, scalable, and high-performance solutions.

Responsibilities*
● Develop and maintain responsive and user-friendly web interfaces using Angular, HTML, CSS, and TypeScript.
● Design, develop, and implement backend services and APIs using .NET Core (C#).
● Manage and optimize SQL Server databases, including schema design, query optimization, stored procedures, and data integrity.
● Leverage GitHub Copilot to enhance code quality and development efficiency through AI-powered code suggestions and automations.
● Collaborate with product managers, UI/UX designers, and other developers to translate business requirements into technical specifications and solutions.
● Implement and integrate RESTful APIs and ensure secure and efficient data exchange between frontend and backend systems.
● Participate in code reviews, enforce coding standards, and ensure high code quality and maintainability.
● Troubleshoot, debug, and resolve technical issues in existing applications.
● Contribute to the entire software development lifecycle, from conceptualization and design to deployment and maintenance.
● Stay updated with emerging technologies and industry best practices.
● Perform unit, integration, and functional testing to ensure application reliability and quality.
● Develop and maintain CI/CD pipelines to ensure smooth code integration, automated testing, and efficient deployment processes.
● Manage the entire software development lifecycle, from requirement analysis and design to development, testing, and deployment.

Requirements*
● Proficiency in Angular, including experience in component-based architecture, routing, and state management.
● Strong expertise in .NET Core, C#, ASP.NET Core, and developing RESTful APIs.
● Extensive experience with SQL Server, including writing complex queries, stored procedures, and database optimization techniques.
● Solid understanding of object-oriented programming principles and design patterns.
● Experience with version control systems such as Git.
● Familiarity with Agile development methodologies.
● Excellent problem-solving, analytical, and communication skills.

Education*
Graduation / Post Graduation: B.Tech / BE / MCA.
Certifications, if any: NA

Experience Range*
6 to 10 years.

Foundational Skills*
Front-end technologies: Angular, HTML/CSS, JavaScript, TypeScript.
Back-end technologies: C#, .NET Core, ASP.NET Core, RESTful APIs, and microservices architecture.
Database management: SQL Server.
Other essential skills: version control, problem-solving skills, web architecture, Agile methodology, design patterns, and testing.

Desired Skills*
Hadoop and HDFS knowledge. Python.

Work Timings*
10:30 AM to 07:30 PM.

Job Location*
Chennai.

Posted 2 days ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

LiveRamp is the data collaboration platform of choice for the world’s most innovative companies. A groundbreaking leader in consumer privacy, data ethics, and foundational identity, LiveRamp is setting the new standard for building a connected customer view with unmatched clarity and context while protecting precious brand and consumer trust. LiveRamp offers complete flexibility to collaborate wherever data lives to support the widest range of data collaboration use cases—within organizations, between brands, and across its premier global network of top-quality partners. Hundreds of global innovators, from iconic consumer brands and tech giants to banks, retailers, and healthcare leaders, turn to LiveRamp to build enduring brand and business value by deepening customer engagement and loyalty, activating new partnerships, and maximizing the value of their first-party data while staying on the forefront of rapidly evolving compliance and privacy requirements.

LiveRamp is looking for a Staff Backend Engineer to join our team and help build the Unified Segment Builder (USB) — the next-generation, comprehensive segmentation solution for creating precise, real-time, and meaningful audiences. USB is a foundational pillar in LiveRamp’s product ecosystem. It empowers customers to create powerful audience segments using 1st-, 2nd-, and 3rd-party data, with support for combining, excluding, and overlapping datasets (see the sketch below). The solution is designed for scale, performance, and usability — replacing legacy segmentation tools and delivering a unified, world-class user experience. We are also rolling out AI-powered segment-building capabilities based on USB, aiming to boost efficiency and expand the use cases beyond traditional campaign planners.

You Will
● Collaborate with APAC engineers, and partner closely with US-based product and UX teams.
● Design and implement scalable backend systems, APIs, and infrastructure powering USB and other core LiveRamp products.
● Lead cross-functional technical discussions, drive architectural decisions, and evangelize engineering best practices across teams.
● Mentor engineers and contribute to the technical leadership of the local team.
● Ensure operational excellence by building reliable, observable, and maintainable production systems.
● Help rearchitect our existing systems to provide a more powerful and flexible data processing environment at scale.

Your Team Will
● Design, build, and scale USB and related segment-building products critical to LiveRamp’s success.
● Collaborate with engineering, product, DevOps, SRE, and QA teams to deliver new features and improvements.
● Build systems that integrate with the broader LiveRamp Data Collaboration Platform.
● Continuously improve quality, performance, and developer experience for internal tools and services.

About You
● 8+ years of experience writing and deploying production-grade backend code.
● Strong programming skills in Java, Python, Kotlin, or Go.
● 3+ years of experience working with big data technologies such as Apache Spark, Hadoop/MapReduce, and Kafka.
● Extensive experience with containerization and orchestration technologies, including Docker and Kubernetes, for building and managing scalable, reliable services.
● Proven experience designing and delivering large-scale distributed systems in production environments.
● Strong track record of contributing to or leading architectural efforts for complex systems.
● Hands-on experience with cloud platforms, ideally GCP (AWS or Azure also acceptable).
● Proficiency with Spring Boot and modern backend frameworks.
● Experience working with distributed databases (e.g., SingleStore, ClickHouse).

Bonus Points
● Familiarity with building AI-enabled applications, especially those involving LLMs or generative AI workflows.
● Experience with the LangChain or LangGraph frameworks for orchestrating multi-step AI agents is a strong plus.

Benefits
● Flexible paid time off, paid holidays, options for working from home, and paid parental leave.
● Comprehensive Benefits Package: LiveRamp offers a comprehensive benefits package designed to help you be your best self in your personal and professional lives. Our benefits package offers medical, dental, vision, accident, life and disability, an employee assistance program, voluntary benefits as well as perks programs for your healthy lifestyle, career growth, and more.
● Your medical benefits extend to your dependents, including parents.

More About Us

LiveRamp’s mission is to connect data in ways that matter, and doing so starts with our people. We know that inspired teams enlist people from a blend of backgrounds and experiences. And we know that individuals do their best when they not only bring their full selves to work but feel like they truly belong. Connecting LiveRampers to new ideas and one another is one of our guiding principles—one that informs how we hire, train, and grow our global team across nine countries and four continents. Click here to learn more about Diversity, Inclusion, & Belonging (DIB) at LiveRamp.

To all recruitment agencies: LiveRamp does not accept agency resumes. Please do not forward resumes to our jobs alias, LiveRamp employees, or any other company location. LiveRamp is not responsible for any fees related to unsolicited resumes.
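
To make the combine/exclude/overlap semantics above concrete, here is a hedged PySpark sketch over hypothetical identifier sets; it uses standard DataFrame set operations and is not LiveRamp's USB implementation.

# Sketch of combine / exclude / overlap segment operations using plain
# PySpark set operations on hypothetical identifier DataFrames.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("segment-builder-sketch").getOrCreate()

first_party = spark.createDataFrame([("id1",), ("id2",), ("id3",)], ["person_id"])
partner = spark.createDataFrame([("id2",), ("id3",), ("id4",)], ["person_id"])

combined = first_party.union(partner).distinct()  # combine: in either dataset
overlap = first_party.intersect(partner)          # overlap: in both datasets
excluded = first_party.exceptAll(partner)         # exclude: 1st-party only

print(combined.count(), overlap.count(), excluded.count())  # 4 2 1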

Posted 2 days ago

Apply

7.0 years

0 Lacs

Calcutta

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Manager

Job Description & Summary

At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC

At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities:
· Analyse current business practices, processes, and procedures, and identify future business opportunities for leveraging Microsoft Azure Data & Analytics Services.
· Provide technical leadership and thought leadership as a senior member of the Analytics Practice in areas such as data access and ingestion, data processing, data integration, data modeling, database design and implementation, data visualization, and advanced analytics.
· Engage and collaborate with customers to understand business requirements/use cases and translate them into detailed technical specifications.
· Develop best practices, including reusable code, libraries, patterns, and consumable frameworks, for cloud-based data warehousing and ETL.
· Maintain best-practice standards for the development of cloud-based data warehouse solutions, including naming standards.
· Design and implement highly performant data pipelines from multiple sources using Apache Spark and/or Azure Databricks.
· Integrate the end-to-end data pipeline to take data from source systems to target data repositories, ensuring the quality and consistency of data is always maintained.
· Work with other members of the project team to support delivery of additional project components (API interfaces).
· Evaluate the performance and applicability of multiple tools against customer requirements.
· Work within an Agile delivery / DevOps methodology to deliver proof-of-concept and production implementations in iterative sprints.
· Integrate Databricks with other technologies (ingestion tools, visualization tools).
· Proven experience working as a data engineer.
· Highly proficient in using the Spark framework (Python and/or Scala).
· Extensive knowledge of data warehousing concepts, strategies, and methodologies.
· Direct experience building data pipelines using Azure Data Factory and Apache Spark, preferably in Databricks (see the sketch below).
· Hands-on experience designing and delivering solutions using Azure, including Azure Storage, Azure SQL Data Warehouse, Azure Data Lake, Azure Cosmos DB, and Azure Stream Analytics.
· Experience in designing and hands-on development of cloud-based analytics solutions.
· Expert-level understanding of Azure Data Factory, Azure Synapse, Azure SQL, Azure Data Lake, and Azure App Service is required.
· Experience designing and building data pipelines using API ingestion and streaming ingestion methods.
· Knowledge of DevOps processes (including CI/CD) and Infrastructure as Code is essential.
· Thorough understanding of Azure Cloud Infrastructure offerings.
· Strong experience in common data warehouse modelling principles, including Kimball.
· Working knowledge of Python is desirable.
· Experience developing security models.
· Databricks & Azure Big Data Architecture certification would be a plus.
· Must be team-oriented, with strong collaboration, prioritization, and adaptability skills.

Mandatory skill sets: Azure Databricks
Preferred skill sets: Azure Databricks
Years of experience required: 7-10 years
Education qualification: BE, B.Tech, MCA, M.Tech

Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Bachelor of Technology, Bachelor of Engineering
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)

Required Skills: Databricks Platform
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Coaching and Feedback, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling {+ 32 more}

Desired Languages (if blank, desired languages not specified)
Travel Requirements:
Available for Work Visa Sponsorship?
Government Clearance Required?
Job Posting End Date:
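
As a hedged sketch of the Databricks-style pipeline work above: ingest raw files, apply a transformation, and persist to Delta. The paths and column names are invented, and the Delta format assumes the Delta Lake package is available on the cluster.

# Minimal sketch of a Databricks-style pipeline stage. In Databricks this
# would typically read from ADLS Gen2 (abfss://...); a local path keeps the
# sketch self-contained.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("adls-to-delta-sketch").getOrCreate()

raw = spark.read.option("header", True).csv("/tmp/raw/orders.csv")

cleaned = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .dropDuplicates(["order_id"])
       .filter(F.col("amount").cast("double") > 0)
)

# Delta Lake write; on a cluster without Delta installed, "parquet" works too.
cleaned.write.format("delta").mode("overwrite").save("/tmp/curated/orders")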

Posted 2 days ago

Apply

2.0 years

3 - 10 Lacs

India

Remote

Job Title: Sr. Data Engineer
Experience: 2+ years
Location: Indore (onsite)
Industry: IT
Job Type: Full time

Roles and Responsibilities
1. Design and develop scalable data pipelines and workflows for data ingestion, transformation, and integration.
2. Build and maintain data storage systems, including data warehouses, data lakes, and relational databases.
3. Ensure data accuracy, integrity, and consistency through validation and quality assurance processes.
4. Collaborate with data scientists, analysts, and business teams to understand data needs and deliver tailored solutions.
5. Optimize database performance and manage large-scale datasets for efficient processing.
6. Leverage cloud platforms (AWS, Azure, or GCP) and big data technologies (Hadoop, Spark, Kafka) for building robust data solutions.
7. Automate and monitor data workflows using orchestration frameworks such as Apache Airflow (see the sketch below).
8. Implement and enforce data governance policies to ensure compliance and data security.
9. Troubleshoot and resolve data-related issues to maintain seamless operations.
10. Stay updated on emerging tools, technologies, and trends in data engineering.

Skills and Knowledge
1. Core skills:
● Proficient in Python (libraries: Pandas, NumPy) and SQL.
● Knowledge of data modeling techniques, including entity-relationship (ER) diagrams, dimensional modeling, and data normalization.
● Familiarity with ETL processes and tools such as Azure Data Factory (ADF) and SSIS (SQL Server Integration Services).
2. Cloud expertise:
● AWS services: Glue, Redshift, Lambda, EKS, RDS, Athena.
● Azure services: Databricks, Key Vault, ADLS Gen2, ADF, Azure SQL.
● Snowflake.
3. Big data and workflow automation:
● Hands-on experience with big data technologies like Hadoop, Spark, and Kafka.
● Experience with workflow automation tools like Apache Airflow (or similar).

Qualifications and Requirements
● Education: Bachelor’s degree (or equivalent) in Computer Science, Information Technology, Engineering, or a related field.
● Experience: Freshers with a strong understanding, internships, and relevant academic projects are welcome; 2+ years of experience working with Python, SQL, and data integration or visualization tools is preferred.
● Other skills: Strong communication skills, especially the ability to explain technical concepts to non-technical stakeholders, and the ability to work in a dynamic, research-oriented team with concurrent projects.

Job Types: Full-time, Permanent
Pay: ₹300,000.00 - ₹1,000,000.00 per year
Benefits: Paid sick time, Provident Fund, Work from home
Schedule: Day shift, Monday to Friday, Weekend availability
Supplemental Pay: Performance bonus
Ability to commute/relocate: Niranjanpur, Indore, Madhya Pradesh: Reliably commute or planning to relocate before starting work (Preferred)
Experience: Data Engineer: 2 years (Preferred)
Work Location: In person
Application Deadline: 31/08/2025
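
A bare-bones Apache Airflow DAG of the kind item 7 refers to; the DAG ID, task names, and task logic are invented placeholders, and the schedule argument assumes Airflow 2.4+.

# Minimal Airflow DAG sketch: extract then transform, once per day.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull source data")  # placeholder for real ingestion logic


def transform():
    print("clean and validate")  # placeholder for real transformation logic


with DAG(
    dag_id="daily_ingest_sketch",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task  # transform runs after extract succeeds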

Posted 2 days ago

Apply

0 years

0 Lacs

India

Remote

Key Responsibilities
● Monitor production systems and job pipelines; respond promptly to alerts and anomalies
● Troubleshoot operational issues in collaboration with the development team
● Investigate incidents using logs, metrics, and observability tools (e.g., Grafana, Kibana)
● Perform recovery actions such as restarting pods, rerunning jobs, or applying known mitigations
● Operate in Kubernetes environments to inspect, debug, and manage components
● Support deployment activities through post-release validations and basic checks
● Validate data quality and flag anomalies to the relevant engineering teams
● Maintain clear documentation of incidents, actions taken, and resolution outcomes
● Communicate effectively with remote teams for operational handoffs and follow-ups

Required Qualifications
● Experience in production operations, system support, or DevOps roles
● Solid Linux skills (e.g., file system navigation, log analysis, process/network troubleshooting)
● Hands-on experience with Kubernetes and Docker in production environments
● Familiarity with observability tools (e.g., Grafana, Kibana, Prometheus)
● English proficiency for reading, writing, and asynchronous communication
● Strong execution discipline and ability to follow structured operational procedures

Preferred Qualifications
● Scripting ability (Python or Shell) for log parsing and automation (see the sketch below)
● Basic SQL skills for data verification or debugging
● Experience with Hadoop and Flink pipelines for batch and stream processing is a strong plus
● Experience with large-scale distributed data systems or job scheduling frameworks
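
A small, self-contained Python sketch of the log-parsing scripting listed under Preferred Qualifications; the log format and file path are hypothetical examples.

# Count ERROR lines per component in a log file using only the stdlib.
import re
from collections import Counter

LINE_RE = re.compile(r"ERROR\s+\[(?P<component>[\w-]+)\]")

def error_counts(path: str) -> Counter:
    counts = Counter()
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            match = LINE_RE.search(line)
            if match:
                counts[match.group("component")] += 1
    return counts

if __name__ == "__main__":
    # Matches lines like: "2025-08-01 12:00:01 ERROR [ingest-worker] job 42 failed"
    print(error_counts("/var/log/app/pipeline.log").most_common(5))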

Posted 2 days ago

Apply

10.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Description

Join us and drive the design and deployment of AI/ML frameworks revolutionizing telecom services. As a key member of our team, you will architect and build scalable, secure AI systems for service assurance, orchestration, and fulfillment, working directly with network experts to drive business impact. You will be responsible for defining architecture blueprints, selecting the right tools and platforms, and guiding cross-functional teams to deliver scalable AI systems. This role offers significant growth potential, mentorship opportunities, and the chance to shape the future of telecoms using the latest AI technologies and platforms.

Key Responsibilities: How You Will Contribute And What You Will Learn
● Design end-to-end AI architecture tailored to telecom services business functions (e.g., service assurance, orchestration, and fulfillment).
● Define data strategy and AI workflows, including the inventory model, ETL, model training, deployment, and monitoring.
● Evaluate and select AI platforms, tools, and frameworks suited for telecom-scale workloads for the development and testing of inventory services solutions.
● Work closely with telecom network experts and architects to align AI initiatives with business goals.
● Ensure scalability, performance, and security in AI systems across hybrid/multi-cloud environments.
● Mentor AI developers.

Key Skills And Experience

You have:
● 10+ years' experience in AI/ML design and deployment, with a graduate or equivalent degree.
● Practical experience with AI/ML techniques and scalable architecture design for telecom operations, inventory management, and ETL.
● Exposure to data platforms (Kafka, Spark, Hadoop), model orchestration (Kubeflow, MLflow; see the sketch below), and cloud-native deployment (AWS SageMaker, Azure ML).
● Proficiency in programming (Python, Java) and DevOps/MLOps best practices.

It would be nice if you had:
● Experience with any of the LLM models (Llama family) and LLM agent frameworks such as LangChain, CrewAI, or AutoGen.
● Familiarity with telecom protocols, OSS/BSS platforms, 5G architecture, and NFV/SDN concepts.
● Excellent communication and stakeholder management skills.

About Us

Come create the technology that helps the world act together. Nokia is committed to innovation and technology leadership across mobile, fixed and cloud networks. Your career here will have a positive impact on people’s lives and will help us build the capabilities needed for a more productive, sustainable, and inclusive world. We challenge ourselves to create an inclusive way of working where we are open to new ideas, empowered to take risks and fearless to bring our authentic selves to work.

What we offer

Nokia offers continuous learning opportunities, well-being programs to support you mentally and physically, opportunities to join and get supported by employee resource groups, mentoring programs and highly diverse teams with an inclusive culture where people thrive and are empowered. Nokia is committed to inclusion and is an equal opportunity employer.

Nokia has received the following recognitions for its commitment to inclusion & equality:
● One of the World’s Most Ethical Companies by Ethisphere
● Gender-Equality Index by Bloomberg
● Workplace Pride Global Benchmark

At Nokia, we act inclusively and respect the uniqueness of people. Nokia’s employment decisions are made regardless of race, color, national or ethnic origin, religion, gender, sexual orientation, gender identity or expression, age, marital status, disability, protected veteran status or other characteristics protected by law.
We are committed to a culture of inclusion built upon our core value of respect. Join us and be part of a company where you will feel included and empowered to succeed.

About The Team

As Nokia's growth engine, we create value for communication service providers and enterprise customers by leading the transition to cloud-native software and as-a-service delivery models. Our inclusive team of dreamers, doers and disruptors push the limits from impossible to possible.
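
As a hedged illustration of the train-and-monitor workflow the role describes, here is a minimal MLflow tracking sketch; the experiment name, model, and data are invented and unrelated to Nokia systems.

# Train a small classifier and log its parameters and accuracy with MLflow,
# the kind of tracking that feeds model monitoring and review.
import mlflow
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

mlflow.set_experiment("service-assurance-sketch")
with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=50, random_state=0)
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_param("n_estimators", 50)
    mlflow.log_metric("accuracy", acc)  # tracked for later monitoring/drift review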

Posted 2 days ago

Apply

10.0 - 12.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

TCS presents an excellent opportunity for a Data Architect.

Job Description:
Skills: AWS, Glue, Redshift, PySpark
Location: Pune / Kolkata
Experience: 10 to 12 years

● Strong hands-on experience in Python programming and PySpark.
● Experience using AWS services (Redshift, Glue, EMR, S3 & Lambda).
● Experience working with Apache Spark and the Hadoop ecosystem.
● Experience in writing and optimizing SQL for data manipulation.
● Good exposure to scheduling tools; Airflow is preferable.
● Must have: data warehouse experience with AWS Redshift or Hive.
● Experience in implementing security measures for data protection.
● Expertise in building and testing complex data pipelines for ETL processes, batch and near-real-time (see the sketch below).
● Readable documentation of all the components being developed.
● Knowledge of database technologies for OLTP and OLAP workloads.
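
A hedged PySpark sketch of the batch ETL pattern implied by the AWS stack above; the bucket names, columns, and aggregation are placeholders, not a real TCS pipeline.

# Read raw JSON from S3, aggregate, and write partitioned Parquet that a
# Glue crawler or Redshift Spectrum could then pick up.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("glue-style-batch-sketch").getOrCreate()

orders = spark.read.json("s3://example-raw-bucket/orders/2025/08/")

daily = (
    orders.withColumn("order_date", F.to_date("order_ts"))
          .groupBy("order_date", "region")
          .agg(F.sum("amount").alias("revenue"))
)

# Partitioned output keeps downstream scans cheap.
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated-bucket/daily_revenue/"
)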

Posted 2 days ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Skills: Python, PySpark, ETL, Data Pipeline, Big Data, AWS, GCP, Azure, Data Warehousing, Spark, Hadoop

A day in the life of an Infoscion

As part of the Infosys consulting team, your primary role would be to get to the heart of customer issues, diagnose problem areas, design innovative solutions, and facilitate deployment, resulting in client delight. You will develop a proposal by owning parts of the proposal document and by giving inputs in solution design based on areas of expertise. You will plan the activities of configuration, configure the product as per the design, conduct conference room pilots, and assist in resolving any queries related to requirements and solution design. You will conduct solution/product demonstrations and POC/Proof of Technology workshops, and prepare effort estimates that suit the customer's budgetary requirements and are in line with the organization’s financial guidelines. Actively lead small projects and contribute to unit-level and organizational initiatives with an objective of providing high-quality, value-adding solutions to customers. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

● Ability to develop value-creating strategies and models that enable clients to innovate, drive growth, and increase their business profitability
● Good knowledge of software configuration management systems
● Awareness of the latest technologies and industry trends
● Logical thinking and problem-solving skills, along with an ability to collaborate
● Understanding of the financial processes for various types of projects and the various pricing models available
● Ability to assess current processes, identify improvement areas, and suggest technology solutions
● Knowledge of one or two industry domains
● Client interfacing skills
● Project and team management

Posted 2 days ago

Apply

4.0 years

15 - 30 Lacs

Gurugram, Haryana, India

Remote

Experience: 4.00+ years
Salary: INR 1500000-3000000 / year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by NuStudio.AI)
(*Note: This is a requirement for one of Uplers' clients - an AI-first, API-powered Data Platform)

What do you need for this opportunity?

Must-have skills: Databricks, dbt, Delta Lake, Spark, Unity Catalog, AI, Airflow, Cloud Functions, Cloud Storage, Databricks Workflows, Dataflow, ETL/ELT, GCP (BigQuery, Pub/Sub, Dataflow, Cloud Storage, Functions), PySpark, AWS, Hadoop

An AI-first, API-powered Data Platform is looking for:

We’re scaling our platform and seeking Data Engineers who are passionate about building high-performance data pipelines, products, and analytical pipelines in the cloud to power real-time AI systems. As a Data Engineer, you’ll:
● Build scalable ETL/ELT and streaming data pipelines using GCP (BigQuery, Pub/Sub, PySpark, Dataflow, Cloud Storage, Functions) (see the sketch below)
● Orchestrate data workflows with Airflow, Cloud Functions, or Databricks Workflows
● Work across batch and real-time architectures that feed LLMs and AI/ML systems
● Own feature engineering pipelines that power production models and intelligent agents
● Collaborate with platform and ML teams to design observable, lineage-aware, and cost-aware performant solutions
● Bonus: Experience with AWS, Databricks, or Hadoop (Delta Lake, Spark, dbt, Unity Catalog), or interest in building on them

Why Us?
● Building production-grade data & AI solutions
● Your pipelines directly impact mission-critical and client-facing interactions
● Lean team, no red tape — build, own, ship
● Remote-first with an async culture that respects your time
● Competitive comp and benefits

Our Stack: Python, SQL, GCP/Azure/AWS, Spark, Kafka, Airflow, Databricks, dbt, Kubernetes, LangChain, LLMs

How to apply for this opportunity?
Step 1: Click on Apply! and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
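
The streaming bullet above pairs Pub/Sub with BigQuery; below is a minimal, hedged Python sketch of that ingest pattern using Google's official client libraries. The project, subscription, and table IDs are placeholders, the target table is assumed to have a single STRING "payload" column, and production code would batch inserts and handle retries.

# Pull messages from a Pub/Sub subscription and stream rows into BigQuery.
from concurrent.futures import TimeoutError

from google.cloud import bigquery, pubsub_v1

PROJECT = "example-project"            # placeholder project ID
SUBSCRIPTION = f"projects/{PROJECT}/subscriptions/events-sub"
TABLE = f"{PROJECT}.analytics.events"  # placeholder BigQuery table

bq = bigquery.Client(project=PROJECT)
subscriber = pubsub_v1.SubscriberClient()

def handle(message: pubsub_v1.subscriber.message.Message) -> None:
    # Insert one event row; ack only if the streaming insert succeeded.
    errors = bq.insert_rows_json(TABLE, [{"payload": message.data.decode("utf-8")}])
    if not errors:
        message.ack()

streaming_pull = subscriber.subscribe(SUBSCRIPTION, callback=handle)
try:
    streaming_pull.result(timeout=60)  # serve callbacks for one minute
except TimeoutError:
    streaming_pull.cancel()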

Posted 2 days ago

Apply

4.0 years

15 - 30 Lacs

Cuttack, Odisha, India

Remote

Experience : 4.00 + years Salary : INR 1500000-3000000 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: NuStudio.AI) (*Note: This is a requirement for one of Uplers' client - AI-first, API-powered Data Platform) What do you need for this opportunity? Must have skills required: Databricks, dbt, Delta Lake, Spark, Unity catalog, AI, Airflow, Cloud Function, Cloud Storage, Databricks Workflows, Dataflow, ETL/ELT, Functions), GCP (BigQuery, Pub/Sub, PySpark, AWS, Hadoop AI-first, API-powered Data Platform is Looking for: We’re scaling our platform and seeking Data Engineers (who are passionate about building high-performance data pipelines, products, and analytical pipelines in the cloud to power real-time AI systems. As a Data Engineer, you’ll: Build scalable ETL/ELT and streaming data pipelines using GCP (BigQuery, Pub/Sub, PySpark, Dataflow, Cloud Storage, Functions) Orchestrate data workflows with Airflow, Cloud Functions, or Databricks Workflows Work across batch + real-time architectures that feed LLMs and AI/ML systems Own feature engineering pipelines that power production models and intelligent agents Collaborate with platform and ML teams to design observability, lineage, and cost-aware performant solutions Bonus: Experience with AWS, Databricks, Hadoop (Delta Lake, Spark, dbt, Unity Catalog) or interest in building on it Why Us? Building production-grade data & AI solutions Your pipelines directly impact mission-critical and client-facing interactions Lean team, no red tape — build, own, ship Remote-first with async culture that respects your time Competitive comp and benefits Our Stack: Python, SQL, GCP/Azure/AWS, Spark, Kafka, Airflow, Databricks, Spark, dbt, Kubernetes, LangChain, LLMs How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 2 days ago

Apply

4.0 years

15 - 30 Lacs

Bhubaneswar, Odisha, India

Remote

Experience : 4.00 + years Salary : INR 1500000-3000000 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: NuStudio.AI) (*Note: This is a requirement for one of Uplers' client - AI-first, API-powered Data Platform) What do you need for this opportunity? Must have skills required: Databricks, dbt, Delta Lake, Spark, Unity catalog, AI, Airflow, Cloud Function, Cloud Storage, Databricks Workflows, Dataflow, ETL/ELT, Functions), GCP (BigQuery, Pub/Sub, PySpark, AWS, Hadoop AI-first, API-powered Data Platform is Looking for: We’re scaling our platform and seeking Data Engineers (who are passionate about building high-performance data pipelines, products, and analytical pipelines in the cloud to power real-time AI systems. As a Data Engineer, you’ll: Build scalable ETL/ELT and streaming data pipelines using GCP (BigQuery, Pub/Sub, PySpark, Dataflow, Cloud Storage, Functions) Orchestrate data workflows with Airflow, Cloud Functions, or Databricks Workflows Work across batch + real-time architectures that feed LLMs and AI/ML systems Own feature engineering pipelines that power production models and intelligent agents Collaborate with platform and ML teams to design observability, lineage, and cost-aware performant solutions Bonus: Experience with AWS, Databricks, Hadoop (Delta Lake, Spark, dbt, Unity Catalog) or interest in building on it Why Us? Building production-grade data & AI solutions Your pipelines directly impact mission-critical and client-facing interactions Lean team, no red tape — build, own, ship Remote-first with async culture that respects your time Competitive comp and benefits Our Stack: Python, SQL, GCP/Azure/AWS, Spark, Kafka, Airflow, Databricks, Spark, dbt, Kubernetes, LangChain, LLMs How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 2 days ago

Apply

4.0 years

15 - 30 Lacs

Kolkata, West Bengal, India

Remote

Experience : 4.00 + years Salary : INR 1500000-3000000 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: NuStudio.AI) (*Note: This is a requirement for one of Uplers' client - AI-first, API-powered Data Platform) What do you need for this opportunity? Must have skills required: Databricks, dbt, Delta Lake, Spark, Unity catalog, AI, Airflow, Cloud Function, Cloud Storage, Databricks Workflows, Dataflow, ETL/ELT, Functions), GCP (BigQuery, Pub/Sub, PySpark, AWS, Hadoop AI-first, API-powered Data Platform is Looking for: We’re scaling our platform and seeking Data Engineers (who are passionate about building high-performance data pipelines, products, and analytical pipelines in the cloud to power real-time AI systems. As a Data Engineer, you’ll: Build scalable ETL/ELT and streaming data pipelines using GCP (BigQuery, Pub/Sub, PySpark, Dataflow, Cloud Storage, Functions) Orchestrate data workflows with Airflow, Cloud Functions, or Databricks Workflows Work across batch + real-time architectures that feed LLMs and AI/ML systems Own feature engineering pipelines that power production models and intelligent agents Collaborate with platform and ML teams to design observability, lineage, and cost-aware performant solutions Bonus: Experience with AWS, Databricks, Hadoop (Delta Lake, Spark, dbt, Unity Catalog) or interest in building on it Why Us? Building production-grade data & AI solutions Your pipelines directly impact mission-critical and client-facing interactions Lean team, no red tape — build, own, ship Remote-first with async culture that respects your time Competitive comp and benefits Our Stack: Python, SQL, GCP/Azure/AWS, Spark, Kafka, Airflow, Databricks, Spark, dbt, Kubernetes, LangChain, LLMs How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 2 days ago

Apply

4.0 years

15 - 30 Lacs

Guwahati, Assam, India

Remote

Experience : 4.00 + years Salary : INR 1500000-3000000 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: NuStudio.AI) (*Note: This is a requirement for one of Uplers' client - AI-first, API-powered Data Platform) What do you need for this opportunity? Must have skills required: Databricks, dbt, Delta Lake, Spark, Unity catalog, AI, Airflow, Cloud Function, Cloud Storage, Databricks Workflows, Dataflow, ETL/ELT, Functions), GCP (BigQuery, Pub/Sub, PySpark, AWS, Hadoop AI-first, API-powered Data Platform is Looking for: We’re scaling our platform and seeking Data Engineers (who are passionate about building high-performance data pipelines, products, and analytical pipelines in the cloud to power real-time AI systems. As a Data Engineer, you’ll: Build scalable ETL/ELT and streaming data pipelines using GCP (BigQuery, Pub/Sub, PySpark, Dataflow, Cloud Storage, Functions) Orchestrate data workflows with Airflow, Cloud Functions, or Databricks Workflows Work across batch + real-time architectures that feed LLMs and AI/ML systems Own feature engineering pipelines that power production models and intelligent agents Collaborate with platform and ML teams to design observability, lineage, and cost-aware performant solutions Bonus: Experience with AWS, Databricks, Hadoop (Delta Lake, Spark, dbt, Unity Catalog) or interest in building on it Why Us? Building production-grade data & AI solutions Your pipelines directly impact mission-critical and client-facing interactions Lean team, no red tape — build, own, ship Remote-first with async culture that respects your time Competitive comp and benefits Our Stack: Python, SQL, GCP/Azure/AWS, Spark, Kafka, Airflow, Databricks, Spark, dbt, Kubernetes, LangChain, LLMs How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 2 days ago

Apply

4.0 years

15 - 30 Lacs

Raipur, Chhattisgarh, India

Remote

Experience : 4.00 + years Salary : INR 1500000-3000000 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: NuStudio.AI) (*Note: This is a requirement for one of Uplers' client - AI-first, API-powered Data Platform) What do you need for this opportunity? Must have skills required: Databricks, dbt, Delta Lake, Spark, Unity catalog, AI, Airflow, Cloud Function, Cloud Storage, Databricks Workflows, Dataflow, ETL/ELT, Functions), GCP (BigQuery, Pub/Sub, PySpark, AWS, Hadoop AI-first, API-powered Data Platform is Looking for: We’re scaling our platform and seeking Data Engineers (who are passionate about building high-performance data pipelines, products, and analytical pipelines in the cloud to power real-time AI systems. As a Data Engineer, you’ll: Build scalable ETL/ELT and streaming data pipelines using GCP (BigQuery, Pub/Sub, PySpark, Dataflow, Cloud Storage, Functions) Orchestrate data workflows with Airflow, Cloud Functions, or Databricks Workflows Work across batch + real-time architectures that feed LLMs and AI/ML systems Own feature engineering pipelines that power production models and intelligent agents Collaborate with platform and ML teams to design observability, lineage, and cost-aware performant solutions Bonus: Experience with AWS, Databricks, Hadoop (Delta Lake, Spark, dbt, Unity Catalog) or interest in building on it Why Us? Building production-grade data & AI solutions Your pipelines directly impact mission-critical and client-facing interactions Lean team, no red tape — build, own, ship Remote-first with async culture that respects your time Competitive comp and benefits Our Stack: Python, SQL, GCP/Azure/AWS, Spark, Kafka, Airflow, Databricks, Spark, dbt, Kubernetes, LangChain, LLMs How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 2 days ago

Apply

4.0 years

15 - 30 Lacs

Jamshedpur, Jharkhand, India

Remote

Experience : 4.00 + years Salary : INR 1500000-3000000 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: NuStudio.AI) (*Note: This is a requirement for one of Uplers' client - AI-first, API-powered Data Platform) What do you need for this opportunity? Must have skills required: Databricks, dbt, Delta Lake, Spark, Unity catalog, AI, Airflow, Cloud Function, Cloud Storage, Databricks Workflows, Dataflow, ETL/ELT, Functions), GCP (BigQuery, Pub/Sub, PySpark, AWS, Hadoop AI-first, API-powered Data Platform is Looking for: We’re scaling our platform and seeking Data Engineers (who are passionate about building high-performance data pipelines, products, and analytical pipelines in the cloud to power real-time AI systems. As a Data Engineer, you’ll: Build scalable ETL/ELT and streaming data pipelines using GCP (BigQuery, Pub/Sub, PySpark, Dataflow, Cloud Storage, Functions) Orchestrate data workflows with Airflow, Cloud Functions, or Databricks Workflows Work across batch + real-time architectures that feed LLMs and AI/ML systems Own feature engineering pipelines that power production models and intelligent agents Collaborate with platform and ML teams to design observability, lineage, and cost-aware performant solutions Bonus: Experience with AWS, Databricks, Hadoop (Delta Lake, Spark, dbt, Unity Catalog) or interest in building on it Why Us? Building production-grade data & AI solutions Your pipelines directly impact mission-critical and client-facing interactions Lean team, no red tape — build, own, ship Remote-first with async culture that respects your time Competitive comp and benefits Our Stack: Python, SQL, GCP/Azure/AWS, Spark, Kafka, Airflow, Databricks, Spark, dbt, Kubernetes, LangChain, LLMs How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 2 days ago

Apply

4.0 years

15 - 30 Lacs

Ranchi, Jharkhand, India

Remote

Same Data Engineer role and description as the listing above; only the location differs.

Posted 2 days ago

Apply

4.0 years

15 - 30 Lacs

Amritsar, Punjab, India

Remote

Same Data Engineer role and description as the listing above; only the location differs.

Posted 2 days ago

Apply

4.0 years

15 - 30 Lacs

Surat, Gujarat, India

Remote

Same Data Engineer role and description as the listing above; only the location differs.

Posted 2 days ago

Apply

4.0 years

15 - 30 Lacs

Ahmedabad, Gujarat, India

Remote

Same Data Engineer role and description as the listing above; only the location differs.

Posted 2 days ago

Apply

4.0 years

15 - 30 Lacs

Jaipur, Rajasthan, India

Remote

Same Data Engineer role and description as the listing above; only the location differs.

Posted 2 days ago

Apply

4.0 years

15 - 30 Lacs

Greater Lucknow Area

Remote

Same Data Engineer role and description as the listing above; only the location differs.

Posted 2 days ago

Apply

4.0 years

15 - 30 Lacs

Thane, Maharashtra, India

Remote

Same Data Engineer role and description as the listing above; only the location differs.

Posted 2 days ago

Apply

4.0 years

15 - 30 Lacs

Nashik, Maharashtra, India

Remote

Same Data Engineer role and description as the listing above; only the location differs.

Posted 2 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Download Now

Featured Companies