5.0 years
0 Lacs
India
On-site
Job Title: Senior Machine Learning Engineer (Azure ML + Databricks + MLOps)
Experience: 5+ years in AI/ML Engineering
Employment Type: Full-Time

Job Summary: We are looking for a Senior Machine Learning Engineer with strong expertise in Azure Machine Learning and Databricks to lead the development and deployment of scalable AI/ML solutions. You'll work with cross-functional teams to design, build, and optimize machine learning pipelines that power critical business functions.

Key Responsibilities:
- Design, build, and deploy scalable machine learning models using Azure Machine Learning (Azure ML) and Databricks.
- Develop and maintain end-to-end ML pipelines for training, validation, and deployment.
- Collaborate with data engineers and architects to structure data pipelines on Azure Data Lake, Synapse, or Delta Lake.
- Integrate models into production environments using Azure ML endpoints, MLflow, or REST APIs.
- Monitor and maintain deployed models, ensuring performance and reliability over time.
- Use Databricks notebooks and PySpark to process and analyze large-scale datasets.
- Apply MLOps principles using tools like Azure DevOps, CI/CD pipelines, and MLflow for versioning and reproducibility.
- Ensure compliance with data governance, security, and responsible AI practices.

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field.
- 5+ years of hands-on experience in machine learning or data science roles.
- Strong proficiency in Python and experience with libraries like Scikit-learn, XGBoost, PyTorch, or TensorFlow.
- Deep experience with Azure Machine Learning services (e.g., workspaces, compute clusters, pipelines).
- Proficiency in Databricks, including Spark (PySpark), notebooks, and Delta Lake.
- Strong understanding of MLOps, experiment tracking, model management, and deployment automation.
- Experience with data engineering tools (e.g., Azure Data Factory, Azure Data Lake, Azure Synapse).

Preferred Skills:
- Azure certifications (e.g., Azure AI Engineer Associate, Azure Data Scientist Associate).
- Familiarity with Kubernetes, Docker, and container-based deployments.
- Experience working with structured and unstructured data (NLP, time series, image data, etc.).
- Knowledge of cost optimization, security best practices, and scalability on Azure.
- Experience with A/B testing, monitoring model drift, and real-time inference.

Job Types: Full-time, Permanent
Benefits: Flexible schedule, Paid sick time, Paid time off, Provident Fund
Work Location: In person
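For readers unfamiliar with the experiment-tracking work this posting describes, the sketch below shows the general shape of logging and registering a model with MLflow, a tool named in the listing. It is an illustrative example only, not part of the posting; the experiment and registry names are hypothetical, and on Azure ML or Databricks the tracking URI would point at the workspace.

```python
# Illustrative sketch: train a model, log params/metrics, and register it with MLflow.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("demand-forecast-poc")  # hypothetical experiment name

with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=200, random_state=42)
    model.fit(X_train, y_train)

    accuracy = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("accuracy", accuracy)

    # Log the fitted model; registered_model_name is a hypothetical registry entry.
    mlflow.sklearn.log_model(
        model,
        artifact_path="model",
        registered_model_name="demand-forecast-rf",
    )
```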
Posted 18 hours ago
10.0 years
5 - 10 Lacs
Hyderābād
Remote
Join Amgen's Mission to Serve Patients

If you feel like you're part of something bigger, it's because you are. At Amgen, our shared mission—to serve patients—drives all that we do. It is key to our becoming one of the world's leading biotechnology companies. We are global collaborators who achieve together—researching, manufacturing, and delivering ever-better products that reach over 10 million patients worldwide. It's time for a career you can be proud of.

Specialist IS Software Engineer

Live

What you will do

Let's do this. Let's change the world. In this vital role, we are looking for a creative and technically skilled Specialist IS Software Engineer - Data Management Lead. This role will be responsible for leading data management initiatives and collaborating across business, IT, and data governance teams. The ideal candidate will have extensive experience in configuring and implementing Collibra products, an established track record of building high-quality data governance and data quality solutions, and strong hands-on design and engineering skills. The candidate must also possess strong analytical and communication skills. As a Collibra Lead Developer, you will play a key role in the design, implementation, and management of our Collibra Data Governance and Data Quality platform. You will work closely with stakeholders across the organization to ensure the successful deployment of data governance processes, solutions, and best practices.

Responsibilities:
- Build and integrate information systems to meet the company's needs.
- Design and implement data governance frameworks, policies, and procedures within Collibra.
- Configure, implement, and maintain Collibra Data Quality Center to support enterprise-wide data quality initiatives.
- Lead the implementation and configuration of the Collibra Data Governance platform.
- Develop, customize, and maintain Collibra workflows, dashboards, and business rules.
- Collaborate with data stewards, data owners, and business analysts to understand data governance requirements and translate them into technical solutions.
- Provide technical expertise and support to business users and IT teams on Collibra Data Quality functionalities.
- Collaborate with data engineers and architects to implement data quality solutions within data pipelines and data warehouses.
- Participate in data quality improvement projects, identifying root causes of data issues and implementing corrective actions.
- Integrate Collibra with other enterprise data management systems (e.g., data catalogs, BI tools, data lakes).
- Provide technical leadership and mentoring to junior developers and team members.
- Troubleshoot and resolve issues with the Collibra environment and data governance processes.
- Assist with training and enablement of business users on Collibra platform features and functionalities.
- Stay up to date with new releases, features, and best practices in Collibra and data governance.

Basic Qualifications:
- Master's degree in computer science & engineering preferred with 10+ years of software development experience OR Bachelor's degree in computer science & engineering preferred with 10+ years of software development experience.
- Proven experience (7+ years) in data governance or data management roles.
- Strong experience with the Collibra Data Governance platform, including design, configuration, and development.
- Hands-on experience with Collibra workflows, rules engine, and data stewardship processes.
- Experience with integrations between Collibra and other data management tools.
- Proficiency in SQL and scripting languages (e.g., Python, JavaScript).
- Strong problem-solving and troubleshooting skills.
- Excellent communication and collaboration skills to work with both technical and non-technical stakeholders.
- Self-starter who works effectively with cross-functional teams.
- Excellent problem-solving skills and attention to detail.
- Domain knowledge of the life sciences industry.
- Recent experience working in a Scaled Agile environment with Agile tools, e.g., Jira, Confluence.

Preferred Qualifications:
- Deep expertise in the Collibra platform, including Data Governance and Data Quality.
- In-depth knowledge of data governance principles, data stewardship processes, data quality concepts, and data profiling and validation methodologies, techniques, and best practices.
- Hands-on experience implementing and configuring Collibra Data Governance and Collibra Data Quality, including developing metadata ingestion, data quality rules, scorecards, and workflows.
- Strong experience configuring and connecting to various data sources for metadata, data lineage, data profiling, and data quality.
- Experience integrating data management capabilities (MDM, Reference Data).
- Good experience with Azure cloud services, Azure data technologies, and Databricks.
- Solid understanding of relational database concepts and ETL processes.
- Proficient use of tools, techniques, and programming languages (Python, PySpark, SQL, etc.) for data profiling and validation.
- Data modeling with tools like Erwin and knowledge of insurance industry standards (e.g., ACORD) and insurance data (policy, claims, underwriting, etc.).
- Familiarity with data visualization tools like Power BI.

Good to Have Skills:
- Willingness to work on AI applications.
- Experience with popular large language models.

Soft Skills:
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to work effectively with global, remote teams.
- High degree of initiative and self-motivation.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Strong presentation and public speaking skills.

Thrive

What you can expect of us

As we work to develop treatments that take care of others, we also work to care for our teammates' professional and personal growth and well-being. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.

Apply now for a career that defies imagination. In our quest to serve patients above all else, Amgen is the first to imagine, and the last to doubt. Join us. careers.amgen.com

Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
Posted 18 hours ago
5.0 - 8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Cortex is urgently hiring for the role: "Data Engineer"

Experience: 5 to 8 years
Location: Bangalore, Noida, and Hyderabad (Hybrid; 2 days per week in office required)
Notice Period: Immediate to 10 days only
Key skills: Candidates must have experience in Python, Kafka Stream, PySpark, and Azure Databricks

Role Overview
We are looking for a highly skilled Data Engineer with expertise in Kafka, Python, and Azure Databricks (preferred) to drive our healthcare data engineering projects. The ideal candidate will have deep experience in real-time data streaming, cloud-based data platforms, and large-scale data processing. This role requires strong technical leadership, problem-solving abilities, and the ability to collaborate with cross-functional teams.

Key Responsibilities
- Lead the design, development, and implementation of real-time data pipelines using Kafka, Python, and Azure Databricks.
- Architect scalable data streaming and processing solutions to support healthcare data workflows.
- Develop, optimize, and maintain ETL/ELT pipelines for structured and unstructured healthcare data.
- Ensure data integrity, security, and compliance with healthcare regulations (HIPAA, HITRUST, etc.).
- Collaborate with data engineers, analysts, and business stakeholders to understand requirements and translate them into technical solutions.
- Troubleshoot and optimize Kafka streaming applications, Python scripts, and Databricks workflows.
- Mentor junior engineers, conduct code reviews, and ensure best practices in data engineering.
- Stay updated with the latest cloud technologies, big data frameworks, and industry trends.

If you are interested, kindly send your resume to us by clicking "Easy Apply".

This job is posted by Aishwarya.K, Business HR - Day Recruitment.
Cortex Consultants LLC (US) | Cortex Consulting Pvt Ltd (India) | Tcell (Canada)
US | India | Canada
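As context for the real-time pipeline work described above, here is a minimal PySpark Structured Streaming sketch that reads a Kafka topic and appends parsed records to a Delta table. It is illustrative only and assumes a Databricks-style runtime with the Kafka connector available; the broker, topic, schema, and paths are all hypothetical.

```python
# Illustrative sketch: Kafka -> parse JSON -> append to a Delta table with PySpark.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("claims-stream-demo").getOrCreate()

# Hypothetical message schema.
schema = StructType([
    StructField("claim_id", StringType()),
    StructField("member_id", StringType()),
    StructField("event_time", TimestampType()),
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "claims-events")               # hypothetical topic
    .option("startingOffsets", "latest")
    .load()
)

parsed = (
    raw.select(F.from_json(F.col("value").cast("string"), schema).alias("payload"))
    .select("payload.*")
)

query = (
    parsed.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/claims")  # hypothetical path
    .outputMode("append")
    .start("/mnt/delta/claims")                               # hypothetical path
)
query.awaitTermination()
```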
Posted 18 hours ago
4.0 years
3 - 6 Lacs
Gurgaon
On-site
The Deputy Manager is primarily responsible for using data extraction tools to perform in-depth analysis of programs and opportunities in the collections business. The Deputy Manager will make recommendations to improve business profitability or operational processes based on their analysis and design strategies to implement those recommendations. The role also owns syndication of findings and manages implementation with support.

Responsibilities
- Coach new team members on technical skills and business knowledge. (5%)
- Develop and implement analytics best practices and knowledge management practices. (5%)
- Make recommendations to improve business profitability or processes; estimate opportunity size and develop the business case; manage implementation of ideas and project plans with minimal support. (30%)
- Present and share data with other team members and with leadership independently. (10%)
- Understand end-to-end business processes; independently extract, prepare, and analyze gigabytes of data to support business initiatives (e.g., profitability, performance, variance analysis); develop solutions with minimal support; develop techniques and computer algorithms for data analysis to make it meaningful and actionable. (50%)

Minimum Requirements
Education: Bachelor's
Field of Study: Strong and consistent academic record in an engineering, quantitative, or statistical field.
Experience: 4-7 years of experience in analytics or consulting, including 2+ years in Financial Services.
Language Required: English

Preferred Qualifications
Education: Bachelor's
Field of Study: Strong and consistent academic record in an engineering, quantitative, or statistical field.
Experience: 4-7 years of experience in analytics or consulting. Expert knowledge of Azure / Python (incl. Pandas, PySpark) / SQL. Demonstrated experience in unstructured problem solving and strong analytical aptitude. Advanced use of MS Office (Excel, PowerPoint). Strong communication (written and verbal), storyboarding, and presentation skills. Project management. Ability to multitask.

What We Offer
We understand the important balance between work and life, fun and professionalism, and corporation versus community. We strive to support your career aspirations and provide the benefits you need to live a more fulfilling life. Our compensation and benefits programs were created with an 'Employee-First Approach' focused on supporting, developing, and recognizing YOU. We offer a wide array of wellness and mental health initiatives, support volunteerism and environmental efforts, encourage employee education through leadership training, skill-building, and tuition reimbursements, and always strive to provide promotion opportunities from within. All these things are just a small way to show our employees that we recognize their value, we understand what is important to them, and we reward their contributions.

About Us
Headquartered in the United States, Encore Capital Group (Encore) is a publicly traded international specialty finance company operating in various countries around the globe. Through our businesses - such as Midland Credit Management and Cabot Credit Management - we help consumers to restore their financial health as we further our Mission of creating pathways to economic freedom. Our commitment to building a positive workplace culture and a best-in-class employee experience has earned us accolades including Great Place to Work® certifications in many geographies where we operate.

If you have a passion for helping others and thrive at a company that values innovation, inclusion, and excellence, then Encore Capital Group is the right place for you. Encore Capital Group and all of its subsidiaries are proud to be an equal opportunity employer and are committed to fostering an inclusive and welcoming environment where everyone feels they belong. We encourage candidates from all backgrounds to apply. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender perception or identity, national origin, age, marital status, protected veteran status, or disability status, or any other status protected under applicable law. If you wish to discuss potential accommodations related to applying for employment, please contact careers.india@mcmcg.com
Posted 19 hours ago
2.0 years
6 - 8 Lacs
Guwahati
On-site
Full Time | Guwahati | CTC: Best in Industry

Job Description
We are seeking a talented and experienced Data Engineer / Python Backend Engineer to join our dynamic team. The ideal candidate should have a strong background in Python development, with proficiency in backend frameworks such as FastAPI and Django. Additionally, they should possess solid expertise in data engineering concepts and tools, including Pandas, NumPy, and DataFrame APIs. Experience with data warehousing, data modeling, and scaling techniques is highly desirable.

Roles and Responsibilities
- Design, develop, and maintain backend services and APIs using FastAPI and Django.
- Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
- Implement data engineering pipelines for processing, transforming, and analyzing large datasets.
- Optimize data storage and retrieval processes for performance and scalability.
- Ensure data quality and integrity through rigorous testing and validation procedures.
- Stay up to date with emerging technologies and best practices in data engineering and backend development.

Required Skills
- Bachelor's degree in computer science, engineering, or a related field.
- 2+ years of experience in Python development, with a focus on backend frameworks like FastAPI and Django.
- Expertise in object-oriented design and database design is a must.
- Proficiency in database technologies such as PostgreSQL and MySQL.
- Hands-on experience writing SQL queries and knowledge of query performance optimization.
- Strong understanding of data engineering frameworks, including Pandas, NumPy, and Polars (optional).
- Familiarity with data warehousing concepts and methodologies.
- Solid grasp of scaling techniques and optimization strategies for handling large datasets.

Nice to Have
- Familiarity with PySpark and Kafka.
- Experience with containerization tools like Docker.
- Familiarity with cloud platforms such as AWS or Azure.
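To illustrate the backend-plus-data mix this posting asks for, the snippet below sketches a minimal FastAPI endpoint that serves an aggregation computed with Pandas. It is a toy example: the in-memory DataFrame stands in for a real PostgreSQL/MySQL source, and the route and column names are hypothetical.

```python
# Illustrative sketch: a FastAPI endpoint returning a Pandas aggregation.
import pandas as pd
from fastapi import FastAPI, HTTPException

app = FastAPI()

# In a real service this would come from PostgreSQL/MySQL; a small in-memory
# DataFrame keeps the sketch self-contained.
orders = pd.DataFrame(
    {"region": ["north", "south", "north"], "amount": [120.0, 80.0, 200.0]}
)

@app.get("/sales/{region}")
def sales_by_region(region: str) -> dict:
    subset = orders[orders["region"] == region]
    if subset.empty:
        raise HTTPException(status_code=404, detail="unknown region")
    return {"region": region, "total_amount": float(subset["amount"].sum())}

# Run locally with: uvicorn app:app --reload   (assuming this file is app.py)
```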
Posted 19 hours ago
3.0 - 5.0 years
3 - 8 Lacs
Chennai
On-site
3 - 5 Years | 5 Openings | Bangalore, Chennai, Kochi, Trivandrum

Role description

Role Proficiency: Independently develops error-free code with high-quality validation of applications, guides other developers, and assists Lead 1 – Software Engineering.

Outcomes:
- Understand and provide input to application/feature/component designs, developing them in accordance with user stories/requirements.
- Code, debug, test, document, and communicate product/component/features at development stages.
- Select appropriate technical options for development, such as reusing, improving, or reconfiguring existing components.
- Optimise efficiency, cost, and quality by identifying opportunities for automation/process improvements and agile delivery models.
- Mentor Developer 1 – Software Engineering and Developer 2 – Software Engineering to perform effectively in their roles.
- Identify problem patterns and improve the technical design of the application/system.
- Proactively identify issues/defects/flaws in module/requirement implementation.
- Assist Lead 1 – Software Engineering on technical design; review activities and begin demonstrating Lead 1 capabilities in making technical decisions.

Measures of Outcomes:
- Adherence to engineering process and standards (coding standards)
- Adherence to schedule/timelines
- Adherence to SLAs where applicable
- Number of defects post delivery
- Number of non-compliance issues
- Reduction of reoccurrence of known defects
- Quick turnaround of production bugs
- Meeting the defined productivity standards for the project
- Number of reusable components created
- Completion of applicable technical/domain certifications
- Completion of all mandatory training requirements

Outputs Expected:
- Code: Develop code independently for the above.
- Configure: Implement and monitor the configuration process.
- Test: Create and review unit test cases, scenarios, and execution.
- Domain relevance: Develop features and components with a good understanding of the business problem being addressed for the client.
- Manage Project: Manage module-level activities.
- Manage Defects: Perform defect RCA and mitigation.
- Estimate: Estimate time, effort, and resource dependence for one's own work and others' work, including modules.
- Document: Create documentation for own work as well as perform peer review of documentation of others' work.
- Manage knowledge: Consume and contribute to project-related documents, SharePoint libraries, and client universities.
- Status Reporting: Report status of tasks assigned; comply with project-related reporting standards/process.
- Release: Execute the release process.
- Design: LLD for multiple components.
- Mentoring: Mentor juniors on the team; set FAST goals and provide feedback on mentees' FAST goals.

Skill Examples:
- Explain and communicate the design/development to the customer.
- Perform and evaluate test results against product specifications.
- Develop user interfaces, business software components, and embedded software components.
- Manage and guarantee high levels of cohesion and quality.
- Use data models.
- Estimate effort and resources required for developing/debugging features/components.
- Perform and evaluate tests in the customer or target environment.
- Team player.
- Good written and verbal communication abilities.
- Proactively ask for help and offer help.

Knowledge Examples:
- Appropriate software programs/modules
- Technical designing
- Programming languages
- DBMS
- Operating systems and software platforms
- Integrated development environments (IDE)
- Agile methods
- Knowledge of the customer domain and sub-domain where the problem is solved

Additional Comments: Design, develop, and optimize large-scale data pipelines using Azure Databricks (Apache Spark). Build and maintain ETL/ELT workflows and batch/streaming data pipelines. Collaborate with data analysts, scientists, and business teams to support their data needs. Write efficient PySpark or Scala code for data transformations and performance tuning. Implement CI/CD pipelines for data workflows using Azure DevOps or similar tools. Monitor and troubleshoot data pipelines and jobs in production. Ensure data quality, governance, and security as per organizational standards.

Skills: Databricks, ADB, ETL

About UST
UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world's best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients' organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.
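As a rough illustration of the batch ETL work listed under Additional Comments, the sketch below reads raw files with PySpark, applies a few transformations, and writes a partitioned Delta table. It is not part of the posting: paths and column names are hypothetical, and Delta support is assumed (standard on Azure Databricks).

```python
# Illustrative sketch: raw CSV -> cleaned, partitioned Delta table with PySpark.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-daily-etl").getOrCreate()

# Hypothetical landing path for raw files.
raw = spark.read.option("header", True).csv("/mnt/raw/orders/")

cleaned = (
    raw.dropDuplicates(["order_id"])
    .withColumn("amount", F.col("amount").cast("double"))
    .withColumn("order_date", F.to_date("order_ts"))
    .filter(F.col("amount") > 0)
)

# Hypothetical curated path; partitioning by date keeps downstream reads cheap.
(
    cleaned.write.format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .save("/mnt/curated/orders")
)
```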
Posted 19 hours ago
3.0 years
11 - 24 Lacs
Chennai
On-site
Job Description

Data Engineer, Chennai

We're seeking a highly motivated Data Engineer to join our agile, cross-functional team and drive end-to-end data pipeline development in a cloud-native, big data ecosystem. You'll leverage ETL/ELT best practices and data lakehouse paradigms to deliver scalable solutions. Proficiency in SQL, Python, Spark, and modern data orchestration tools (e.g. Airflow) is essential, along with experience in CI/CD, DevOps, and containerized environments like Docker and Kubernetes. This is your opportunity to make an impact in a fast-paced, data-driven culture.

Responsibilities
- Responsible for data pipeline development and maintenance.
- Contribute to development, maintenance, testing strategy, design discussions, and operations of the team.
- Participate in all aspects of agile software development including design, implementation, and deployment.
- Responsible for the end-to-end lifecycle of new product features/components.
- Ensure application performance, uptime, and scale, maintaining high standards of code quality and thoughtful application design.
- Work with a small, cross-functional team on products and features to drive growth.
- Learn new tools, languages, workflows, and philosophies to grow.
- Research and suggest new technologies for boosting the product.
- Have an impact on product development by making important technical decisions, influencing the system architecture, development practices, and more.

Qualifications
- Excellent team player with strong communication skills.
- B.Sc. in Computer Sciences or similar.
- 3-5 years of experience in data pipeline development.
- 3-5 years of experience in PySpark / Databricks.
- 3-5 years of experience in Python / Airflow.
- Knowledge of OOP and design patterns.
- Knowledge of server-side technologies such as Java and Spring.
- Experience with Docker containers, Kubernetes, and cloud environments.
- Expertise in testing methodologies (unit testing, TDD, mocking).
- Fluent with large-scale SQL databases.
- Good problem-solving and analysis abilities.

Requirements - Advantage
- Experience with Azure cloud services.
- Experience with Agile development methodologies.
- Experience with Git.

Additional Information

Our Benefits
- Flexible working environment
- Volunteer time off
- LinkedIn Learning
- Employee-Assistance-Program (EAP)

About NIQ
NIQ is the world's leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world's population. For more information, visit NIQ.com

Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook

Our commitment to Diversity, Equity, and Inclusion
NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us.

We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
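For orientation on the orchestration stack this listing names (Airflow alongside PySpark/Databricks), here is a minimal Airflow DAG sketch: a Python extract task followed by a spark-submit transform. It is illustrative only and assumes a recent Airflow 2.x install; the DAG id, schedule, and script path are hypothetical.

```python
# Illustrative sketch: a two-task Airflow DAG (extract, then a spark-submit transform).
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator

def extract_partition(ds: str, **_) -> None:
    # Placeholder extract step; in practice this might pull from an API or database.
    print(f"extracting partition for {ds}")

with DAG(
    dag_id="daily_sales_pipeline",          # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_partition)
    transform = BashOperator(
        task_id="transform",
        # Hypothetical job script; {{ ds }} is Airflow's execution-date template.
        bash_command="spark-submit /opt/jobs/transform_sales.py --date {{ ds }}",
    )
    extract >> transform
```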
Posted 19 hours ago
4.0 - 5.0 years
5 - 9 Lacs
Noida
On-site
Job Information
Work Experience: 4-5 years
Industry: IT Services
Job Type: Full Time
Location: Noida, India

Job Overview:
We are seeking a skilled Data Engineer with 4-5 years of experience to design, build, and maintain scalable data pipelines and analytics solutions within the AWS cloud environment. The ideal candidate will leverage AWS Glue, PySpark, and QuickSight to deliver robust data integration, transformation, and visualization capabilities. This role is critical in supporting business intelligence, analytics, and reporting needs across the organization.

Key Responsibilities:
- Design, develop, and maintain data pipelines using AWS Glue, PySpark, and related AWS services to extract, transform, and load (ETL) data from diverse sources.
- Build and optimize data warehouse/data lake infrastructure on AWS, ensuring efficient data storage, processing, and retrieval.
- Develop and manage ETL processes to source data from various systems, including databases, APIs, and file storage, and create unified data models for analytics and reporting.
- Implement and maintain business intelligence dashboards using Amazon QuickSight, enabling stakeholders to derive actionable insights.
- Collaborate with cross-functional teams (business analysts, data scientists, product managers) to understand requirements and deliver scalable data solutions.
- Ensure data quality, integrity, and security throughout the data lifecycle, implementing best practices for governance and compliance.
- Support self-service analytics by empowering internal users to access and analyze data through QuickSight and other reporting tools.
- Troubleshoot and resolve data pipeline issues, optimizing performance and reliability as needed.

Required Skills & Qualifications:
- Proficiency in AWS cloud services: AWS Glue, QuickSight, S3, Lambda, Athena, Redshift, EMR, and related technologies.
- Strong experience with PySpark for large-scale data processing and transformation.
- Expertise in SQL and data modeling for relational and non-relational databases.
- Experience building and optimizing ETL pipelines and data integration workflows.
- Familiarity with business intelligence and visualization tools, especially Amazon QuickSight.
- Knowledge of data governance, security, and compliance best practices.
- Strong programming skills in Python; experience with automation and scripting.
- Ability to work collaboratively in agile environments and manage multiple priorities effectively.
- Excellent problem-solving and communication skills.

Preferred Qualifications:
- AWS certification (e.g., AWS Certified Data Analytics – Specialty, AWS Certified Developer).

Good to Have Skills:
- Understanding of machine learning, deep learning, and Generative AI concepts: Regression, Classification, Predictive modeling, Clustering, Deep Learning.

Interview Process
- Internal Assessment
- 3 Technical Rounds
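To give a concrete sense of the AWS Glue work described above, the skeleton below reads a table from the Glue Data Catalog, filters it with PySpark, and writes Parquet back to S3 for Athena/QuickSight to query. It is only a sketch and runs solely inside the Glue job runtime; the database, table, and bucket names are hypothetical.

```python
# Illustrative AWS Glue (PySpark) job skeleton; hypothetical names throughout.
import sys

from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a catalogued source table (hypothetical database/table names).
orders = glue_context.create_dynamic_frame.from_catalog(
    database="sales_raw", table_name="orders"
)

# Transform in plain Spark terms, then convert back to a DynamicFrame.
df = orders.toDF().dropDuplicates(["order_id"]).filter("amount > 0")
curated = DynamicFrame.fromDF(df, glue_context, "curated_orders")

# Write Parquet to S3 so Athena/QuickSight can query it (hypothetical bucket).
glue_context.write_dynamic_frame.from_options(
    frame=curated,
    connection_type="s3",
    connection_options={"path": "s3://example-curated-bucket/orders/"},
    format="parquet",
)

job.commit()
```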
Posted 19 hours ago
5.0 - 8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Description

Essential Functions
- Independently develop and deploy ETL jobs in a fast-paced, object-oriented environment.
- Understand and receive business requirements from clients via a business analyst, architect, or development lead to successfully develop applications, functions, and processes.
- Conduct, and be accountable for, unit testing on development assignments.
- Must be detail-oriented with the ability to follow through on issues.
- Must be able to work on and manage multiple tasks in addition to working with other areas within the department.
- Utilize numerous sources to obtain and build development skills.
- Enhance existing applications to meet the needs of ongoing efforts within software platforms.
- Record and track time worked on projects and assignments.
- Develop a general understanding of TSYS/Global Payments, software platforms, and the credit card industry.
- Participate in team, department, and division meetings as required.
- Perform other duties as assigned.

Skills/Technical Knowledge
- 5 to 8 years of strong development background in ETL tools like GCP Dataflow, PySpark, SSIS, Snowflake, dbt.
- Experience in Google Cloud Platform: GCP Pub/Sub, Datastore, BigQuery, App Engine, Compute Engine, Cloud SQL, Memorystore, Redis, etc.
- Experience in AWS/Snowflake/Azure is preferred.
- Proficient in Java, Python, PySpark.
- Proficient in GCP BigQuery, Composer, Airflow, Pub/Sub, Cloud Storage.
- Experience with build tools (e.g., Maven, Gradle).
- Proficient in code repo management, branching strategy, and version control using Git, VSTS, TeamForge, etc.
- Experience developing applications using Eclipse IDE or IntelliJ.
- Excellent knowledge of relational databases, SQL, and JDBC drivers.
- Experience with API gateways: DataPower, APIM, Apigee, etc.
- Strong analytical, planning, and organizational skills with an ability to manage competing demands.
- Excellent communication skills, verbal and written; should be able to collaborate across business teams (stakeholders) and other technology groups as needed.
- Experience in NoSQL databases is preferred.
- Exposure to the payments industry is a plus.

Minimum Qualification
Minimum 5 to 8 years of relevant experience. Software Engineering, Payment Information Systems, or any technical degree; additional experience in lieu of degree will be considered.
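As a small, hedged example of the GCP side of this role, the snippet below runs a parameterised BigQuery query with the official Python client and pulls the result into a DataFrame. The project, dataset, and table names are hypothetical, and pandas plus db-dtypes are assumed to be installed for to_dataframe().

```python
# Illustrative sketch: a parameterised BigQuery query via the google-cloud-bigquery client.
import datetime

from google.cloud import bigquery

client = bigquery.Client(project="example-analytics-project")  # hypothetical project

query = """
    SELECT merchant_id, SUM(amount) AS total_amount
    FROM `example-analytics-project.payments.transactions`
    WHERE settlement_date = @settlement_date
    GROUP BY merchant_id
"""
job_config = bigquery.QueryJobConfig(
    query_parameters=[
        bigquery.ScalarQueryParameter(
            "settlement_date", "DATE", datetime.date(2024, 1, 31)
        )
    ]
)

# Run the query and materialise the result as a pandas DataFrame.
df = client.query(query, job_config=job_config).to_dataframe()
print(df.head())
```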
Posted 19 hours ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Summary
We are looking for a skilled AWS Data Engineer with strong experience in building and managing cloud-based ETL pipelines using AWS Glue, Python/PySpark, and Athena, along with data warehousing expertise in Amazon Redshift. The ideal candidate will be responsible for designing, developing, and maintaining scalable data solutions in a cloud-native environment.

Key Responsibilities
- Design and implement ETL workflows using AWS Glue, Python, and PySpark.
- Develop and optimize queries using Amazon Athena and Redshift.
- Build scalable data pipelines to ingest, transform, and load data from various sources.
- Ensure data quality, integrity, and security across AWS services.
- Collaborate with data analysts, data scientists, and business stakeholders to deliver data solutions.
- Monitor and troubleshoot ETL jobs and cloud infrastructure performance.
- Automate data workflows and integrate with CI/CD pipelines.

Required Skills & Qualifications
- Hands-on experience with AWS Glue, Athena, and Redshift.
- Strong programming skills in Python and PySpark.
- Experience with ETL design, implementation, and optimization.
- Familiarity with S3, Lambda, CloudWatch, and other AWS services.
- Understanding of data warehousing concepts and performance tuning in Redshift.
- Experience with schema design, partitioning, and query optimization in Athena.
- Proficiency in version control (Git) and agile development practices.
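To illustrate the Athena querying this posting mentions, here is a sketch that submits a query with boto3, polls for completion, and reads the result set. It is illustrative only; the database, table, region, and S3 output location are hypothetical.

```python
# Illustrative sketch: run an Athena query from Python with boto3 and poll for the result.
import time

import boto3

athena = boto3.client("athena", region_name="us-east-1")  # hypothetical region

response = athena.start_query_execution(
    QueryString=(
        "SELECT event_date, COUNT(*) AS events "
        "FROM analytics.web_events "              # hypothetical database.table
        "WHERE event_date >= DATE '2024-01-01' "
        "GROUP BY event_date"
    ),
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = response["QueryExecutionId"]

# Poll until the query finishes (simplified; production code would add timeouts).
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    print(f"fetched {len(rows) - 1} data rows")  # first row is the header
```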
Posted 19 hours ago
0 years
0 Lacs
Bhubaneswar, Odisha, India
On-site
Azure Data Engineer (ADF, ADB, PySpark, Synapse)
Must-Have: PySpark, SQL, Azure services (ADF, Databricks, Synapse)
Good-to-Have: Python, Azure Key Vault
Experience: 10 to 12 years
Location: Bhubaneswar
Posted 20 hours ago
10.0 - 12.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
TCS presents an excellent opportunity for a Data Architect.

Job Description:
Skills: AWS, Glue, Redshift, PySpark
Location: Pune / Kolkata
Experience: 10 to 12 Years

- Strong hands-on experience in Python programming and PySpark.
- Experience using AWS services (Redshift, Glue, EMR, S3 & Lambda).
- Experience working with Apache Spark and the Hadoop ecosystem.
- Experience in writing and optimizing SQL for data manipulation.
- Good exposure to scheduling tools; Airflow is preferable.
- Must have: data warehouse experience with AWS Redshift or Hive.
- Experience in implementing security measures for data protection.
- Expertise in building and testing complex data pipelines for ETL processes (batch and near real time).
- Readable documentation of all the components being developed.
- Knowledge of database technologies for OLTP and OLAP workloads.
Posted 20 hours ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Skills: Python, PySpark, ETL, Data Pipelines, Big Data, AWS, GCP, Azure, Data Warehousing, Spark, Hadoop

A day in the life of an Infoscion: As part of the Infosys consulting team, your primary role would be to get to the heart of customer issues, diagnose problem areas, design innovative solutions, and facilitate deployment resulting in client delight. You will develop a proposal by owning parts of the proposal document and by giving inputs on solution design based on areas of expertise. You will plan the activities of configuration, configure the product as per the design, conduct conference room pilots, and assist in resolving any queries related to requirements and solution design. You will conduct solution/product demonstrations and POC/Proof of Technology workshops, and prepare effort estimates that suit the customer's budgetary requirements and are in line with the organization's financial guidelines. You will actively lead small projects and contribute to unit-level and organizational initiatives with the objective of providing high-quality, value-adding solutions to customers. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

- Ability to develop value-creating strategies and models that enable clients to innovate, drive growth, and increase their business profitability
- Good knowledge of software configuration management systems
- Awareness of the latest technologies and industry trends
- Logical thinking and problem-solving skills, along with an ability to collaborate
- Understanding of the financial processes for various types of projects and the various pricing models available
- Ability to assess current processes, identify improvement areas, and suggest technology solutions
- Knowledge of one or two industry domains
- Client interfacing skills
- Project and team management
Posted 20 hours ago
5.0 - 8.0 years
0 Lacs
India
Remote
Mandatory skills: Azure Databricks, Data Factory, PySpark, SQL
Experience: 5 to 8 years
Location: Remote

Key Responsibilities:
- Design and build data pipelines and ETL/ELT workflows using Azure Databricks and Azure Data Factory
- Ingest, clean, transform, and process large datasets from diverse sources (structured and unstructured)
- Implement Delta Lake solutions and optimize Spark jobs for performance and reliability
- Integrate Azure Databricks with other Azure services including Data Lake Storage, Synapse Analytics, and Event Hubs
Posted 20 hours ago
4.0 years
15 - 30 Lacs
Gurugram, Haryana, India
Remote
Experience: 4.00+ years
Salary: INR 1500000-3000000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full Time Permanent position (Payroll and Compliance to be managed by: NuStudio.AI)
(*Note: This is a requirement for one of Uplers' clients - an AI-first, API-powered Data Platform)

Must-have skills: Databricks, dbt, Delta Lake, Spark, Unity Catalog, AI, Airflow, Cloud Functions, Cloud Storage, Databricks Workflows, Dataflow, ETL/ELT, GCP (BigQuery, Pub/Sub, PySpark, Functions), AWS, Hadoop

The AI-first, API-powered Data Platform is looking for: We're scaling our platform and seeking Data Engineers who are passionate about building high-performance data pipelines, products, and analytical pipelines in the cloud to power real-time AI systems.

As a Data Engineer, you'll:
- Build scalable ETL/ELT and streaming data pipelines using GCP (BigQuery, Pub/Sub, PySpark, Dataflow, Cloud Storage, Functions)
- Orchestrate data workflows with Airflow, Cloud Functions, or Databricks Workflows
- Work across batch + real-time architectures that feed LLMs and AI/ML systems
- Own feature engineering pipelines that power production models and intelligent agents
- Collaborate with platform and ML teams to design observability, lineage, and cost-aware, performant solutions
- Bonus: Experience with AWS, Databricks, Hadoop (Delta Lake, Spark, dbt, Unity Catalog) or interest in building on it

Why Us?
- Building production-grade data & AI solutions
- Your pipelines directly impact mission-critical and client-facing interactions
- Lean team, no red tape: build, own, ship
- Remote-first with async culture that respects your time
- Competitive comp and benefits

Our Stack: Python, SQL, GCP/Azure/AWS, Spark, Kafka, Airflow, Databricks, dbt, Kubernetes, LangChain, LLMs

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances to get shortlisted and meet the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 20 hours ago
4.0 years
15 - 30 Lacs
Cuttack, Odisha, India
Remote
Experience : 4.00 + years Salary : INR 1500000-3000000 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: NuStudio.AI) (*Note: This is a requirement for one of Uplers' client - AI-first, API-powered Data Platform) What do you need for this opportunity? Must have skills required: Databricks, dbt, Delta Lake, Spark, Unity catalog, AI, Airflow, Cloud Function, Cloud Storage, Databricks Workflows, Dataflow, ETL/ELT, Functions), GCP (BigQuery, Pub/Sub, PySpark, AWS, Hadoop AI-first, API-powered Data Platform is Looking for: We’re scaling our platform and seeking Data Engineers (who are passionate about building high-performance data pipelines, products, and analytical pipelines in the cloud to power real-time AI systems. As a Data Engineer, you’ll: Build scalable ETL/ELT and streaming data pipelines using GCP (BigQuery, Pub/Sub, PySpark, Dataflow, Cloud Storage, Functions) Orchestrate data workflows with Airflow, Cloud Functions, or Databricks Workflows Work across batch + real-time architectures that feed LLMs and AI/ML systems Own feature engineering pipelines that power production models and intelligent agents Collaborate with platform and ML teams to design observability, lineage, and cost-aware performant solutions Bonus: Experience with AWS, Databricks, Hadoop (Delta Lake, Spark, dbt, Unity Catalog) or interest in building on it Why Us? Building production-grade data & AI solutions Your pipelines directly impact mission-critical and client-facing interactions Lean team, no red tape — build, own, ship Remote-first with async culture that respects your time Competitive comp and benefits Our Stack: Python, SQL, GCP/Azure/AWS, Spark, Kafka, Airflow, Databricks, Spark, dbt, Kubernetes, LangChain, LLMs How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 21 hours ago
4.0 years
15 - 30 Lacs
Bhubaneswar, Odisha, India
Remote
Experience : 4.00 + years Salary : INR 1500000-3000000 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: NuStudio.AI) (*Note: This is a requirement for one of Uplers' client - AI-first, API-powered Data Platform) What do you need for this opportunity? Must have skills required: Databricks, dbt, Delta Lake, Spark, Unity catalog, AI, Airflow, Cloud Function, Cloud Storage, Databricks Workflows, Dataflow, ETL/ELT, Functions), GCP (BigQuery, Pub/Sub, PySpark, AWS, Hadoop AI-first, API-powered Data Platform is Looking for: We’re scaling our platform and seeking Data Engineers (who are passionate about building high-performance data pipelines, products, and analytical pipelines in the cloud to power real-time AI systems. As a Data Engineer, you’ll: Build scalable ETL/ELT and streaming data pipelines using GCP (BigQuery, Pub/Sub, PySpark, Dataflow, Cloud Storage, Functions) Orchestrate data workflows with Airflow, Cloud Functions, or Databricks Workflows Work across batch + real-time architectures that feed LLMs and AI/ML systems Own feature engineering pipelines that power production models and intelligent agents Collaborate with platform and ML teams to design observability, lineage, and cost-aware performant solutions Bonus: Experience with AWS, Databricks, Hadoop (Delta Lake, Spark, dbt, Unity Catalog) or interest in building on it Why Us? Building production-grade data & AI solutions Your pipelines directly impact mission-critical and client-facing interactions Lean team, no red tape — build, own, ship Remote-first with async culture that respects your time Competitive comp and benefits Our Stack: Python, SQL, GCP/Azure/AWS, Spark, Kafka, Airflow, Databricks, Spark, dbt, Kubernetes, LangChain, LLMs How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 21 hours ago
4.0 years
15 - 30 Lacs
Kolkata, West Bengal, India
Remote
Experience : 4.00 + years Salary : INR 1500000-3000000 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: NuStudio.AI) (*Note: This is a requirement for one of Uplers' client - AI-first, API-powered Data Platform) What do you need for this opportunity? Must have skills required: Databricks, dbt, Delta Lake, Spark, Unity catalog, AI, Airflow, Cloud Function, Cloud Storage, Databricks Workflows, Dataflow, ETL/ELT, Functions), GCP (BigQuery, Pub/Sub, PySpark, AWS, Hadoop AI-first, API-powered Data Platform is Looking for: We’re scaling our platform and seeking Data Engineers (who are passionate about building high-performance data pipelines, products, and analytical pipelines in the cloud to power real-time AI systems. As a Data Engineer, you’ll: Build scalable ETL/ELT and streaming data pipelines using GCP (BigQuery, Pub/Sub, PySpark, Dataflow, Cloud Storage, Functions) Orchestrate data workflows with Airflow, Cloud Functions, or Databricks Workflows Work across batch + real-time architectures that feed LLMs and AI/ML systems Own feature engineering pipelines that power production models and intelligent agents Collaborate with platform and ML teams to design observability, lineage, and cost-aware performant solutions Bonus: Experience with AWS, Databricks, Hadoop (Delta Lake, Spark, dbt, Unity Catalog) or interest in building on it Why Us? Building production-grade data & AI solutions Your pipelines directly impact mission-critical and client-facing interactions Lean team, no red tape — build, own, ship Remote-first with async culture that respects your time Competitive comp and benefits Our Stack: Python, SQL, GCP/Azure/AWS, Spark, Kafka, Airflow, Databricks, Spark, dbt, Kubernetes, LangChain, LLMs How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 21 hours ago
4.0 years
15 - 30 Lacs
Guwahati, Assam, India
Remote
Experience : 4.00 + years Salary : INR 1500000-3000000 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: NuStudio.AI) (*Note: This is a requirement for one of Uplers' client - AI-first, API-powered Data Platform) What do you need for this opportunity? Must have skills required: Databricks, dbt, Delta Lake, Spark, Unity catalog, AI, Airflow, Cloud Function, Cloud Storage, Databricks Workflows, Dataflow, ETL/ELT, Functions), GCP (BigQuery, Pub/Sub, PySpark, AWS, Hadoop AI-first, API-powered Data Platform is Looking for: We’re scaling our platform and seeking Data Engineers (who are passionate about building high-performance data pipelines, products, and analytical pipelines in the cloud to power real-time AI systems. As a Data Engineer, you’ll: Build scalable ETL/ELT and streaming data pipelines using GCP (BigQuery, Pub/Sub, PySpark, Dataflow, Cloud Storage, Functions) Orchestrate data workflows with Airflow, Cloud Functions, or Databricks Workflows Work across batch + real-time architectures that feed LLMs and AI/ML systems Own feature engineering pipelines that power production models and intelligent agents Collaborate with platform and ML teams to design observability, lineage, and cost-aware performant solutions Bonus: Experience with AWS, Databricks, Hadoop (Delta Lake, Spark, dbt, Unity Catalog) or interest in building on it Why Us? Building production-grade data & AI solutions Your pipelines directly impact mission-critical and client-facing interactions Lean team, no red tape — build, own, ship Remote-first with async culture that respects your time Competitive comp and benefits Our Stack: Python, SQL, GCP/Azure/AWS, Spark, Kafka, Airflow, Databricks, Spark, dbt, Kubernetes, LangChain, LLMs How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 21 hours ago
4.0 years
15 - 30 Lacs
Raipur, Chhattisgarh, India
Remote
Experience : 4.00 + years Salary : INR 1500000-3000000 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: NuStudio.AI) (*Note: This is a requirement for one of Uplers' client - AI-first, API-powered Data Platform) What do you need for this opportunity? Must have skills required: Databricks, dbt, Delta Lake, Spark, Unity catalog, AI, Airflow, Cloud Function, Cloud Storage, Databricks Workflows, Dataflow, ETL/ELT, Functions), GCP (BigQuery, Pub/Sub, PySpark, AWS, Hadoop AI-first, API-powered Data Platform is Looking for: We’re scaling our platform and seeking Data Engineers (who are passionate about building high-performance data pipelines, products, and analytical pipelines in the cloud to power real-time AI systems. As a Data Engineer, you’ll: Build scalable ETL/ELT and streaming data pipelines using GCP (BigQuery, Pub/Sub, PySpark, Dataflow, Cloud Storage, Functions) Orchestrate data workflows with Airflow, Cloud Functions, or Databricks Workflows Work across batch + real-time architectures that feed LLMs and AI/ML systems Own feature engineering pipelines that power production models and intelligent agents Collaborate with platform and ML teams to design observability, lineage, and cost-aware performant solutions Bonus: Experience with AWS, Databricks, Hadoop (Delta Lake, Spark, dbt, Unity Catalog) or interest in building on it Why Us? Building production-grade data & AI solutions Your pipelines directly impact mission-critical and client-facing interactions Lean team, no red tape — build, own, ship Remote-first with async culture that respects your time Competitive comp and benefits Our Stack: Python, SQL, GCP/Azure/AWS, Spark, Kafka, Airflow, Databricks, Spark, dbt, Kubernetes, LangChain, LLMs How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 21 hours ago
4.0 years
15 - 30 Lacs
Jamshedpur, Jharkhand, India
Remote
Experience : 4.00 + years Salary : INR 1500000-3000000 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: NuStudio.AI) (*Note: This is a requirement for one of Uplers' client - AI-first, API-powered Data Platform) What do you need for this opportunity? Must have skills required: Databricks, dbt, Delta Lake, Spark, Unity catalog, AI, Airflow, Cloud Function, Cloud Storage, Databricks Workflows, Dataflow, ETL/ELT, Functions), GCP (BigQuery, Pub/Sub, PySpark, AWS, Hadoop AI-first, API-powered Data Platform is Looking for: We’re scaling our platform and seeking Data Engineers (who are passionate about building high-performance data pipelines, products, and analytical pipelines in the cloud to power real-time AI systems. As a Data Engineer, you’ll: Build scalable ETL/ELT and streaming data pipelines using GCP (BigQuery, Pub/Sub, PySpark, Dataflow, Cloud Storage, Functions) Orchestrate data workflows with Airflow, Cloud Functions, or Databricks Workflows Work across batch + real-time architectures that feed LLMs and AI/ML systems Own feature engineering pipelines that power production models and intelligent agents Collaborate with platform and ML teams to design observability, lineage, and cost-aware performant solutions Bonus: Experience with AWS, Databricks, Hadoop (Delta Lake, Spark, dbt, Unity Catalog) or interest in building on it Why Us? Building production-grade data & AI solutions Your pipelines directly impact mission-critical and client-facing interactions Lean team, no red tape — build, own, ship Remote-first with async culture that respects your time Competitive comp and benefits Our Stack: Python, SQL, GCP/Azure/AWS, Spark, Kafka, Airflow, Databricks, Spark, dbt, Kubernetes, LangChain, LLMs How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 21 hours ago
4.0 years
15 - 30 Lacs
Ranchi, Jharkhand, India
Remote
Posted 21 hours ago
4.0 years
15 - 30 Lacs
Amritsar, Punjab, India
Remote
Posted 21 hours ago
15.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Looking for a Product Engineering Leader with experience building scalable B2B/B2E products and the following experience/skills:
Experience developing data-driven, workflow-based products spanning multiple clouds (AWS/Azure/Google).
Has led engineering teams in building enterprise-grade products that are secure and scale on demand.
Able to conceptualize products, architect and design them, and take them to customers in a short span of time.
A passion for building highly scalable and performant products.
Applies creative thinking to technical solutions that further business goals and align with product NFR goals such as performance, reliability, scalability, usability, security, flexibility, and cost.
Conceptualizes and presents the vision and value of proposed architectures and solutions to a wide range of audiences, in alignment with business priorities and objectives.
Able to hold technology-centric discussions with customers, understand their requirements, manage expectations, and provide solutions for their business needs.
Collaborates efficiently with key partners, including product managers and owners and business and engineering teams, to identify and define solutions for complex business and technical requirements.
Any experience in the Life Sciences, Commercial, or Incentive Compensation domain is an added plus.
Experience building data-centric products that handle large data sets.
Behavioral Skills:
Product Mindset - Experience in agile-methodology-based product development; able to define incremental development paths for functionality to achieve the future vision.
Task Management - Team management experience; able to plan tasks and discuss and work on priorities.
Communication - Conveys ideas and information clearly and accurately, whether in writing or verbally.
Education & Experience:
15+ years of experience working in IT
7+ years of experience in product development and core engineering
Bachelor's or master's degree in computer science from a Tier 1 or Tier 2 college
Technology Exposures:
React JS, Python, PySpark, Snowflake, Postgres, AWS/Azure/Google Cloud, Docker, EKS. Exposure to AI/GenAI technologies is an added plus.
Apart from the above, candidates with strong engineering experience in Java/JEE/.NET technologies will also be considered, provided they have a good product engineering background.
Posted 21 hours ago
4.0 years
15 - 30 Lacs
Surat, Gujarat, India
Remote
Posted 21 hours ago
PySpark, the Python API for Apache Spark, is a powerful data processing framework that is in high demand in the Indian job market. With the growing need for big data processing and analysis, companies are actively seeking professionals with PySpark skills to join their teams. If you are a job seeker looking to excel in big data and analytics, exploring PySpark jobs in India could be a great career move.
Here are 5 major cities in India where companies are actively hiring for PySpark roles:
1. Bangalore
2. Pune
3. Hyderabad
4. Mumbai
5. Delhi
The estimated salary range for PySpark professionals in India varies based on experience levels. Entry-level positions can expect to earn around INR 6-8 lakhs per annum, while experienced professionals can earn upwards of INR 15 lakhs per annum.
In the field of PySpark, a typical career progression may look like this:
1. Junior Developer
2. Data Engineer
3. Senior Developer
4. Tech Lead
5. Data Architect
In addition to PySpark, professionals in this field are often expected to have or develop skills in:
- Python programming
- Apache Spark
- Big data technologies (Hadoop, Hive, etc.)
- SQL
- Data visualization tools (Tableau, Power BI)
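Since hands-on familiarity with the DataFrame API and Spark SQL comes up in most screenings, here is a small, self-contained sketch of the kind of exercise worth practising. The data and names are invented purely for illustration.

```python
# A small PySpark example of the kind often discussed in interviews:
# aggregating a DataFrame and expressing the same logic in Spark SQL.
# The sample data is made up for illustration only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("interview-demo").getOrCreate()

sales = spark.createDataFrame(
    [("Bangalore", "laptop", 2), ("Pune", "laptop", 1), ("Bangalore", "phone", 5)],
    ["city", "product", "units"],
)

# DataFrame API: total units sold per city
sales.groupBy("city").agg(F.sum("units").alias("total_units")).show()

# Equivalent Spark SQL over a temporary view
sales.createOrReplaceTempView("sales")
spark.sql("SELECT city, SUM(units) AS total_units FROM sales GROUP BY city").show()
```

Being able to explain when you would prefer the DataFrame API over SQL (and vice versa) is as valuable in interviews as writing the code itself.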
Here are 25 interview questions you may encounter when applying for PySpark roles:
As you explore PySpark jobs in India, remember to prepare thoroughly for interviews and showcase your expertise confidently. With the right skills and knowledge, you can excel in this field and advance your career in the world of big data and analytics. Good luck!