
16305 Kafka Jobs - Page 29

JobPe aggregates listings so they are easy to find in one place; you apply directly on the original job portal.

2.0 years

3 - 10 Lacs

India

Remote

Job Title - Sr. Data Engineer
Experience - 2+ years
Location - Indore (onsite)
Industry - IT
Job Type - Full time

Roles and Responsibilities:
1. Design and develop scalable data pipelines and workflows for data ingestion, transformation, and integration.
2. Build and maintain data storage systems, including data warehouses, data lakes, and relational databases.
3. Ensure data accuracy, integrity, and consistency through validation and quality assurance processes.
4. Collaborate with data scientists, analysts, and business teams to understand data needs and deliver tailored solutions.
5. Optimize database performance and manage large-scale datasets for efficient processing.
6. Leverage cloud platforms (AWS, Azure, or GCP) and big data technologies (Hadoop, Spark, Kafka) to build robust data solutions.
7. Automate and monitor data workflows using orchestration frameworks such as Apache Airflow (see the sketch below).
8. Implement and enforce data governance policies to ensure compliance and data security.
9. Troubleshoot and resolve data-related issues to maintain seamless operations.
10. Stay updated on emerging tools, technologies, and trends in data engineering.

Skills and Knowledge:
1. Core Skills:
● Proficient in Python (libraries: Pandas, NumPy) and SQL.
● Knowledge of data modeling techniques, including:
○ Entity-Relationship (ER) Diagrams
○ Dimensional Modeling
○ Data Normalization
● Familiarity with ETL processes and tools such as:
○ Azure Data Factory (ADF)
○ SSIS (SQL Server Integration Services)
2. Cloud Expertise:
● AWS Services: Glue, Redshift, Lambda, EKS, RDS, Athena
● Azure Services: Databricks, Key Vault, ADLS Gen2, ADF, Azure SQL
● Snowflake
3. Big Data and Workflow Automation:
● Hands-on experience with big data technologies such as Hadoop, Spark, and Kafka.
● Experience with workflow automation tools such as Apache Airflow (or similar).

Qualifications and Requirements:
● Education: Bachelor's degree (or equivalent) in Computer Science, Information Technology, Engineering, or a related field.
● Experience: Freshers with a strong understanding, internships, and relevant academic projects are welcome; 2+ years of experience working with Python, SQL, and data integration or visualization tools is preferred.
● Other Skills: Strong communication skills, especially the ability to explain technical concepts to non-technical stakeholders; ability to work in a dynamic, research-oriented team with concurrent projects.

Job Types: Full-time, Permanent
Pay: ₹300,000.00 - ₹1,000,000.00 per year
Benefits: Paid sick time, Provident Fund, Work from home
Schedule: Day shift, Monday to Friday, Weekend availability
Supplemental Pay: Performance bonus
Ability to commute/relocate: Niranjanpur, Indore, Madhya Pradesh: reliably commute or plan to relocate before starting work (Preferred)
Experience: Data Engineer: 2 years (Preferred)
Work Location: In person
Application Deadline: 31/08/2025
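As a concrete illustration of responsibility 7 above, here is a minimal Apache Airflow DAG sketch. It is illustrative only: the DAG id, task ids, and the extract/transform/load callables are hypothetical placeholders, and Airflow 2.x is assumed.

```python
# Minimal, illustrative Airflow DAG: a daily extract -> transform -> load chain.
# The dag_id, task ids, and callables below are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull raw records from the source system")


def transform():
    print("apply cleansing and business rules")


def load():
    print("write curated records to the warehouse")


with DAG(
    dag_id="daily_sales_etl",          # hypothetical pipeline name
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load  # linear dependency chain
```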

Posted 1 week ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Velotio Technologies is a product engineering company working with innovative startups and enterprises. We are a certified Great Place to Work® and recognized as one of the best companies to work for in India. We have provided full-stack product development for 325+ startups across the globe, building products in the cloud-native, data engineering, B2B SaaS, IoT & Machine Learning space. Our team of 400+ elite software engineers solves hard technical problems while transforming customer ideas into successful products.

We are seeking a highly motivated Quality Assurance (QA) Engineer to join our team and play a critical role in ensuring the quality, performance, and reliability of our product. As a QA Engineer, you will be responsible for testing complex data pipelines, distributed systems, and real-time processing modules that form the backbone of our platform. You will collaborate closely with developers, product managers, and other stakeholders to deliver a robust and scalable product that meets the highest quality standards.

Requirements:
Analyze technical and functional specifications of the Data Highway product to create comprehensive test strategies
Develop detailed test plans, test cases, and test scripts for functional, performance, and regression testing
Define testing criteria and acceptance standards for data pipelines, APIs, and distributed systems
Execute manual and automated tests for various components of the Data Highway, including data ingestion, processing, and output modules
Perform end-to-end testing of data pipelines to ensure accuracy, integrity, and scalability
Validate real-time and batch data processing flows to ensure performance and reliability
Identify, document, and track defects using tools like JIRA, providing clear and actionable descriptions for developers
Collaborate with development teams to debug issues, verify fixes, and prevent regression
Perform root cause analysis to identify underlying problems and recommend process improvements
Conduct performance testing to evaluate system behavior under various load conditions, including peak usage scenarios
Monitor key metrics such as throughput, latency, and resource utilization to identify bottlenecks and areas for optimization
Test APIs for functionality, reliability, and adherence to RESTful principles (see the sketch below)
Validate integrations with external systems and third-party services to ensure seamless data flow
Work closely with cross-functional teams, including developers, product managers, and DevOps, to align on requirements and testing priorities
Participate in Agile ceremonies such as sprint planning, daily stand-ups, and retrospectives to ensure smooth communication and collaboration
Provide regular updates on test progress, coverage, and quality metrics to stakeholders
Collaborate with automation engineers to identify critical test cases for automation
Use testing tools like Postman, JMeter, and Selenium for API, performance, and UI testing as required
Assist in maintaining and improving automated test frameworks for the Data Highway product
Validate data transformations, mappings, and consistency across data pipelines
Ensure the security of data in transit and at rest, testing for vulnerabilities and compliance with industry standards
Maintain detailed and up-to-date documentation for test plans, test cases, and defect reports
Contribute to user guides and knowledge bases to support product usage and troubleshooting

Desired Skills & Experience:
Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent professional experience
3+ years of experience as a Quality Assurance Engineer, preferably in testing data pipelines, distributed systems, or SaaS products
Strong understanding of data pipelines, ETL processes, and distributed systems testing
Experience with test management and defect-tracking tools like JIRA, TestRail, or Zephyr
Proficiency in API testing using tools like Postman or SoapUI
Familiarity with SQL and database testing for data validation and consistency
Knowledge of performance testing tools like JMeter, LoadRunner, or similar
Experience with real-time data processing systems like Kafka or similar technologies
Familiarity with CI/CD pipelines and DevOps practices
Exposure to automation frameworks and scripting languages such as Python or JavaScript
Strong analytical and problem-solving skills with attention to detail
Excellent communication and collaboration skills to work effectively with cross-functional teams
Proactive and self-driven approach to identifying and resolving quality issues

Benefits:
Our Culture: We have an autonomous and empowered work culture encouraging individuals to take ownership and grow quickly
Flat hierarchy with fast decision-making and a startup-oriented "get things done" culture
A strong, fun & positive environment with regular celebrations of our success; we pride ourselves on creating an inclusive, diverse & authentic environment

We want to hire smart, curious, and ambitious folks, so please reach out even if you do not have all of the requisite experience. We are looking for engineers with the potential to grow! At Velotio, we embrace diversity. Inclusion is a priority for us, and we are eager to foster an environment where everyone feels valued. We welcome applications regardless of ethnicity or cultural background, age, gender, nationality, religion, disability or sexual orientation.
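To make the API-testing responsibilities concrete, here is a minimal pytest sketch of the kind of functional API check such a role involves. The base URL, the /v1/orders endpoint, and the payload fields are hypothetical.

```python
# Illustrative functional API test with pytest + requests.
# BASE_URL, the /v1/orders endpoint, and the payload fields are hypothetical.
import requests

BASE_URL = "https://api.example.com"


def test_orders_endpoint_returns_well_formed_payload():
    resp = requests.get(f"{BASE_URL}/v1/orders", params={"limit": 10}, timeout=10)

    # Contract checks: status code, content type, and payload shape.
    assert resp.status_code == 200
    assert resp.headers["Content-Type"].startswith("application/json")

    body = resp.json()
    assert isinstance(body["items"], list)
    for item in body["items"]:
        # Basic data-quality assertions on each record.
        assert item["order_id"]
        assert item["amount"] >= 0
```

A file like this would be run with `pytest`, and the same checks can be promoted into a CI pipeline as a regression suite.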

Posted 1 week ago

Apply

0 years

0 Lacs

Andhra Pradesh

On-site

We need a Java Senior Developer with expertise in Spring Boot, Microservices, AWS, Kafka, and Kubernetes to build and maintain high-performance applications.
1) Develop and maintain Java-based microservices using Spring Boot.
2) Integrate with Kafka for event-driven architectures (see the sketch below).
3) Deploy and manage applications on AWS (EKS, ECS).
4) Optimize performance using ElastiCache, RDS, etc.
5) Collaborate with architects and DevOps teams.

About Virtusa: Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth, one that seeks to provide you with exciting projects, opportunities, and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
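The role itself is Java/Spring Boot, but as a language-neutral illustration of the event-driven integration in point 2, here is a minimal Kafka consumer loop (shown in Python, as are all sketches on this page, using the confluent-kafka client; the broker address, group id, and topic name are hypothetical).

```python
# Minimal Kafka consumer loop (confluent-kafka client).
# Broker address, group id, and topic name are hypothetical placeholders.
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "order-service",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["orders"])

try:
    while True:
        msg = consumer.poll(1.0)  # block up to 1s waiting for a record
        if msg is None:
            continue
        if msg.error():
            print(f"consumer error: {msg.error()}")
            continue
        # In a real service this is where the domain event would be handled.
        print(f"{msg.topic()}[{msg.partition()}]@{msg.offset()}: "
              f"{msg.value().decode('utf-8')}")
finally:
    consumer.close()  # commit offsets and leave the group cleanly
```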

Posted 1 week ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

We're looking for a DevOps Engineer. This role is office-based, at our Pune office.

We are looking for a skilled DevOps Engineer with hands-on experience in Kubernetes, CI/CD pipelines, cloud infrastructure (AWS/GCP), and observability tooling. You will be responsible for automating deployments, maintaining infrastructure as code, and optimizing system reliability, performance, and scalability across environments.

In this role, you will…
Develop and maintain CI/CD pipelines to automate testing, deployments, and rollbacks across multiple environments
Manage and troubleshoot Kubernetes clusters (EKS, AKS, GKE), including networking, autoscaling, and application deployments
Collaborate with development and QA teams to streamline code integration, testing, and deployment workflows
Automate infrastructure provisioning using tools like Terraform and Helm
Monitor and improve system performance using tools like Prometheus, Grafana, and the ELK stack (see the sketch below)
Set up and maintain Kibana dashboards, and ensure high availability of logging and monitoring systems
Manage cloud infrastructure on AWS and GCP, optimizing for performance, reliability, and cost
Build unified observability pipelines by integrating metrics, logs, and traces
Participate in on-call rotations, handling incident response and root cause analysis, and continuously improve automation and observability
Write scripts and tools in Bash, Python, or Go to automate routine tasks and improve deployment efficiency

You've Got What It Takes If You Have…
3+ years of experience in a DevOps, SRE, or Infrastructure Engineering role
Bachelor's degree in Computer Science, IT, or a related field
Strong understanding of Linux systems, cloud platforms (AWS/GCP), and containerized microservices
Proficiency with Kubernetes, CI/CD systems, and infrastructure automation
Experience with monitoring/logging tools: Prometheus, Grafana, InfluxDB; ELK stack (Elasticsearch, Logstash, Kibana)
Familiarity with incident management tools (e.g., PagerDuty) and root cause analysis processes
Basic working knowledge of:
Kafka: monitoring topics and consumer health
ElastiCache/Redis: caching patterns and diagnostics
InfluxDB: time-series data and metrics collection

Our Culture: Spark Greatness. Shatter Boundaries. Share Success. Are you ready? Because here, right now, is where the future of work is happening. Where curious disruptors and change innovators like you are helping communities and customers enable everyone, anywhere, to learn, grow and advance. To be better tomorrow than they are today.

Who We Are: Cornerstone powers the potential of organizations and their people to thrive in a changing world. Cornerstone Galaxy, the complete AI-powered workforce agility platform, meets organizations where they are. With Galaxy, organizations can identify skills gaps and development opportunities, retain and engage top talent, and provide multimodal learning experiences to meet the diverse needs of the modern workforce. More than 7,000 organizations and 100 million+ users in 180+ countries and in nearly 50 languages use Cornerstone Galaxy to build high-performing, future-ready organizations and people today. Check us out on LinkedIn, Comparably, Glassdoor, and Facebook!
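As a small illustration of the observability work described above, here is a sketch that exposes a custom gauge for Prometheus to scrape, using the official Python client. The metric name, labels, and the stubbed lag lookup are hypothetical.

```python
# Illustrative Prometheus exporter: publishes a custom gauge on :8000/metrics.
# The metric name, labels, and the stubbed lag lookup are hypothetical.
import random
import time

from prometheus_client import Gauge, start_http_server

consumer_lag = Gauge(
    "kafka_consumer_lag_messages",
    "Approximate consumer lag per topic/partition",
    ["topic", "partition"],
)


def fetch_lag(topic: str, partition: int) -> int:
    # Stub: replace with a real lookup (e.g. broker admin API or an exporter).
    return random.randint(0, 500)


if __name__ == "__main__":
    start_http_server(8000)  # Prometheus scrapes http://host:8000/metrics
    while True:
        consumer_lag.labels(topic="orders", partition="0").set(fetch_lag("orders", 0))
        time.sleep(15)  # refresh interval between scrapes
```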

Posted 1 week ago

Apply

10.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Job Description: At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. We do this by driving Responsible Growth and delivering for our clients, teammates, communities and shareholders every day. Being a Great Place to Work is core to how we drive Responsible Growth. This includes our commitment to being an inclusive workplace, attracting and developing exceptional talent, supporting our teammates' physical, emotional, and financial wellness, recognizing and rewarding performance, and how we make an impact in the communities we serve. Bank of America is committed to an in-office culture with specific requirements for office-based attendance, which allows for an appropriate level of flexibility for our teammates and businesses based on role-specific considerations. At Bank of America, you can build a successful career with opportunities to learn, grow, and make an impact. Join us!

The Markets Application Production Services (MAPS) group is a global group responsible for the management of production systems across Global Markets Technology. The group works closely with the business, provides application support, and interacts closely with the development and infrastructure teams to manage all changes to the production environment. MAPS has a strong focus on operational excellence and process improvement. Bank of America Merrill Lynch is looking to hire an experienced Application Support Analyst to join our Global Markets Post Trade Technology and Operations support - Markets Application Production Services team. You will join a regional team based in several locations whose primary focus will be on providing front-line support for Equity, Derivatives, Clearing, and Settlement applications related to Global Markets Operations & Middle Office. This is an excellent opportunity to join a well-established team, supporting distributed platforms and Oracle-based applications while partnering with our development team to roll out support for state-of-the-art, real-time, high-availability systems developed with cutting-edge technologies.

Responsibilities:
Deliver application support for in-house applications and vendor products used by Global Markets Operations teams in India and the region
Triage and manage production incidents to restore service as swiftly as possible; manage clear and crisp incident communications to a variety of stakeholders
Adhere to, and oversee adherence to, the enterprise-defined standard operating procedures
Diagnose and resolve complex issues involving root cause analysis and end-to-end coordination and support of the problem resolution process
Ensure the documentation of problem resolution processes and procedures is maintained to the highest quality and accuracy
Correlate events across multiple systems to proactively surface and resolve deep, underlying issues
Look across the entire production environment to aid continuous improvement of the state and supportability of production systems, including rotational weekend support and rotational business-events support outside of business hours
Build and maintain relationships with business users and other stakeholders
Work closely with development and infrastructure teams to ensure that issues and defects are reported and actioned to meet business requirements and timelines
Learn, expand, and incorporate application support requirements across global operations teams while building an APAC presence with teams across Singapore, Australia, Japan, and India
Work closely with other MAPS team members across the Asia Pacific region and globally to ensure consistency in service standards and delivery

Skills:
Education at degree level in an engineering or science discipline
10+ years of strong application support experience in the banking/finance industry, especially Markets
Desirable to have hands-on work experience in a functional or shift-lead capacity, with an excellent understanding of ITIL concepts around Incident, Problem and Change management
Willing and able to lead incidents as they occur; flexible approach to adapt to shifting priorities or changing conditions
Good knowledge of infrastructure systems, platforms, databases & middleware
Troubleshooting and analyzing logs using Linux command-line interfaces, Splunk, Kibana and other monitoring or log-aggregation systems/tools is a must (see the sketch below)
Advanced Excel knowledge
Excellent verbal and written communication skills; able to influence, facilitate, and collaborate
Strong analytical, problem-solving and troubleshooting skills to thrive in a time-sensitive and complex production environment
Creative and innovative; able to find solutions for continuous improvement and operational excellence
Collaborative team player who can work independently where needed; comfortable in a multicultural environment across a multi-region production support landscape
Ability and experience in leading a matrixed functional team would be an advantage
Stakeholder management experience and the ability to build relationships and form partnerships with users when dealing with production issues and providing the support service to the user base
Good understanding of capacity management and assessment
Knowledge of the post-trade lifecycle, trade processing, clearing, matching and settlement is desirable
Customer focus / client service orientation: an underlying desire to serve clients and a motivation to ensure that business needs are met

Desired Technical Skills: The candidate must demonstrate strong working knowledge of:
OS (Windows/Linux, virtual compute) based infrastructure
Database technologies: Oracle, PL/SQL, SQL Server
Scripting languages: Shell, Bash, Python
Monitoring technology: ITRS Geneos / Splunk / Dynatrace etc.
Pub/sub messaging: IBM MQ, Kafka, Tibco EMS, Distributed Event Processing, AMPS
Ansible and Autosys scheduling
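To give a flavour of the scripting side of this support role, here is a small stdlib-only Python sketch that tallies ERROR lines in an application log. The log path and line format are hypothetical.

```python
# Illustrative log triage: count ERROR occurrences per source component.
# The log file path and line format are hypothetical.
import re
from collections import Counter
from pathlib import Path

ERROR_RE = re.compile(r"ERROR\s+(\S+)")  # captures the token after ERROR

counts = Counter()
for line in Path("app.log").read_text(encoding="utf-8").splitlines():
    match = ERROR_RE.search(line)
    if match:
        counts[match.group(1)] += 1

# Print the ten noisiest components, most frequent first.
for component, n in counts.most_common(10):
    print(f"{n:6d}  {component}")
```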

Posted 1 week ago

Apply

2.0 years

0 Lacs

Navi Mumbai, Maharashtra, India

Remote

Job Posting Title: Software Engineer

The Role: In this role, you will be part of the development of enterprise and web applications within the Search Services team. The candidate needs to be a solid individual contributor with agile development experience who has worked on complex architectures: cloud, load-balanced systems, RESTful APIs, search engines, NoSQL databases, high-performing systems, etc. An ideal candidate can create scalable, flexible technical solutions, understand and support existing systems, study their enterprise complexities, and develop state-of-the-art systems with modern software development practices. The candidate also needs to pick up new technologies and frameworks quickly.

Responsibilities:
Design & develop web and enterprise solutions to be flexible, scalable & extensible using Java/J2EE in an AWS cloud environment
Good working experience in OO analysis & design using common design patterns
Enforce good agile practices like test-driven development, continuous integration and improvement
Implement enhancements to improve the reliability, performance, and usability of our applications
Motivation to learn the innovative trade of programming, debugging and deploying
Self-starter, with excellent self-study skills and growth aspirations
A good team player with the ability to meet tight deadlines in a fast-paced environment
Excellent written and verbal communication skills
Flexible attitude; performs under pressure

Requirements: These are the most important skills, qualities, etc. that we'd like for this role.
Hands-on in Java / web services / Spring / Spring Boot
Very strong knowledge of databases, hands-on with MS SQL/MySQL/PostgreSQL and NoSQL DBs
Understanding of OpenSearch is a big plus (see the sketch below)
Good understanding of object-oriented design, design patterns, enterprise integration patterns
Experience with troubleshooting and debugging techniques
Hands-on experience with AWS services
Has done development or debugging on Linux/Unix platforms
Ability to work independently and as part of a team
Experience with DevOps practices and tools
Minimum 2 years of experience in software development or a related field

Good to Have:
Machine learning knowledge
Exposure to the capital markets domain preferred (indexes, equity, etc.)
Knowledge of RabbitMQ and Kafka

Morningstar's hybrid work environment gives you the opportunity to work remotely and collaborate in person each week. We've found that we're at our best when we're purposely together on a regular basis, at least three days each week. A range of other benefits are also available to enhance flexibility as needs change. No matter where you are, you'll have tools and resources to engage meaningfully with your global colleagues.

Legal Entity: Morningstar India Private Ltd. (Delhi)
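Since OpenSearch familiarity is called out, here is a minimal full-text query sketch against an OpenSearch/Elasticsearch-style REST endpoint over plain HTTP. The host, index name, and document fields are hypothetical.

```python
# Illustrative full-text search against an OpenSearch/Elasticsearch-style endpoint.
# Host, index name, and document fields are hypothetical.
import requests

query = {
    "size": 5,
    "query": {"match": {"title": "kafka streaming"}},  # basic relevance query
}

resp = requests.post(
    "http://localhost:9200/articles/_search",
    json=query,
    timeout=10,
)
resp.raise_for_status()

for hit in resp.json()["hits"]["hits"]:
    print(f"{hit['_score']:.2f}  {hit['_source'].get('title')}")
```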

Posted 1 week ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Roles and Responsibilities:
· Application Development: Design, develop, and maintain scalable applications using TypeScript.
· Feature Implementation: Collaborate with other team members to define and implement new features based on requirements.
· API Development: Create and optimize GraphQL APIs (see the sketch below).
· Code Quality: Write clean, maintainable code following best practices, including unit testing and code reviews.
· Troubleshooting: Debug and troubleshoot issues in existing applications to improve performance and reliability.
· Documentation: Maintain comprehensive documentation of code and processes.
· Collaboration: Work with cross-functional teams to ensure alignment and understanding of project requirements.
· AWS Integration: Utilize AWS services (e.g., Lambda, S3) for application deployment and management.
· Messaging Services: Experience with Kafka or other messaging services for event-driven architectures and data streaming.
· Continuous Improvement: Stay updated with emerging technologies and participate in team knowledge sharing.

Skills & Qualifications:
· 5+ years of experience in software development with a focus on TypeScript.
· Strong knowledge of Node.js, JavaScript and TypeScript.
· Experience with RESTful APIs and GraphQL.
· Proficiency in AWS services and cloud-based development.
· Understanding of version control systems (e.g., Git) and collaborative workflows.
· Strong problem-solving skills and attention to detail.
· Excellent communication skills, capable of explaining technical concepts clearly.
· Familiarity with agile methodologies and the software development lifecycle.
· BA/BS in Computer Science, Engineering, or a related field.
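The listing is TypeScript-focused, but as a language-neutral illustration of the GraphQL API work (this page's sketches use Python throughout), here is a minimal client-side query over HTTP. The endpoint URL and the order type and fields are hypothetical.

```python
# Illustrative GraphQL query over HTTP POST.
# The endpoint URL and the order type/fields are hypothetical.
import requests

QUERY = """
query GetOrder($id: ID!) {
  order(id: $id) {
    id
    status
    total
  }
}
"""

resp = requests.post(
    "https://api.example.com/graphql",
    json={"query": QUERY, "variables": {"id": "123"}},
    timeout=10,
)
resp.raise_for_status()

payload = resp.json()
if payload.get("errors"):
    raise RuntimeError(payload["errors"])  # GraphQL reports errors in-band
print(payload["data"]["order"])
```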

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

BACKEND ENGINEER
Understanding of Spring AOP, and microservices architecture design and implementation
Basic understanding of microservices design patterns such as Circuit Breaker, etc. (see the sketch below)
Experience with event-driven frameworks such as Kafka, RabbitMQ, or IBM MQ
Ability to implement container-based APIs using container frameworks like OpenShift, Docker, or Kubernetes
Working experience with Gradle, Git, GitHub, GitLab, etc. around continuous integration and continuous delivery infrastructure

Requirements
Experience of:
5+ years in REST frameworks with a focus on API development with Spring Boot
3+ years in Microservice Architecture based applications
Good experience in Agile methodology (Scrum, Lean, SAFe, etc.)
2+ years' experience integrating with backend services like Kafka, Event Hub, RabbitMQ, AWS SQS, J2C, ORM frameworks (Hibernate, JPA, JDO, etc.), JDBC

Technology Stack: Java/J2EE, Spring, Spring Boot, Microservices, Kafka, OpenShift, Docker, Kubernetes; RDBMS databases like Oracle, MS SQL Server; AWS, RDS, GitLab

Benefits
Standard Company Benefits
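Since the Circuit Breaker pattern is named above, here is a minimal, language-neutral sketch of the idea (shown in Python; the threshold and timeout values are arbitrary illustrations, not a production implementation).

```python
# Minimal circuit breaker: fail fast while a downstream dependency is unhealthy.
# Threshold and timeout values are arbitrary illustrations.
import time


class CircuitOpenError(RuntimeError):
    pass


class CircuitBreaker:
    def __init__(self, failure_threshold=5, reset_timeout=30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # timestamp when the breaker tripped

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise CircuitOpenError("circuit open: failing fast")
            self.opened_at = None  # half-open: let one trial call through
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0  # success resets the failure count
        return result
```

A breaker instance would then wrap each downstream call, e.g. `breaker.call(requests.get, url, timeout=2)`, so repeated failures stop hammering an unhealthy service.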

Posted 1 week ago

Apply

5.0 years

10 - 12 Lacs

Vadodara, Gujarat, India

On-site

Bachelor's or Master's degree in Computer Science, Engineering, or a related field
5+ years of professional Java development experience
Proficiency in Java 8+ and knowledge of core Java libraries and design patterns
Experience with the Spring Framework (Spring Boot, Spring MVC, Spring Data)
Strong understanding of RESTful APIs and microservices architecture
Familiarity with front-end technologies like HTML, CSS, JavaScript (optional)
Experience with relational databases such as MySQL, PostgreSQL, or Oracle
Proficiency with version control systems (Git)
Familiarity with build tools like Maven or Gradle
Experience working in Agile/Scrum environments
Excellent problem-solving and communication skills

Preferred Qualifications:
Experience with cloud platforms (AWS, Azure, GCP)
Knowledge of containerization tools (Docker, Kubernetes)
Experience with CI/CD tools like Jenkins, GitLab CI, or similar
Familiarity with message brokers (Kafka, RabbitMQ)
Prior experience in mentoring or leading a team of developers

Skills: javascript, css, spring framework, spring, aws, java 8+, azure, design patterns, kubernetes, spring boot, spring data, core java libraries, restful apis, microservices architecture, gradle, oracle, agile, html, docker, git, jenkins, scrum, spring mvc, mysql, gitlab ci, java, postgresql, gcp, rabbitmq, sql, kafka, maven

Posted 1 week ago

Apply

10.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Description: Join us and drive the design and deployment of AI/ML frameworks revolutionizing telecom services. As a key member of our team, you will architect and build scalable, secure AI systems for service assurance, orchestration, and fulfillment, working directly with network experts to drive business impact. You will be responsible for defining architecture blueprints, selecting the right tools and platforms, and guiding cross-functional teams to deliver scalable AI systems. This role offers significant growth potential, mentorship opportunities, and the chance to shape the future of telecoms using the latest AI technologies and platforms.

Key Responsibilities - how you will contribute and what you will learn:
Design end-to-end AI architecture tailored to telecom services business functions (e.g., service assurance, orchestration and fulfillment)
Define data strategy and AI workflows including the inventory model, ETL, model training, deployment, and monitoring
Evaluate and select AI platforms, tools, and frameworks suited to telecom-scale workloads for the development and testing of inventory services solutions
Work closely with telecom network experts and architects to align AI initiatives with business goals
Ensure scalability, performance, and security in AI systems across hybrid/multi-cloud environments
Mentor AI developers

Key Skills and Experience - you have:
10+ years' experience in AI/ML design and deployment, with a graduate or equivalent degree
Practical experience with AI/ML techniques and scalable architecture design for telecom operations, inventory management, and ETL
Exposure to data platforms (Kafka, Spark, Hadoop), model orchestration (Kubeflow, MLflow; see the sketch below), and cloud-native deployment (AWS SageMaker, Azure ML)
Proficiency in programming (Python, Java) and DevOps/MLOps best practices

It will be nice if you have:
Worked with any of the LLM models (Llama family) and LLM agent frameworks like LangChain / CrewAI / AutoGen
Familiarity with telecom protocols, OSS/BSS platforms, 5G architecture, and NFV/SDN concepts
Excellent communication and stakeholder management skills

About Us: Come create the technology that helps the world act together. Nokia is committed to innovation and technology leadership across mobile, fixed and cloud networks. Your career here will have a positive impact on people's lives and will help us build the capabilities needed for a more productive, sustainable, and inclusive world. We challenge ourselves to create an inclusive way of working where we are open to new ideas, empowered to take risks and fearless to bring our authentic selves to work.

What we offer: Nokia offers continuous learning opportunities, well-being programs to support you mentally and physically, opportunities to join and get supported by employee resource groups, mentoring programs and highly diverse teams with an inclusive culture where people thrive and are empowered. Nokia is committed to inclusion and is an equal opportunity employer. Nokia has received the following recognitions for its commitment to inclusion & equality: One of the World's Most Ethical Companies by Ethisphere; Gender-Equality Index by Bloomberg; Workplace Pride Global Benchmark. At Nokia, we act inclusively and respect the uniqueness of people. Nokia's employment decisions are made regardless of race, color, national or ethnic origin, religion, gender, sexual orientation, gender identity or expression, age, marital status, disability, protected veteran status or other characteristics protected by law.
We are committed to a culture of inclusion built upon our core value of respect. Join us and be part of a company where you will feel included and empowered to succeed.

About The Team: As Nokia's growth engine, we create value for communication service providers and enterprise customers by leading the transition to cloud-native software and as-a-service delivery models. Our inclusive team of dreamers, doers and disruptors push the limits from impossible to possible.
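As a small illustration of the model-orchestration tooling mentioned above (Kubeflow, MLflow), here is a minimal MLflow tracking sketch. The experiment name, parameters, and metric value are hypothetical placeholders.

```python
# Illustrative MLflow experiment tracking for a model training run.
# Experiment name, parameters, and the metric value are hypothetical placeholders.
import mlflow

mlflow.set_experiment("service-assurance-anomaly-detection")

with mlflow.start_run(run_name="baseline"):
    # Record the configuration of this training run...
    mlflow.log_param("model_type", "isolation_forest")
    mlflow.log_param("contamination", 0.01)

    # ...then the evaluation results (placeholder value shown).
    mlflow.log_metric("validation_f1", 0.87)
```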

Posted 1 week ago

Apply

6.0 - 10.0 years

35 - 38 Lacs

Ahmedabad, Gujarat, India

On-site

The Role: Lead I Software Engineer
The Location: Hyderabad/Ahmedabad, India
The Team: We are looking for a highly motivated, enthusiastic and skilled software engineer with experience architecting and building solutions, to join an agile scrum team developing technology solutions. The team is responsible for developing and ingesting various datasets into the product platforms utilizing the latest technologies.
The Impact: Contribute significantly to the growth of the firm by:
Developing innovative functionality in existing and new products
Supporting and maintaining high-revenue products
Achieving the above intelligently and economically using best practices

What's in it for you:
Build a career with a global company
Work on products that fuel the global financial markets
Grow and improve your skills by working on enterprise-level products and new technologies

Responsibilities:
Architect, design, and implement software-related projects
Perform analysis and articulate solutions
Manage and improve existing solutions
Solve a variety of complex problems and figure out possible solutions, weighing the costs and benefits
Collaborate effectively with technical and non-technical stakeholders
Active participation in all scrum ceremonies, following Agile principles and best practices

What We're Looking For - Basic Qualifications:
Bachelor's degree in computer science or equivalent
6 to 10 years' experience in application development
Willingness to learn and apply new technologies
Excellent communication skills are essential, with strong verbal and writing proficiencies
Good work ethic, self-starter, and results-oriented
Excellent problem-solving & troubleshooting skills
Ability to manage multiple priorities efficiently and effectively within specific timeframes
Strong hands-on development experience in C# and Python
Strong hands-on experience building large-scale solutions using a big data technology stack like Spark (see the sketch below), microservice architecture, and tools like Docker and Kubernetes
Experience in conducting application design and code reviews
Able to demonstrate strong OOP skills
Proficient with software development lifecycle (SDLC) methodologies like Agile and test-driven development
Experience implementing web services
Experience working with SQL Server; able to write stored procedures and triggers, do performance tuning, etc.
Experience working in cloud computing environments such as AWS

Preferred Qualifications:
Experience with large-scale messaging systems such as Kafka is a plus
Experience working with big data technologies like Elasticsearch and Spark is a plus
Experience working with Snowflake is a plus
Experience with Linux-based environments is a plus
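For a feel of the big-data stack named above, here is a minimal PySpark batch job sketch. The input/output paths and column names are hypothetical.

```python
# Illustrative PySpark batch job: read raw events, aggregate, write curated output.
# Input/output paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-trade-aggregation").getOrCreate()

events = spark.read.json("s3a://example-bucket/raw/events/")  # hypothetical path

daily = (
    events
    .filter(F.col("event_type") == "trade")
    .groupBy("symbol")
    .agg(
        F.count("*").alias("trade_count"),
        F.avg("price").alias("avg_price"),
    )
)

daily.write.mode("overwrite").parquet("s3a://example-bucket/curated/daily/")
spark.stop()
```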

Posted 1 week ago

Apply

0 years

0 Lacs

Lucknow, Uttar Pradesh, India

On-site

Company: Indian / Global Digital Organization
Key Skills: Java, Microservices, Distributed Systems

Roles and Responsibilities:
Design and implement backend services that manage high-throughput and low-latency workloads
Architect secure and observable APIs and data services ensuring 99.99% availability
Lead integration with external platforms such as Google, Meta, and TikTok, ensuring consistent data synchronization
Drive platform observability and operational excellence through metrics, tracing, and alerting frameworks
Mentor junior engineers and contribute to system-level design and code reviews
Collaborate cross-functionally to deliver features involving machine learning, analytics, and optimization engines
Utilize expertise in backend development within distributed, scalable systems
Work with technologies including Kafka, PostgreSQL, ClickHouse, Redis, S3, and object-storage-based designs (see the sketch below)
Apply SOLID principles and clean code practices, and maintain awareness of infrastructure costs and FinOps
Set up unit/integration tests, CI/CD pipelines, and rollback strategies

Skills Required:
Strong experience with Java and microservices architecture
Knowledge of distributed systems and high-performance backend services
Familiarity with technologies like Kafka, PostgreSQL, ClickHouse, Redis, and S3
Solid understanding of API development, CI/CD pipelines, and observability tools
Practice of clean code, SOLID principles, and cost-aware infrastructure planning

Education: B.Tech, M.Tech (Dual), M.Tech, MCA, M.Sc., M.E., CA in Computer Engineering, Computer Science Engineering, or Computer Technology.
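To illustrate one of the listed technologies in practice, here is a minimal Redis cache-aside sketch. The key scheme, TTL, and loader callback are hypothetical.

```python
# Illustrative cache-aside pattern with Redis: check cache, fall back to the DB.
# Key scheme, TTL, and the loader callback are hypothetical.
import json

import redis

r = redis.Redis(host="localhost", port=6379, db=0)


def get_campaign(campaign_id, load_from_db, ttl_seconds=300):
    key = f"campaign:{campaign_id}"

    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)  # cache hit

    value = load_from_db(campaign_id)             # cache miss: query the database
    r.setex(key, ttl_seconds, json.dumps(value))  # populate with an expiry
    return value
```

The expiry keeps the cache bounded and self-healing: stale entries simply age out and are reloaded on the next miss.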

Posted 1 week ago

Apply

6.0 years

20 - 25 Lacs

Kolhapur, Maharashtra, India

On-site

Java & Frameworks: 6-10 years of experience in developing applications using Java 8 and above, with strong expertise in Spring Boot, Spring REST, JPA, and Hibernate
Stored Procedures: Experience working with stored procedures in relational databases, ensuring efficient data management and retrieval
Distributed Systems: Experience in building distributed systems that handle user concurrency, reactive programming, and distributed in-memory data grids, with technologies such as Kafka/ActiveMQ and Redis
Cloud & AWS Services: Strong experience in designing and implementing cloud-native applications, primarily on AWS; hands-on experience with AWS services including S3, SQS, EC2, and ECS (see the sketch below)
Agile Methodologies: Proficient in Agile software development practices, including Scrum or Kanban
CI/CD Environments: Hands-on experience in Continuous Integration and Continuous Deployment (CI/CD) environments
Backend Development: Expertise in working with RESTful and SOAP services, microservices architecture, and containerization technologies such as Docker and Kubernetes
Containerization: Experience in designing containerized applications using Docker, Kubernetes, and Minikube

Skills: distributed systems, docker, activemq, stored procedures, scrum, aws, jpa, microservices, kubernetes, spring rest, soap services, hibernate, minikube, ci/cd, ecs, redis, angular 10+, kafka, kanban, java 8 and above, agile, sqs, restful services, ec2, spring boot, s3, java 8, multithreading
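As an illustration of the AWS messaging services listed (SQS), here is a minimal boto3 send/receive sketch. The region, queue name, and message body are hypothetical, and configured AWS credentials are assumed.

```python
# Illustrative SQS round trip with boto3: send one message, then poll and delete.
# Region, queue name, and message body are hypothetical; assumes AWS credentials.
import json

import boto3

sqs = boto3.client("sqs", region_name="ap-south-1")
queue_url = sqs.get_queue_url(QueueName="orders-queue")["QueueUrl"]

sqs.send_message(QueueUrl=queue_url, MessageBody=json.dumps({"order_id": 42}))

resp = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=5,  # long polling reduces empty responses
)
for msg in resp.get("Messages", []):
    print("received:", msg["Body"])
    # Delete after successful processing so the message is not redelivered.
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```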

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Matillion is The Data Productivity Cloud. We are on a mission to power the data productivity of our customers and the world by helping teams get data business-ready, faster. Our technology allows customers to load, transform, sync and orchestrate their data. We are looking for passionate, high-integrity individuals to help us scale up our growing business. Together, we can make a dent in the universe bigger than ourselves.

With offices in the UK, US and Spain, we are now thrilled to announce the opening of our new office in Hyderabad, India. This marks an exciting milestone in our global expansion, and we are now looking for talented professionals to join us as part of our founding team. We are now looking for Software Engineers to join #Team Green.

About the Role: Matillion is built around small development teams with responsibility for specific themes and initiatives. Each team is a mix of engineers with various levels of skills and experience. As a Software Engineer you will work within a team to write, test, and release new features and fix problems in the Matillion products, all while innovating on new ideas.

Technologies Matillion uses… Java, React, Spring, GraphQL, Docker, Kubernetes, MongoDB, DynamoDB, Kafka, SQL, RESTful services, Cloud Technologies (AWS, GCP, Azure), Agile

What you will be doing:
You'll spend a significant amount of your time working on production services and applications for Matillion, while also collaborating with the broader team to understand and deliver work that contributes to the team's goals
Responsible for your workflow, you'll be writing code and unit tests, all the way through to completion and production release, then ongoing maintenance and support
While also participating in code reviews, you will be part of research projects, exploring future opportunities and new technologies
You'll have extensive opportunity to develop your technical and interpersonal skills through self-training, collaboration with the broader team, and mentoring, enabling progression through up-skilling to take on more complex tasks
By developing an understanding of the team's domain and architecture, you'll help handle risk, change and uncertainty, contributing to confident decision-making and continually improving ways of working

What we are looking for:
Proficient in coding in Java, with a good understanding of the underpinning techniques of object-oriented programming, programming concepts and best practices (e.g. style guidelines, testability, efficiency, observability, scalability, security)
Experience implementing Java Spring microservices, using container technologies such as Docker, and with relational database technologies such as Postgres, MySQL, Oracle or SQL Server
Background in the full software development life cycle from design to deployment via CI/CD tooling, using agile methodologies (e.g. Kanban, Scrum)
Familiarity with cloud technologies, with a strong preference for AWS
Ability to collaborate in a cross-functional team to solve business goals, while adapting to different types of technical challenges

Matillion has fostered a culture that is collaborative, fast-paced, ambitious, and transparent, and an environment where people genuinely care about their colleagues and communities. Our 6 core values guide how we work together and with our customers and partners.
We operate a truly flexible and hybrid working culture that promotes work-life balance, and are proud to be able to offer the following benefits:
- Company Equity
- 27 days paid time off
- 12 days of Company Holiday
- 5 days paid volunteering leave
- Group Mediclaim (GMC)
- Enhanced parental leave policies
- MacBook Pro
- Access to various tools to aid your career development

More about Matillion: Thousands of enterprises including Cisco, DocuSign, Slack, and TUI trust Matillion technology to load, transform, sync, and orchestrate their data for a wide range of use cases, from insights and operational analytics to data science, machine learning, and AI. With over $300M raised from top Silicon Valley investors, we are on a mission to power the data productivity of our customers and the world. We are passionate about doing things in a smart, considerate way. We're honoured to be named a great place to work for several years running by multiple industry research firms. We are dual headquartered in Manchester, UK and Denver, Colorado.

We are keen to hear from prospective Matillioners, so even if you don't feel you match all the criteria please apply, and a member of our Talent Acquisition team will be in touch. Alternatively, if you are interested in Matillion but don't see a suitable role, please email talent@matillion.com.

Matillion is an equal opportunity employer. We celebrate diversity and we are committed to creating an inclusive environment for all of our team. Matillion prohibits discrimination and harassment of any type. Matillion does not discriminate on the basis of race, colour, religion, age, sex, national origin, disability status, genetics, sexual orientation, gender identity or expression, or any other characteristic protected by law.

Posted 1 week ago

Apply

6.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Position Title: Software Engineer Consultant / Expert 34326
Location: Chennai (Onsite/Hybrid)
Employment Type: Contract
Budget: Up to ₹24 LPA (starting at ₹21 LPA)
Notice Period: Immediate joiners preferred
Assessment: Full Stack Backend Java (via Hacker Platform)

Position Overview: We are seeking a highly experienced Full Stack Java Developer with strong expertise in backend development, cloud technologies, and data solutions. This role involves building and maintaining a global logistics data warehouse on Google Cloud Platform (GCP), supporting key supply chain operations and enhancing visibility from production to final delivery. The ideal candidate will have a minimum of 6+ years of relevant experience and hands-on skills in BigQuery, microservices, and REST APIs, with exposure to tools like Pub/Sub, Kafka, and Terraform.

Key Responsibilities:
Collaborate closely with product managers, architects, and engineers to design and implement technical solutions
Develop and maintain full-stack applications using Java, Spring Boot, and GCP Cloud Run
Build and optimize ETL/data pipelines to apply business logic and transformation rules
Monitor and enhance data warehouse performance on BigQuery (see the sketch below)
Support end-to-end testing: unit, functional, integration, and user acceptance
Conduct peer reviews and code refactoring, and ensure adherence to best coding practices
Implement infrastructure as code and CI/CD using tools like Terraform

Required Skills:
Java, Spring Boot
Full-stack development (backend-focused)
Google Cloud Platform (GCP): minimum 1 year hands-on with BigQuery
Cloud Run, microservices, REST APIs
Messaging: Pub/Sub, Kafka
DevOps & infrastructure: Terraform
Exposure to AI/ML integration is a plus

Experience Requirements:
Minimum 6+ years of experience in Java/Spring Boot development
Strong hands-on experience with GCP services, particularly BigQuery
Experience in developing enterprise-grade microservices and backend systems
Familiarity with ETL pipelines, data orchestration, and performance tuning
Agile team collaboration and modern development practices

Preferred Experience:
Exposure to AI agents or AI-driven application features
Experience in large-scale logistics or supply chain data systems

Education Requirements:
Bachelor's degree in Computer Science, Information Technology, or a related field (mandatory)

Skills: rest apis, terraform, full stack development, data, google cloud platform (gcp), microservices, kafka, gcp, bigquery, pub/sub, java, cloud run, spring boot
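To make the BigQuery work concrete, here is a minimal query sketch using the official client library. The project, dataset, and table names are hypothetical, and application-default credentials are assumed.

```python
# Illustrative BigQuery query via google-cloud-bigquery.
# Project/dataset/table names are hypothetical; assumes application-default credentials.
from google.cloud import bigquery

client = bigquery.Client()

sql = """
    SELECT carrier, COUNT(*) AS shipment_count
    FROM `example-project.logistics.shipments`  -- hypothetical table
    WHERE ship_date = CURRENT_DATE()
    GROUP BY carrier
    ORDER BY shipment_count DESC
"""

for row in client.query(sql).result():  # blocks until the query job finishes
    print(row.carrier, row.shipment_count)
```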

Posted 1 week ago

Apply

4.0 years

15 - 30 Lacs

Gurugram, Haryana, India

Remote

Experience: 4.00+ years
Salary: INR 1,500,000 - 3,000,000 per year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by NuStudio.AI)
(Note: This is a requirement for one of Uplers' clients: an AI-first, API-powered Data Platform.)

What do you need for this opportunity?
Must-have skills: Databricks (Delta Lake, Spark, dbt, Unity Catalog), AI, Airflow, Cloud Functions, Databricks Workflows, ETL/ELT, GCP (BigQuery, Pub/Sub, PySpark, Dataflow, Cloud Storage, Functions), AWS, Hadoop

The AI-first, API-powered Data Platform is looking for:
We're scaling our platform and seeking Data Engineers who are passionate about building high-performance data pipelines, products, and analytical pipelines in the cloud to power real-time AI systems. As a Data Engineer, you'll:
Build scalable ETL/ELT and streaming data pipelines using GCP (BigQuery, Pub/Sub, PySpark, Dataflow, Cloud Storage, Functions) (see the sketch below)
Orchestrate data workflows with Airflow, Cloud Functions, or Databricks Workflows
Work across batch + real-time architectures that feed LLMs and AI/ML systems
Own feature engineering pipelines that power production models and intelligent agents
Collaborate with platform and ML teams to design observability, lineage, and cost-aware performant solutions
Bonus: Experience with AWS, Databricks, or Hadoop (Delta Lake, Spark, dbt, Unity Catalog), or interest in building on them

Why us?
Building production-grade data & AI solutions
Your pipelines directly impact mission-critical and client-facing interactions
Lean team, no red tape: build, own, ship
Remote-first with an async culture that respects your time
Competitive comp and benefits

Our Stack: Python, SQL, GCP/Azure/AWS, Spark, Kafka, Airflow, Databricks, dbt, Kubernetes, LangChain, LLMs

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for those as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
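For context on the streaming side, here is a minimal Google Cloud Pub/Sub publish sketch. The project id, topic name, and payload are hypothetical, and application-default credentials are assumed.

```python
# Illustrative Pub/Sub publish via google-cloud-pubsub.
# Project id, topic name, and payload are hypothetical; assumes default credentials.
import json

from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("example-project", "user-events")

payload = json.dumps({"user_id": 7, "action": "click"}).encode("utf-8")

future = publisher.publish(topic_path, payload)   # returns a future
print("published message id:", future.result())   # blocks until the broker acks
```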

Posted 1 week ago

Apply

4.0 years

15 - 30 Lacs

Cuttack, Odisha, India

Remote

Experience: 4.00+ years
Salary: INR 1,500,000 - 3,000,000 per year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by NuStudio.AI)
(Note: This is a requirement for one of Uplers' clients: an AI-first, API-powered Data Platform.)

What do you need for this opportunity?
Must-have skills: Databricks (Delta Lake, Spark, dbt, Unity Catalog), AI, Airflow, Cloud Functions, Databricks Workflows, ETL/ELT, GCP (BigQuery, Pub/Sub, PySpark, Dataflow, Cloud Storage, Functions), AWS, Hadoop

The AI-first, API-powered Data Platform is looking for:
We're scaling our platform and seeking Data Engineers who are passionate about building high-performance data pipelines, products, and analytical pipelines in the cloud to power real-time AI systems. As a Data Engineer, you'll:
Build scalable ETL/ELT and streaming data pipelines using GCP (BigQuery, Pub/Sub, PySpark, Dataflow, Cloud Storage, Functions)
Orchestrate data workflows with Airflow, Cloud Functions, or Databricks Workflows
Work across batch + real-time architectures that feed LLMs and AI/ML systems
Own feature engineering pipelines that power production models and intelligent agents
Collaborate with platform and ML teams to design observability, lineage, and cost-aware performant solutions
Bonus: Experience with AWS, Databricks, or Hadoop (Delta Lake, Spark, dbt, Unity Catalog), or interest in building on them

Why us?
Building production-grade data & AI solutions
Your pipelines directly impact mission-critical and client-facing interactions
Lean team, no red tape: build, own, ship
Remote-first with an async culture that respects your time
Competitive comp and benefits

Our Stack: Python, SQL, GCP/Azure/AWS, Spark, Kafka, Airflow, Databricks, dbt, Kubernetes, LangChain, LLMs

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for those as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 1 week ago

Apply

4.0 years

15 - 30 Lacs

Bhubaneswar, Odisha, India

Remote

Experience: 4.00+ years
Salary: INR 1,500,000 - 3,000,000 per year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by NuStudio.AI)
(Note: This is a requirement for one of Uplers' clients: an AI-first, API-powered Data Platform.)

What do you need for this opportunity?
Must-have skills: Databricks (Delta Lake, Spark, dbt, Unity Catalog), AI, Airflow, Cloud Functions, Databricks Workflows, ETL/ELT, GCP (BigQuery, Pub/Sub, PySpark, Dataflow, Cloud Storage, Functions), AWS, Hadoop

The AI-first, API-powered Data Platform is looking for:
We're scaling our platform and seeking Data Engineers who are passionate about building high-performance data pipelines, products, and analytical pipelines in the cloud to power real-time AI systems. As a Data Engineer, you'll:
Build scalable ETL/ELT and streaming data pipelines using GCP (BigQuery, Pub/Sub, PySpark, Dataflow, Cloud Storage, Functions)
Orchestrate data workflows with Airflow, Cloud Functions, or Databricks Workflows
Work across batch + real-time architectures that feed LLMs and AI/ML systems
Own feature engineering pipelines that power production models and intelligent agents
Collaborate with platform and ML teams to design observability, lineage, and cost-aware performant solutions
Bonus: Experience with AWS, Databricks, or Hadoop (Delta Lake, Spark, dbt, Unity Catalog), or interest in building on them

Why us?
Building production-grade data & AI solutions
Your pipelines directly impact mission-critical and client-facing interactions
Lean team, no red tape: build, own, ship
Remote-first with an async culture that respects your time
Competitive comp and benefits

Our Stack: Python, SQL, GCP/Azure/AWS, Spark, Kafka, Airflow, Databricks, dbt, Kubernetes, LangChain, LLMs

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for those as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 1 week ago

Apply

4.0 years

15 - 30 Lacs

Kolkata, West Bengal, India

Remote

Experience: 4.00+ years
Salary: INR 1,500,000 - 3,000,000 per year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by NuStudio.AI)
(Note: This is a requirement for one of Uplers' clients: an AI-first, API-powered Data Platform.)

What do you need for this opportunity?
Must-have skills: Databricks (Delta Lake, Spark, dbt, Unity Catalog), AI, Airflow, Cloud Functions, Databricks Workflows, ETL/ELT, GCP (BigQuery, Pub/Sub, PySpark, Dataflow, Cloud Storage, Functions), AWS, Hadoop

The AI-first, API-powered Data Platform is looking for:
We're scaling our platform and seeking Data Engineers who are passionate about building high-performance data pipelines, products, and analytical pipelines in the cloud to power real-time AI systems. As a Data Engineer, you'll:
Build scalable ETL/ELT and streaming data pipelines using GCP (BigQuery, Pub/Sub, PySpark, Dataflow, Cloud Storage, Functions)
Orchestrate data workflows with Airflow, Cloud Functions, or Databricks Workflows
Work across batch + real-time architectures that feed LLMs and AI/ML systems
Own feature engineering pipelines that power production models and intelligent agents
Collaborate with platform and ML teams to design observability, lineage, and cost-aware performant solutions
Bonus: Experience with AWS, Databricks, or Hadoop (Delta Lake, Spark, dbt, Unity Catalog), or interest in building on them

Why us?
Building production-grade data & AI solutions
Your pipelines directly impact mission-critical and client-facing interactions
Lean team, no red tape: build, own, ship
Remote-first with an async culture that respects your time
Competitive comp and benefits

Our Stack: Python, SQL, GCP/Azure/AWS, Spark, Kafka, Airflow, Databricks, dbt, Kubernetes, LangChain, LLMs

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for those as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 1 week ago

Apply

4.0 years

15 - 30 Lacs

Guwahati, Assam, India

Remote

Experience: 4.00+ years
Salary: INR 1,500,000 - 3,000,000 per year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by NuStudio.AI)
(Note: This is a requirement for one of Uplers' clients: an AI-first, API-powered Data Platform.)

What do you need for this opportunity?
Must-have skills: Databricks (Delta Lake, Spark, dbt, Unity Catalog), AI, Airflow, Cloud Functions, Databricks Workflows, ETL/ELT, GCP (BigQuery, Pub/Sub, PySpark, Dataflow, Cloud Storage, Functions), AWS, Hadoop

The AI-first, API-powered Data Platform is looking for:
We're scaling our platform and seeking Data Engineers who are passionate about building high-performance data pipelines, products, and analytical pipelines in the cloud to power real-time AI systems. As a Data Engineer, you'll:
Build scalable ETL/ELT and streaming data pipelines using GCP (BigQuery, Pub/Sub, PySpark, Dataflow, Cloud Storage, Functions)
Orchestrate data workflows with Airflow, Cloud Functions, or Databricks Workflows
Work across batch + real-time architectures that feed LLMs and AI/ML systems
Own feature engineering pipelines that power production models and intelligent agents
Collaborate with platform and ML teams to design observability, lineage, and cost-aware performant solutions
Bonus: Experience with AWS, Databricks, or Hadoop (Delta Lake, Spark, dbt, Unity Catalog), or interest in building on them

Why us?
Building production-grade data & AI solutions
Your pipelines directly impact mission-critical and client-facing interactions
Lean team, no red tape: build, own, ship
Remote-first with an async culture that respects your time
Competitive comp and benefits

Our Stack: Python, SQL, GCP/Azure/AWS, Spark, Kafka, Airflow, Databricks, dbt, Kubernetes, LangChain, LLMs

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for those as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 1 week ago

Apply

4.0 years

15 - 30 Lacs

Raipur, Chhattisgarh, India

Remote

Posted 1 week ago

Apply

4.0 years

15 - 30 Lacs

Jamshedpur, Jharkhand, India

Remote

Posted 1 week ago

Apply

4.0 years

15 - 30 Lacs

Ranchi, Jharkhand, India

Remote

Posted 1 week ago

Apply

4.0 years

15 - 30 Lacs

Amritsar, Punjab, India

Remote

Posted 1 week ago

Apply

4.0 years

15 - 30 Lacs

Surat, Gujarat, India

Remote

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies