8.0 - 13.0 years
25 - 40 Lacs
Chennai
Work from Office
Architect & Build Scalable Systems: Design and implement petabyte-scale lakehouse architectures to unify data lakes and warehouses. Real-Time Data Engineering: Develop and optimize streaming pipelines using Kafka, Pulsar, and Flink. Required Candidate Profile: Data engineering experience with large-scale systems. Expert proficiency in Java for data-intensive applications. Hands-on experience with lakehouse architectures, stream processing, and event streaming.
Posted 1 month ago
8.0 - 12.0 years
10 - 20 Lacs
Chennai
Hybrid
Hi [Candidate Name], We are hiring for a Data Engineering role with a leading organization working on cutting-edge cloud and data solutions. If you're an experienced professional looking for your next challenge, this could be a great fit!
Key Skills Required:
Strong experience in Data Engineering and Cloud Data Pipelines.
Proficiency in at least 3 languages: Java, Python, Spark, Scala, SQL.
Hands-on with tools like Google BigQuery, Apache Kafka, Airflow, GCP Pub/Sub.
Knowledge of microservices architecture, REST APIs, and DevOps tools (Docker, GitHub Actions, Terraform).
Exposure to relational databases: MySQL, PostgreSQL, SQL Server.
Prior experience in an onshore/offshore model is a plus.
If this sounds like a match for your profile, reply with your updated resume or apply directly. Looking forward to connecting!
Best regards,
Mahesh Babu M
Senior Executive - Recruitment
maheshbabu.muthukannan@sacha.solutions
Posted 2 months ago
10.0 - 15.0 years
12 - 16 Lacs
Pune, Bengaluru
Work from Office
We are seeking a talented and experienced Kafka Architect with migration experience to Google Cloud Platform (GCP) to join our team. As a Kafka Architect, you will be responsible for designing, implementing, and managing our Kafka infrastructure to support our data processing and messaging needs, while also leading the migration of our Kafka ecosystem to GCP. You will work closely with our engineering and data teams to ensure seamless integration and optimal performance of Kafka on GCP.
Responsibilities:
Discovery, analysis, planning, design, and implementation of Kafka deployments on GKE, with a specific focus on migrating Kafka from AWS to GCP.
Design, architect, and implement scalable, high-performance Kafka architectures and clusters to meet our data processing and messaging requirements.
Lead the migration of our Kafka infrastructure from on-premises or other cloud platforms to Google Cloud Platform (GCP).
Conduct thorough discovery and analysis of existing Kafka deployments on AWS.
Develop and implement best practices for Kafka deployment, configuration, and monitoring on GCP.
Develop a comprehensive migration strategy for moving Kafka from AWS to GCP.
Collaborate with engineering and data teams to integrate Kafka into our existing systems and applications on GCP.
Optimize Kafka performance and scalability on GCP to handle large volumes of data and high throughput.
Plan and execute the migration, ensuring minimal downtime and data integrity.
Test and validate the migrated Kafka environment to ensure it meets performance and reliability standards.
Ensure Kafka security on GCP by implementing authentication, authorization, and encryption mechanisms.
Troubleshoot and resolve issues related to Kafka infrastructure and applications on GCP.
Ensure seamless data flow between Kafka and other data sources/sinks.
Implement monitoring and alerting mechanisms to ensure the health and performance of Kafka clusters.
Stay up to date with Kafka developments and GCP services to recommend and implement new features and improvements.
Requirements:
Bachelor's degree in Computer Science, Engineering, or a related field (Master's degree preferred).
Proven experience as a Kafka Architect or similar role, with a minimum of [5] years of experience.
Deep knowledge of Kafka internals and ecosystem, including Kafka Connect, Kafka Streams, and KSQL.
In-depth knowledge of Apache Kafka architecture, internals, and ecosystem components.
Proficiency in scripting and automation for Kafka management and migration.
Hands-on experience with Kafka administration, including cluster setup, configuration, and tuning.
Proficiency in Kafka APIs, including Producer, Consumer, Streams, and Connect.
Strong programming skills in Java, Scala, or Python.
Experience with Kafka monitoring and management tools such as Confluent Control Center, Kafka Manager, or similar.
Solid understanding of distributed systems, data pipelines, and stream processing.
Experience leading migration projects to Google Cloud Platform (GCP), including migrating Kafka workloads.
Familiarity with GCP services such as Google Kubernetes Engine (GKE), Google Cloud Storage, Google Cloud Pub/Sub, and BigQuery.
Excellent communication and collaboration skills.
Ability to work independently and manage multiple tasks in a fast-paced environment.
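By way of illustration only (not part of the posting), the sketch below shows the kind of authenticated, encrypted Kafka client configuration that the security requirements above refer to. It is a minimal Python example using the confluent-kafka client; the broker address, credentials, and topic name are invented placeholders, not values from this listing.

```python
# Illustrative only: a Kafka producer configured with SASL_SSL, i.e. an
# authenticated, encrypted client. All connection details are placeholders.
from confluent_kafka import Producer

producer = Producer({
    "bootstrap.servers": "kafka.example.internal:9093",  # hypothetical broker
    "security.protocol": "SASL_SSL",                      # encrypt traffic in transit
    "sasl.mechanisms": "PLAIN",                           # or SCRAM/OAUTHBEARER per cluster policy
    "sasl.username": "app-client",
    "sasl.password": "change-me",
})

def on_delivery(err, msg):
    # Surface delivery failures so a migration can verify no records were dropped.
    if err is not None:
        print(f"Delivery failed for {msg.key()}: {err}")

producer.produce("events.orders", key="order-123",
                 value=b'{"status":"created"}', callback=on_delivery)
producer.flush()
```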
Posted 2 months ago
2.0 - 4.0 years
4 - 6 Lacs
Kolkata, Mumbai, New Delhi
Work from Office
Job Overview: We are looking for a skilled and results-driven Java Developer with hands-on experience in Spring Boot, Apache Flink, and Kafka. The ideal candidate will be responsible for building and maintaining high-performance backend services and real-time data streaming applications. This position is open for immediate joiners and can be based either onsite in Chennai or remotely (WFH), depending on candidate preference.
Key Responsibilities:
Develop and maintain scalable backend systems using Java and Spring Boot.
Design and implement real-time data streaming applications using Apache Flink and Kafka.
Build and manage microservices and integrate them with APIs and messaging systems.
Collaborate with cross-functional teams to define, design, and deliver new features.
Ensure code quality, performance, and security in a production environment.
Participate in debugging, troubleshooting, and performance tuning of applications.
Must-Have Skills:
Minimum 5 years of experience in Java development with Spring Boot.
Strong hands-on experience with Apache Flink for stream processing.
Proficient in Apache Kafka and event-driven architecture.
Solid understanding of RESTful services and microservices architecture.
Strong problem-solving and debugging skills.
Preferred Skills:
Experience with Docker, Kubernetes, or other cloud-native technologies.
Familiarity with CI/CD tools and deployment automation.
Eligibility:
Must be available to join immediately.
Open to candidates based onsite in Chennai or working remotely (WFH).
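Purely as an illustrative aside, the sketch below shows the consume-and-process loop behind the kind of event-driven service this role describes. It uses the plain Kafka client in Python rather than the Java/Spring Boot/Flink stack the posting asks for, and the broker, topic, and consumer group names are assumed placeholders.

```python
# Illustrative sketch of a streaming service's consume-and-process loop.
# Broker address, topic, and group id are invented placeholders.
import json
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "payments-enricher",      # hypothetical consumer group
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["payments.raw"])

try:
    while True:
        msg = consumer.poll(1.0)           # wait up to 1s for a record
        if msg is None:
            continue
        if msg.error():
            print(f"Consumer error: {msg.error()}")
            continue
        event = json.loads(msg.value())
        # A real service would enrich/validate here and publish downstream.
        print(f"processed event {event.get('id')} at offset {msg.offset()}")
finally:
    consumer.close()
```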
Posted 2 months ago
3 - 6 years
20 - 27 Lacs
Pune
Remote
Data Acquisition & Web Application Developer
Experience: 3-6 Years
Salary: USD 1,851-2,962 / month
Preferred Notice Period: Within 30 Days
Shift: 10:00 AM to 7:00 PM IST
Opportunity Type: Remote
Placement Type: Permanent
(*Note: This is a requirement for one of Uplers' Clients)
Must-have skills: APIs, data acquisition, web scraping, Agile, Python
Good-to-have skills: Analytics, monitoring, stream processing, web application deployment, Node.js
GPRO Ltd (one of Uplers' clients) is looking for a Data Acquisition & Web Application Developer who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, we want to hear from you.
Job Title: Data Acquisition & Web Application Developer
About the Project: We are seeking a skilled full-stack developer to build a specialised web application designed to aggregate and present public information on individuals, such as company executives and leaders. This tool will serve as a comprehensive profile generator, pulling data from diverse online sources including news outlets, social media, and other platforms. The primary goal is to provide users with a centralised, easily navigable view of a person's online presence, latest news, and public information.
Project Overview: The core of this project involves developing a robust data acquisition layer capable of scraping and integrating information from various online sources. This data will then be presented through a user-friendly web interface. The application should allow users to input a person's name and receive an aggregated view of relevant public data.
Key Responsibilities:
Develop and Implement Data Acquisition Layer: Design and build systems to scrape and collect data from specified sources, including news websites (e.g., Bloomberg.com, Reuters, BBC.com, Financial Times), social media (e.g., X, LinkedIn), and media platforms (e.g., YouTube, podcasts).
Integrate with APIs: Utilize official APIs (e.g., Bloomberg data, Reuters, Financial Times, Google Finance) where available and prioritized. Evaluate and integrate with third-party scraping APIs (e.g., Apify, Oxylabs, SerpApi) as necessary, considering associated risks and subscription models.
Handle Hybrid Approach: Implement a strategy that leverages licensed APIs for premium sources while potentially using third-party scrapers for others, being mindful of terms of service and legal/ethical considerations. Direct scraping of highly protected sites like Bloomberg, Reuters, and FT should be avoided or approached with extreme caution using third-party services.
Design Data Storage and Indexing: Determine appropriate data storage solutions, considering the volume of data and its relevance over time. Implement indexing and caching mechanisms to ensure efficient search and retrieval of information, supporting near real-time data presentation.
Develop Web Application Front-End: Build a basic, functional front-end interface similar to the provided examples ("Opening Screen," "Person Profile"), displaying the aggregated information clearly.
Implement User Functionality: Enable users to input a person's name for searching, sort displayed outputs by date, click through links to access the original source of information, and navigate to a new search easily (e.g., via a tab).
Consider Stream Processing: Evaluate and potentially implement stream processing techniques for handling near real-time data acquisition and updates.
Ensure Scalability: Design the application to support a specified level of concurrent searches (estimated at 200 for the initial phase).
Build Business Informational Layer: Develop a component that tracks the usage of different data services (APIs, scrapers) for monitoring costs and informing future scaling decisions.
Technical Documentation: Provide clear documentation for the developed system, including data flows, API integrations, and deployment notes.
Required Skills and Experience:
Proven experience in web scraping and data acquisition from diverse online sources.
Strong proficiency in developing with APIs, including handling different authentication methods and data formats.
Experience with relevant programming languages and frameworks for web development and data processing (e.g., Python, Node.js).
Knowledge of database design and data storage solutions.
Familiarity with indexing and caching strategies for search applications.
Understanding of potential challenges in web scraping (e.g., anti-scraping measures, terms of service).
Experience in building basic web application front-ends.
Ability to consider scalability and performance in system design.
Strong problem-solving skills and ability to work independently or as part of a small team.
Experience working with foreign (western-based) startups and clients.
Ability to work in agile environments and to pivot fast.
Desirable Skills:
Experience with stream processing technologies.
Familiarity with deploying and managing web applications (though infrastructure design is flexible).
Experience with monitoring and analytics for application usage.
How to apply for this opportunity (easy 3-step process):
1. Click on Apply and register or log in on our portal.
2. Upload your updated resume and complete the screening form.
3. Increase your chances of getting shortlisted and meet the client for the interview!
About Our Client: A web app aggregating real-time info on individuals for financial services professionals.
About Uplers: Our goal is to make hiring and getting hired reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant product and engineering job opportunities and progress in their career. (Note: There are many more opportunities apart from this on the portal.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
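As a hedged illustration of the aggregation idea described above (not the client's actual design), the Python sketch below queries one assumed third-party search API per source and merges the results into a simple profile keyed by a person's name. The endpoint URL, request parameters, and response fields are invented for the example.

```python
# Minimal aggregation sketch: fetch news mentions from an assumed API and
# assemble a profile. Endpoint, parameters, and fields are assumptions.
import requests

def fetch_news_mentions(person: str, api_key: str) -> list:
    resp = requests.get(
        "https://api.example-search.com/v1/news",   # hypothetical endpoint
        params={"q": person, "sort": "date"},
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("articles", [])

def build_profile(person: str, api_key: str) -> dict:
    articles = fetch_news_mentions(person, api_key)
    return {
        "name": person,
        # Newest items first, matching the "sort by date" requirement.
        "latest_news": sorted(articles,
                              key=lambda a: a.get("published_at", ""),
                              reverse=True),
        "sources": sorted({a.get("source", "unknown") for a in articles}),
    }

if __name__ == "__main__":
    print(build_profile("Jane Doe", api_key="YOUR_KEY"))
```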
Posted 2 months ago
8 - 13 years
25 - 30 Lacs
Bengaluru
Work from Office
Education: A Bachelor's degree in Computer Science, Engineering (B.Tech, BE), or a related field such as MCA (Master of Computer Applications) is required for this role.
Experience: 8+ years in data engineering with a focus on building scalable and reliable data infrastructure.
Skills:
Language: Proficiency in Java, Python, or Scala. Prior experience in Oil & Gas, Titles & Leases, or Financial Services is a must-have.
Databases: Expertise in relational and NoSQL databases like PostgreSQL, MongoDB, Redis, and Elasticsearch.
Data Pipelines: Strong experience in designing and implementing ETL/ELT pipelines for large datasets.
Tools: Hands-on experience with Databricks, Spark, and cloud platforms.
Data Lakehouse: Expertise in data modeling, designing data lakehouses, and building data pipelines.
Modern Data Stack: Familiarity with the modern data stack and data governance practices.
Data Orchestration: Proficient in data orchestration and workflow tools.
Data Modeling: Proficient in modeling and building data architectures for high-throughput environments.
Stream Processing: Extensive experience with stream processing technologies such as Apache Kafka.
Distributed Systems: Strong understanding of distributed systems, scalability, and availability.
DevOps: Familiarity with DevOps practices, continuous integration, and continuous deployment (CI/CD).
Problem-Solving: Strong problem-solving skills with a focus on scalable data infrastructure.
Key Responsibilities: This is a role with high expectations of hands-on design and development.
Design and develop systems for ingestion, persistence, consumption, ETL/ELT, and versioning for different data types (e.g., relational, document, geospatial, graph, time series) in transactional and analytical patterns.
Drive the development of applications related to data extraction, especially from formats like TIFF and PDF, including OCR and data classification/categorization.
Analyze and improve the efficiency, scalability, and reliability of our data infrastructure.
Assist in the design and implementation of robust ETL/ELT pipelines for processing large volumes of data.
Collaborate with cross-functional scrum teams to respond quickly and effectively to business needs.
Work closely with data scientists and analysts to define data requirements and develop comprehensive data solutions.
Implement data quality checks and monitoring to ensure data integrity and reliability across all systems.
Develop and maintain data models, schemas, and documentation to support data-driven decision-making.
Manage and scale data infrastructure on cloud platforms, leveraging cloud-native tools and services.
Benefits:
Salary: Competitive and aligned with local standards.
Performance Bonus: According to company policy.
Benefits: Includes medical insurance and group term life insurance.
Continuous learning and development. 10 recognized public holidays. Parental leave.
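For illustration only, here is a minimal PySpark sketch of the kind of ETL/ELT step the role involves: land raw JSON, deduplicate and clean it, and write a partitioned Parquet table. The paths, column names, and partitioning scheme are assumptions for the example, not details from the posting.

```python
# Hedged ELT sketch: read raw JSON, clean it, write partitioned Parquet.
# Source/sink paths and column names below are invented placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("leases-elt").getOrCreate()

raw = spark.read.json("s3://example-raw/leases/")           # hypothetical source
clean = (
    raw.dropDuplicates(["lease_id"])                         # remove duplicate records
       .withColumn("effective_date", F.to_date("effective_date"))
       .filter(F.col("lease_id").isNotNull())                # basic data quality check
)
(clean.write
      .mode("overwrite")
      .partitionBy("effective_date")
      .parquet("s3://example-curated/leases/"))              # hypothetical sink
```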
Posted 2 months ago
4 - 6 years
15 - 22 Lacs
Gurugram
Hybrid
The Job
We are looking for a Sr. Data Engineer responsible for designing, developing, and supporting real-time core data products for TechOps applications. You will work with various teams to understand business requirements, reverse engineer existing data products, and build state-of-the-art, performant data pipelines. AWS is the cloud of choice for these pipelines, and a solid understanding of, and experience in, architecting, developing, and maintaining real-time data pipelines in AWS is highly desired.
Design, architect, and develop data products that provide real-time core data for applications.
Production support and operational optimization of data projects, including but not limited to incident and on-call support, performance optimization, high availability, and disaster recovery.
Understand business requirements by interacting with business users and/or reverse engineering existing legacy data products.
Mentor and train junior team members and share architecture, design, and development knowledge of data products and standards.
Good understanding and working knowledge of distributed databases and pipelines.
Your Profile
An ideal candidate will have 4+ years of experience in real-time streaming along with hands-on experience in Spark, Kafka, Apache Flink, Java, big data technologies, AWS, and MSK (Managed Streaming for Apache Kafka).
AWS distributed database technologies, including Managed Streaming for Kafka, Managed Apache Flink, DynamoDB, S3, and Lambda.
Experience designing and developing real-time data products with Apache Flink (Scala experience can be considered).
Experience with Python and PySpark.
SQL code development.
AWS Solutions Architecture experience for data products is required.
Manage and troubleshoot real-time data pipelines in the AWS Cloud.
Experience with high availability and disaster recovery solutions for real-time data streaming.
Excellent analytical, problem-solving, and communication skills.
Must be self-motivated and able to work independently.
Ability to understand existing SQL and code and user requirements and translate them into modernized data products.
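As a non-authoritative sketch of the real-time pattern this role describes, the Python snippet below consumes from a Kafka (MSK) topic and writes each record into DynamoDB with boto3. The broker, topic, table, and region names are placeholders, and a production pipeline would add batching, retries, dead-lettering, and monitoring.

```python
# Illustrative sink stage of a real-time pipeline: Kafka (MSK) -> DynamoDB.
# All connection details are placeholders, not values from the posting.
import json
import boto3
from confluent_kafka import Consumer

table = boto3.resource("dynamodb", region_name="us-east-1").Table("core-data-products")

consumer = Consumer({
    "bootstrap.servers": "b-1.msk.example.amazonaws.com:9092",  # hypothetical MSK broker
    "group.id": "core-data-sink",
    "auto.offset.reset": "latest",
})
consumer.subscribe(["core.events"])

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        record = json.loads(msg.value())
        table.put_item(Item=record)   # upsert keyed by the table's partition key
finally:
    consumer.close()
```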
Posted 2 months ago