3311 Big Data Jobs - Page 21

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

8.0 - 13.0 years

11 - 16 Lacs

Noida

Work from Office

Essential Skills/Basic Qualifications:
- 8+ years of DevOps experience, with in-depth knowledge of DevOps tooling (TeamCity, Nolio/Jenkins, PowerShell scripting, GitLab) plus any add-on technologies
- Excellent experience in managing environments and DevOps tools
- Knowledge and proven experience in at least one scripting language (preferably PowerShell, Python, or Bash)
- Knowledge and strong practical experience with CI/CD pipelines: Git, GitLab, deployment tools, Nexus, JIRA, TeamCity/Jenkins, Infrastructure-as-Code (Chef or alternatives)
- Good knowledge of Unix systems and a basic understanding of Windows servers
- Experience troubleshooting database issues
- Knowledge of middleware tools such as Solace and MQ
- Good collaboration skills; excellent verbal and written communication skills
- Open to working the UK shift (10 AM to 9 PM IST, with 8 hours of productive work)

Desirable Skills/Preferred Qualifications:
- Good understanding of ITIL concepts (IPC, ServiceFirst)
- Experience with Ansible or other configuration management tools
- Experience with monitoring/observability solutions: Elastic stack, Grafana, AppDynamics (AppD)
- Experience with AWS or another major public cloud service, as well as private cloud solutions and Big Data (Hadoop)
- Knowledge and practical experience with any database: MS SQL Server, Oracle, MongoDB, or another DBMS
- Strong multi-tasking skills and the ability to re-prioritize activities based on ever-changing external requirements
- Stress resistance and the ability to work and deliver results under pressure from numerous parties
- Creative thinking and problem solving, with a mindset oriented towards continuous improvement and delivering service of excellent quality

As a DevOps Engineer, responsibilities include:
- Working with technical leads and developers to understand application architecture and business logic, and contributing to deployment and integration strategies
- Environment management functions (building new environments, refreshing existing ones)
- Providing operational stability for the key environments
- Diagnosing and resolving environment defects found during SIT and UAT test phases
- Engineering DevOps project work, e.g., optimization of DevOps delivery (CI/CD)
- Contributing to the design of lightning-fast DevOps processes, including automated build, release, deployment, and monitoring, to reduce the overall time to market of new and existing software components
- Contributing to the delivery of complex projects in collaboration with global teams across Barclays, to develop new or enhance existing systems
- Strong appreciation of development and DevOps best practices

Education Qualification: Bachelor's degree in Computer Science/Engineering or a related field, or an equivalent professional qualification.

Mandatory Competencies: DevOps/Configuration Mgmt - Jenkins; DevOps/Configuration Mgmt - GitLab, GitHub, Bitbucket; Cloud - Azure - Azure Bicep, ARM Templates, PowerShell; Development Tools and Management - CI/CD; Operating System - Unix; Database - SQL Server - SQL Packages; Database - Oracle - Database Design; Beh - Communication and collaboration.
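
For flavor, a minimal sketch of the kind of post-deployment health check a role like this typically automates, written in Python; the endpoint URL, timings, and output strings are hypothetical placeholders, not anything specified in the posting.

```python
"""Post-deployment smoke check: poll a service health endpoint until it
reports healthy or a timeout expires. URL and timings are hypothetical."""
import sys
import time
import urllib.error
import urllib.request

HEALTH_URL = "https://app.example.internal/health"  # hypothetical endpoint
TIMEOUT_S = 300   # give the deployment five minutes to stabilise
INTERVAL_S = 10   # poll every ten seconds

def is_healthy(url: str) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

def main() -> int:
    deadline = time.monotonic() + TIMEOUT_S
    while time.monotonic() < deadline:
        if is_healthy(HEALTH_URL):
            print("service healthy")
            return 0
        time.sleep(INTERVAL_S)
    print("service failed to become healthy", file=sys.stderr)
    return 1

if __name__ == "__main__":
    sys.exit(main())
```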

Posted 2 weeks ago

Apply

12.0 - 15.0 years

70 - 150 Lacs

Bengaluru

Work from Office

Data Engineering Manager
Experience: 12 - 15 years | Salary: Competitive | Preferred Notice Period: Within 30 days | Shift: 10:00 AM to 7:00 PM IST | Opportunity Type: Onsite (Bengaluru) | Placement Type: Permanent
(*Note: This is a requirement for one of Uplers' clients.)
Must-have skills: Airflow, Hadoop, Kafka, Python, Spark, SQL, ETL

onequince.com (one of Uplers' clients) is looking for a Data Engineering Manager who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, then we want to hear from you.

Responsibilities:
- Manage and grow a cross-functional data-engineering team responsible for both data products and platform
- Mentor a group of highly skilled junior and senior engineers by providing technical management, guidance, coaching, best practices, and principles
- Actively participate in the design, development, delivery, and maintenance of data products and platform
- Own OKR planning, resource planning, execution, and quality of the data products and tools delivered by the group
- Lead the group to build sophisticated products on a cutting-edge cloud-native technology stack, used by internal customers such as data analysts, ML engineers, data scientists, and management, as well as external stakeholders
- Work closely with business stakeholders such as the Product Management team and other engineering teams to drive the execution of multiple business strategies and technologies
- Act as the point of contact for TCO (total cost of ownership) for the Data Platform (ingestion, processing, extraction, and governance)
- Manage and drive production defects and stability improvements to resolution
- Ensure operational efficiency and actively participate in organizational initiatives with the objective of ensuring the highest customer value
- Tailor processes to help manage time-sensitive issues and bring them to appropriate closure

Must Have:
- First-hand exposure to managing large-scale data ingestion, processing, extraction, and governance processes
- Experience with Big Data technologies (e.g., Apache Hadoop, Spark, Hive, Presto)
- Experience with message queues (e.g., Apache Kafka, Kinesis, RabbitMQ)
- Experience with stream-processing technologies (e.g., Spark Streaming, Flink); see the sketch below
- Proficiency in at least one of the following programming languages: Python, Java, or Scala
- Experience building highly available, fault-tolerant REST services, preferably for data ingestion or serving
- Good understanding of traditional data-warehousing fundamentals
- Good exposure to SQL (T-SQL/PL-SQL/Spark SQL/HiveQL)
- Experience integrating data across multiple data sources
- Good understanding of distributed computing principles
- Strong analytical/quantitative skills and the ability to connect data with business outcomes

Good To Have:
- Experience with MPP data warehouses (e.g., Snowflake, Redshift)
- Experience with any NoSQL storage (e.g., Redis, DynamoDB, Memcached)

How to apply for this opportunity (easy 3-step process):
1. Click on Apply and register or log in on our portal
2. Upload your updated resume and complete the screening form
3. Increase your chances of being shortlisted and meet the client for the interview

About Our Client: Quince is an affordable luxury brand that sells high-quality fashion and home goods at radically low prices, direct from the factory floor. The company has pioneered a manufacturer-to-consumer (M2C) retail model in which factories produce inventory on a near just-in-time basis and ship their goods directly to consumers' doorsteps, cutting out financial and environmental waste. Quince is headquartered in San Francisco, CA, and partners with more than 50 top manufacturers around the world. Most recently, Quince completed a $77 million Series B upround raise, led by Wellington Management with participation from GGV Capital and continuing participation from Basis Set Ventures, Insight Partners, Lugard Road, and 8VC.

About Uplers: Uplers is the #1 hiring platform for SaaS companies, designed to help you hire top product and engineering talent quickly and efficiently. Our end-to-end AI-powered platform combines artificial intelligence with human expertise to connect you with the best engineering talent from India. With over 1M deeply vetted professionals, Uplers streamlines the hiring process, reducing lengthy screening times and ensuring you find the perfect fit. Companies like GitLab, Twilio, TripAdvisor, and Airbnb trust Uplers to scale their tech and digital teams effectively and cost-efficiently. Experience a simpler, faster, and more reliable hiring process with Uplers today.
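
As an illustration of the Kafka-plus-Spark-Streaming stack this posting names, a minimal PySpark Structured Streaming sketch; the broker address, topic, and storage paths are hypothetical, and the spark-sql-kafka connector package must be on the Spark classpath.

```python
"""Read events from Kafka and land them in a data-lake path as Parquet."""
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("ingest-events").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "events")                     # hypothetical topic
    .load()
    .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)", "timestamp")
)

query = (
    events.writeStream.format("parquet")
    .option("path", "s3a://datalake/raw/events/")            # hypothetical sink
    .option("checkpointLocation", "s3a://datalake/chk/events/")
    .trigger(processingTime="1 minute")  # micro-batch every minute
    .start()
)
query.awaitTermination()
```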

Posted 2 weeks ago

Apply

2.0 - 5.0 years

3 - 3 Lacs

Pune

Remote

We are seeking a highly skilled Analyst - Big Data Developer to join our dynamic team. The ideal candidate will have extensive experience with big data technologies and a strong background in developing and optimizing data integration frameworks and applications. You will be responsible for designing, implementing, and maintaining robust data solutions in a cloud environment.

Required Skills and Qualifications:
- Education: Bachelor's degree in Engineering, Computer Science, or a related field, or an equivalent qualification
- Experience: 2 to 5 years in a recognized global IT services or consulting company, with hands-on expertise in big data technologies
- Big Data Technologies: Over 2 years of experience with the Hadoop ecosystem, Apache Spark, and associated tools; experience with modern big data technologies and frameworks such as Spark, Impala, and Kafka
- Programming: Proficiency in Java, Scala, and Python, with the ability to code in multiple languages
- Cloud Platforms: Experience with cloud platforms, preferably GCP
- Linux Environment: At least 2 years working in a Linux environment, including system tools, scripting languages, and integration frameworks
- Schema Design: Extensive experience applying schema design principles and best practices to big data technologies (see the sketch below)
- Hadoop Distributions: Knowledge of Hadoop distributions such as EMR, Cloudera, or Hortonworks

Preferred Skills:
- Experience with additional big data tools and technologies
- Certification in relevant big data or cloud technologies
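
Since the posting stresses schema design on Spark, a small schema-first PySpark batch sketch: the schema is declared explicitly rather than inferred. Field names and GCS paths are made up for illustration.

```python
"""Schema-first PySpark batch job: explicit schema, daily aggregation."""
from pyspark.sql import SparkSession
from pyspark.sql.types import (LongType, StringType, StructField, StructType,
                               TimestampType)

# Declaring the schema up front catches drift early and avoids costly inference.
schema = StructType([
    StructField("order_id", StringType(), nullable=False),
    StructField("amount_cents", LongType(), nullable=True),
    StructField("created_at", TimestampType(), nullable=True),
])

spark = SparkSession.builder.appName("orders-batch").getOrCreate()

orders = spark.read.schema(schema).json("gs://bucket/raw/orders/")  # hypothetical path
daily = orders.groupBy(orders.created_at.cast("date").alias("day")).sum("amount_cents")
daily.write.mode("overwrite").parquet("gs://bucket/curated/orders_daily/")
```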

Posted 2 weeks ago

Apply

5.0 - 10.0 years

0 - 0 Lacs

Pune, Gurugram, Bengaluru

Hybrid

Data Engineer
Experience: 5 to 9 years
Location: Pune
Required skillset: Candidates should have data engineering experience with Spark. The language can be any of Scala, Python, or Java; only experienced Spark developers can be trained on Scala.

Posted 2 weeks ago

Apply

4.0 - 6.0 years

9 - 13 Lacs

Bengaluru

Work from Office

WHAT THE ROLE OFFERS:
- Develop an understanding of the customer deployment environment
- Own and drive high-quality resolution of reported product technical issues
- Provide high-quality defect fixes and new features
- Ensure timely release of high-quality product patches
- Contribute to patch process improvement initiatives
- Deliver an excellent customer experience
- Provide on-call deployment support for the Network Product family
- Technically lead and groom junior team members
- Communicate high-impact product issues and customer experiences to the development team in order to improve the quality of future product releases
- Review and evaluate designs to ensure long-term adaptability and sustainability
- Champion processes that ensure very high-quality product releases
- Encourage and contribute to innovation that is aligned with the business
- Provide R&D input into the product/suite functional roadmap and represent the team in the wider architect community

WHAT YOU NEED TO SUCCEED:
- Seven or more years of software development experience, including Java, J2EE, JBoss, JMS, JMX, JavaScript, Spring, Hibernate, XML, SQL, Oracle, Perl, VBS, and shell scripting languages
- Strong troubleshooting, problem-solving, and analytical skills, with the ability to clearly communicate (both written and verbal) and share solutions with CPE and development engineers, support, and customers
- Ability to multi-task and work in a high-pressure environment
- Experience working directly with challenging, demanding support engineers and customers
- Experience with network management, network devices (routers, switches, firewalls, etc.), and networking technologies (SNMP, ICMP, HSRP, VRRP, CDP, LLDP, etc.)
- Development and troubleshooting experience with Linux and Windows server operating systems
- Strong teamwork and excellent communication skills
- Excellent analytical and problem-solving skills
- Preferred: experience with enterprise product requirements such as security, high scale, multi-tenancy, high availability, and supportability
- Desirable: expertise in Docker, Kubernetes, and microservices-based architecture

Education and Experience: Bachelor's or Master's degree in Computer Science, Information Systems, or equivalent. Typically 7+ years of experience.

Posted 2 weeks ago

Apply

12.0 - 21.0 years

8 - 12 Lacs

Chennai

Work from Office

Project Overview: The candidate will work on the Model Development as a Service (MDaaS) initiative, which focuses on scaling machine learning techniques for exception classification, early warning signals, data quality control, model surveillance, and missing value imputation. The project involves applying advanced ML techniques to large datasets and integrating them into financial analytics systems.

Key Responsibilities:
- Set up data pipelines: configure storage in cloud-based compute environments and repositories for large-scale data ingestion and processing
- Develop and optimize machine learning models: implement Machine Learning for Exception Classification (MLEC) to classify financial exceptions
- Conduct missing value imputation using statistical and ML-based techniques (see the sketch below)
- Develop early warning signals for detecting anomalies in multivariate/univariate time-series financial data
- Build model surveillance frameworks to monitor financial models
- Apply unsupervised clustering techniques for market segmentation in securities lending
- Develop advanced data quality control frameworks using TensorFlow-based validation techniques
- Experimentation and validation: evaluate ML algorithms using cross-validation and performance metrics; implement data science best practices and document findings
- Data quality and governance: develop QC mechanisms to ensure high-quality data processing and model outputs

Required Skillset:
- Strong expertise in machine learning and AI (supervised and unsupervised learning)
- Proficiency in Python, TensorFlow, SQL, and Jupyter notebooks
- Deep understanding of time-series modeling, anomaly detection, and risk analytics
- Experience with big data processing and financial data pipelines
- Ability to deploy scalable ML models in a cloud environment

Deliverables & Timeline:
- Machine Learning for Exception Classification (MLEC): working code and documentation
- Missing value imputation: implementation and validation reports
- Early warning signals: data onboarding and anomaly detection models
- Model surveillance: fully documented monitoring framework
- Securities lending: clustering algorithms for financial markets
- Advanced data QC: development of a general-purpose QC library

Preferred Qualifications:
- Prior experience in investment banking, asset management, or trading desks
- Strong foundation in quantitative finance and financial modeling
- Hands-on experience with TensorFlow, PyTorch, and AWS/GCP AI services
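
To make the imputation deliverable concrete, a toy example of median imputation with scikit-learn; the column names and values are invented, and production work would compare several strategies and validate against held-out data.

```python
"""Median imputation of missing numeric values via scikit-learn."""
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

df = pd.DataFrame({
    "price":  [101.2, np.nan, 99.8, 100.5],
    "volume": [5000, 5200, np.nan, 4900],
})

imputer = SimpleImputer(strategy="median")  # median is robust to outliers
imputed = pd.DataFrame(imputer.fit_transform(df), columns=df.columns)
print(imputed)
```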

Posted 2 weeks ago

Apply

5.0 - 6.0 years

9 - 14 Lacs

Noida

Work from Office

- Solid understanding of object-oriented programming and design patterns
- 5 to 6 years of strong experience with big data
- Comfortable working with large data volumes, with a firm understanding of logical data structures and analysis techniques
- Experience with Big Data technologies such as HDFS, Hive, HBase, Apache Spark, PySpark, and Kafka (see the sketch below)
- Proficient with code versioning and tracking tools such as Git, Bitbucket, and Jira
- Strong systems analysis, design, and architecture fundamentals, unit testing, and other SDLC activities
- Experience with Linux shell scripting
- Demonstrated analytical and problem-solving skills; excellent troubleshooting and debugging skills
- Strong communication and aptitude
- Ability to write reliable, manageable, and high-performance code
- Good knowledge of database principles, practices, and structures, including SQL development experience, preferably with Oracle
- Understanding of the fundamental design principles behind a scalable application
- Basic Unix OS and scripting knowledge

Good to have:
- Financial markets background (preferable but not a must)
- Experience with Jenkins, Scala, and Autosys
- Familiarity with build tools such as Maven and continuous integration
- Working knowledge of Docker, Kubernetes, OpenShift, or Mesos
- Basic experience with data preparation tools
- Experience with CI/CD build pipelines

Mandatory Competencies: Big Data - HDFS; Big Data - Hive; Big Data - Hadoop; Big Data - PySpark; Beh - Communication; Data Science and Machine Learning - Apache Spark.
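
A small sketch of the PySpark-on-Hive pattern implied by the stack above: read a Hive table, aggregate, and write the result back as a managed table. The database, table, and column names are hypothetical.

```python
"""Read a Hive table with PySpark, aggregate, and write a summary table."""
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder.appName("trade-summary")
    .enableHiveSupport()  # lets Spark read/write Hive metastore tables
    .getOrCreate()
)

trades = spark.table("finance.trades")  # hypothetical Hive table
summary = (
    trades.groupBy("instrument")
    .agg(F.count("*").alias("n_trades"),
         F.sum("notional").alias("total_notional"))
)
summary.write.mode("overwrite").saveAsTable("finance.trade_summary")
```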

Posted 2 weeks ago

Apply

4.0 - 8.0 years

20 - 25 Lacs

Noida

Work from Office

Key Responsibilities:
- Senior professional-level, hands-on enterprise architect/solution leader with deep experience in data engineering technologies and public clouds such as AWS, Azure, and GCP
- Engage with client managers to understand their current state and business problems/opportunities, conceptualize solution options, discuss and finalize with client stakeholders, help bootstrap a team, and deliver PoCs/PoTs/MVPs
- Help build overall competency within teams working on related client engagements and the rest of Iris in Data & Analytics, including Data Engineering, Analytics, Data Science, AI/ML, MLOps and DataOps, Data Governance, and related solution patterns, platforms, tools, and technology
- Stay up to date on best practices, new and emerging tools, and trends in data and analytics
- Focus on building practice competencies in Data & Analytics

Professional Experience and Qualifications:
- Bachelor's or Master's degree in a software discipline
- Experience with data architecture and the implementation of large-scale, enterprise-level data lake/data warehousing, Big Data, and analytics applications
- A background in data engineering, having led multiple data engineering engagements in terms of solutioning, architecture, and delivery
- Excellent English communication, both written and verbal
- Technology: lifecycle experience with tools such as AWS Glue, Redshift, Azure Data Lake, Databricks, Snowflake, etc.; database and programming experience on Spark (Spark SQL, PySpark, Python, etc.)

Posted 2 weeks ago

Apply

7.0 - 12.0 years

16 - 20 Lacs

Noida

Work from Office

Technical Expertise (Must Have):
- Experience with the Emma orchestration engine is a must
- Proficient in Python programming, with experience in agentic platforms from pro-code (e.g., AutoGen, Semantic Kernel, LangGraph) to low-code (e.g., Crew.ai, EMA.ai)
- Hands-on experience with Azure OpenAI and related tools and services
- Fluent in GenAI packages such as LlamaIndex and LangChain

Soft Skills:
- Excellent communication and collaboration skills, with the ability to work effectively with stakeholders across business and technical teams
- Strong problem-solving and analytical skills; attention to detail
- Ability to work with teams in a dynamic, fast-paced environment

Experience:
- 7+ years of experience in software development, with 3+ years in AI/ML or Generative AI projects
- Demonstrated experience deploying and managing AI applications in production environments

Key Responsibilities:
- Design, develop, and implement complex Generative AI solutions with high accuracy for complex use cases
- Utilize agentic platforms from pro-code (e.g., AutoGen, Semantic Kernel, LangGraph) to low-code (e.g., Crew.ai, EMA.ai)
- Leverage Azure OpenAI ecosystems and tooling, including training models, advanced prompting, the Assistant API, and agent curation
- Write efficient, clean, and maintainable Python code for AI applications
- Develop and deploy RESTful APIs using frameworks like Flask or Django for model integration and consumption (see the sketch below)
- Fine-tune and optimize AI models for business use cases

Mandatory Competencies: Data Science and Machine Learning - Gen AI, Python, AI/ML, Azure ML; Programming Language - Python - Flask, Django; Cloud - Azure - Azure Data Factory (ADF), Azure Databricks, Azure Data Lake Storage, Event Hubs, HDInsight; Beh - Communication and collaboration.
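
A minimal sketch of the "RESTful API for model consumption" pattern the posting describes, using Flask; the generate() function is a stub standing in for a real model client call (for example, an Azure OpenAI SDK invocation), which is deliberately omitted here.

```python
"""Flask endpoint wrapping a (stubbed) text-generation model."""
from flask import Flask, jsonify, request

app = Flask(__name__)

def generate(prompt: str) -> str:
    # Stub: replace with a real model client call in production.
    return f"echo: {prompt}"

@app.post("/v1/generate")
def generate_endpoint():
    payload = request.get_json(silent=True) or {}
    prompt = payload.get("prompt", "")
    if not prompt:
        return jsonify(error="missing 'prompt'"), 400
    return jsonify(completion=generate(prompt))

if __name__ == "__main__":
    app.run(port=8000, debug=True)
```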

Posted 2 weeks ago

Apply

8.0 - 10.0 years

25 - 40 Lacs

Pune

Hybrid

Key Skills: Solution Architecture, Google Cloud, Python, Java, Big Data, Apache Spark, Agile Delivery, Scrum, Kanban, CI/CD, DevOps, Cloud Solutions, Software Development, Testing, Production Support

Roles & Responsibilities:
- Own and manage end-to-end technical deliveries of products within your agile team
- Lead technical deliveries for the agile team and the product
- Manage activities related to design and development (CTB) as well as production processing support (RTB)
- Provide support across the full delivery lifecycle, including software development, testing, and operational support, adapting to demand
- Create robust technical designs and development strategies for new components to meet requirements
- Develop test plans, including unit and integration tests within automated test environments, to ensure code quality
- Collaborate with Ops, Dev, and Test engineers to identify and address operational issues (e.g., performance, operator intervention, alerting, design defects)
- Ensure service resilience, sustainability, and recovery time objectives are met for all software solutions
- Actively drive mandatory exercises related to resilience, recovery, and service management
- Ensure compliance with end-to-end controls for products and data, including effective risk and control management (non-financial risks, compliance, and conduct responsibilities)
- Adhere to standard processes and ensure compliance with relevant regulations and policies

Experience Requirements:
- 8-10 years of track record designing and developing complex products, both on cloud and on-premises, including solution architecture, design, build, testing, and production
- Experience designing and implementing scalable solutions on Google Cloud
- Proficiency in Python or a mainstream programming language such as Java
- Good understanding of Big Data technologies such as Apache Spark and related technologies
- Experience with Agile delivery methodologies (e.g., Scrum, Kanban)
- Participation in continuous improvement and transformation towards Agile, DevOps, and CI/CD, and in improving productivity
- Excellent communication and interpersonal skills, demonstrating teamwork and collaboration

Education: B.Tech/M.Tech (dual), B.Tech, or M.Tech.

Posted 2 weeks ago

Apply

7.0 - 10.0 years

8 - 12 Lacs

Bengaluru

Work from Office

Education Qualification: BE/B.Tech
Requirement: Immediate or maximum 15 days

Job Description: Big Data Developer (Hadoop/Spark/Kafka)
- This role is ideal for an experienced Big Data developer who is confident in taking complete ownership of the software development life cycle, from requirement gathering to final deployment.
- The candidate will be responsible for engaging with stakeholders to understand the use cases, translating them into functional and technical specifications (FSD & TSD), and implementing scalable, efficient big data solutions.
- A key part of this role involves working across multiple projects, coordinating with QA/support engineers for test case preparation, and ensuring deliverables meet high-quality standards.
- Strong analytical skills are necessary for writing and validating SQL queries, along with developing optimized code for data processing workflows.
- The ideal candidate should also be capable of writing unit tests and maintaining documentation to ensure code quality and maintainability.
- The role requires hands-on experience with the Hadoop ecosystem, particularly Spark (including Spark Streaming), Hive, Kafka, and shell scripting.
- Experience with workflow schedulers like Airflow is a plus (see the sketch below), and working knowledge of cloud platforms (AWS, Azure, GCP) is beneficial.
- Familiarity with Agile methodologies will help in collaborating effectively in a fast-paced team environment.
- Job scheduling and automation via shell scripts, and the ability to optimize performance and resource usage in a distributed system, are critical.
- Prior experience in performance tuning and writing production-grade code will be valued.
- The candidate must demonstrate strong communication skills to effectively coordinate with business users, developers, and testers, and to manage dependencies across teams.

Key Skills Required:
- Must have: Hadoop, Spark (core & streaming), Hive, Kafka, shell scripting, SQL, TSD/FSD documentation
- Good to have: Airflow, Scala, cloud (AWS/Azure/GCP), Agile methodology
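
For the Airflow item above, a minimal daily DAG sketch: a Spark job followed by a data-quality check, both run via shell commands. The DAG id, job paths, and commands are hypothetical, and the schedule argument assumes Airflow 2.4 or later.

```python
"""Daily Airflow DAG: run a Spark ingest job, then a data-quality check."""
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_ingest",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,  # do not backfill missed runs
) as dag:
    ingest = BashOperator(
        task_id="spark_ingest",
        bash_command="spark-submit /jobs/ingest.py",  # hypothetical job
    )
    validate = BashOperator(
        task_id="dq_check",
        bash_command="python /jobs/dq_check.py",      # hypothetical check
    )
    ingest >> validate  # run the quality check only after ingest succeeds
```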

Posted 2 weeks ago

Apply

7.0 - 9.0 years

6 - 10 Lacs

Chennai

Work from Office

As a Technical Lead - Cloud Data Platform (AWS) at Incedo, you will be responsible for designing, deploying, and maintaining cloud-based data platforms on AWS. You will work with data engineers, data scientists, and business analysts to understand business requirements and design scalable, reliable, and cost-effective solutions that meet those requirements.

Roles & Responsibilities:
- Designing, developing, and deploying cloud-based data platforms using Amazon Web Services (AWS)
- Integrating and processing large amounts of structured and unstructured data from various sources
- Implementing and optimizing ETL processes and data pipelines
- Developing and maintaining security and access controls
- Collaborating with other teams to ensure the consistency and integrity of data
- Troubleshooting and resolving data platform issues

Technical Skills Requirements:
- In-depth knowledge of AWS services and tools such as AWS Glue, Amazon Redshift, and AWS Lambda (see the sketch below)
- Experience building scalable and reliable data pipelines using AWS services, Apache Spark, and related big data technologies
- Familiarity with cloud-based infrastructure and deployment, specifically on AWS
- Strong knowledge of programming languages such as Python, Java, and SQL
- Excellent communication skills, with the ability to convey complex technical information to non-technical stakeholders clearly and concisely
- Understanding of and alignment with the company's long-term vision; openness to new ideas and a willingness to learn and develop new skills; ability to work well under pressure and manage multiple tasks and priorities

Qualifications:
- 7-9 years of work experience in a relevant field
- B.Tech/B.E/M.Tech or MCA degree from a reputed university; a computer science background is preferred
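
One way the named AWS services commonly fit together, sketched with boto3: a Lambda handler that kicks off an AWS Glue ETL job. The Glue job name and argument are hypothetical.

```python
"""Lambda handler that starts an AWS Glue ETL job via boto3."""
import boto3

glue = boto3.client("glue")

def handler(event, context):
    run = glue.start_job_run(
        JobName="nightly-etl",  # hypothetical Glue job name
        Arguments={"--run_date": event.get("date", "today")},
    )
    # Return the run id so callers can poll job status downstream.
    return {"JobRunId": run["JobRunId"]}
```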

Posted 2 weeks ago

Apply

4.0 - 5.0 years

2 - 6 Lacs

Kozhikode

Work from Office

Key Responsibilities:
- Conduct feature engineering, data analysis, and data exploration to extract valuable insights.
- Develop and optimize machine learning models to achieve high accuracy and performance.
- Design and implement deep learning models, including Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN), and reinforcement learning techniques.
- Handle real-time imbalanced datasets and apply appropriate techniques to improve model fairness and robustness (see the sketch below).
- Deploy models in production environments and ensure continuous monitoring, improvement, and updates based on feedback.
- Collaborate with cross-functional teams to align ML solutions with business goals.
- Utilize fundamental statistical knowledge and mathematical principles to ensure the reliability of models.
- Bring in the latest advancements in ML and AI to drive innovation.

Requirements:
- 4-5 years of hands-on experience in machine learning and deep learning.
- Strong expertise in feature engineering, data exploration, and data preprocessing.
- Experience with imbalanced datasets and techniques to improve model generalization.
- Proficiency in Python, TensorFlow, scikit-learn, and other ML frameworks.
- Strong mathematical and statistical knowledge with problem-solving skills.
- Ability to optimize models for high accuracy and performance in real-world scenarios.

Preferred Qualifications:
- Experience with Big Data technologies (Hadoop, Spark, etc.).
- Familiarity with containerization and orchestration tools (Docker, Kubernetes).
- Experience automating ML pipelines with MLOps practices.
- Experience in model deployment using cloud platforms (AWS, GCP, Azure) or MLOps tools.
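
One standard technique for the imbalanced-data requirement, sketched with scikit-learn class weighting on synthetic data; real work would also evaluate resampling and decision-threshold tuning.

```python
"""Class weighting for an imbalanced binary classification problem."""
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic data with a rare (5%) positive class.
X, y = make_classification(n_samples=5000, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# class_weight="balanced" penalises mistakes on the rare class more heavily.
clf = RandomForestClassifier(class_weight="balanced", random_state=0)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```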

Posted 2 weeks ago

Apply

8.0 - 13.0 years

22 - 37 Lacs

Chennai, Bengaluru

Hybrid

Experience:
- 8+ years of experience in data architecture and data engineering roles
- Proven experience leading large-scale data migration projects, preferably to cloud environments (Alibaba Cloud, AWS, Azure, or GCP)
- 3+ years of hands-on experience with Alibaba Cloud's DataWorks platform or similar data management tools
- Strong background in data modeling, ETL design, and data integration across various platforms

Technical Skills:
- Deep understanding of cloud architecture, particularly Alibaba Cloud's ecosystem (MaxCompute, DataWorks, OSS, etc.)
- Proficiency in SQL and in Python, Java, or Scala for data engineering tasks
- Familiarity with data processing engines such as Apache Spark, Flink, or other big data tools
- Experience with data governance tools and practices, including data cataloging, data lineage, and metadata management
- Strong understanding of data integration and movement between different storage systems (databases, data lakes, data warehouses)
- Strong understanding of API integration for data ingestion, including RESTful services and streaming data
- Experience with data migration strategies, tools, and frameworks for moving data from legacy on-premises systems to cloud-based solutions (see the sketch below)

Communication & Leadership:
- Excellent communication skills to collaborate with both technical teams and business stakeholders
- Proven ability to lead, mentor, and guide technical teams during complex projects
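
A generic, driver-agnostic sketch of the batched "lift" step in such a migration: stream rows from a legacy source to a target in fixed-size chunks so memory stays bounded. sqlite3 stands in for the real source and target drivers, and the table and column names are hypothetical.

```python
"""Batched row copy from a source connection to a target connection."""
import sqlite3

BATCH = 1000  # rows per round trip; tune to row size and network latency

def migrate(src, tgt, table: str) -> int:
    cur = src.execute(f"SELECT id, payload FROM {table}")
    moved = 0
    while True:
        rows = cur.fetchmany(BATCH)
        if not rows:
            break
        tgt.executemany(f"INSERT INTO {table} (id, payload) VALUES (?, ?)", rows)
        moved += len(rows)
    tgt.commit()
    return moved

if __name__ == "__main__":
    # In-memory demo; real runs would connect to the legacy and cloud stores.
    src = sqlite3.connect(":memory:")
    src.executescript(
        "CREATE TABLE events(id INTEGER, payload TEXT);"
        "INSERT INTO events VALUES (1,'a'),(2,'b'),(3,'c');"
    )
    tgt = sqlite3.connect(":memory:")
    tgt.execute("CREATE TABLE events(id INTEGER, payload TEXT)")
    print(migrate(src, tgt, "events"), "rows moved")
```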

Posted 2 weeks ago

Apply

8.0 - 12.0 years

30 - 35 Lacs

Chennai

Work from Office

Technical Skills:
- Experience building data transformation pipelines using DBT and SSIS
- Moderate programming experience with Python
- Moderate experience with AWS Glue
- Strong experience with SQL, with the ability to write efficient code and manage it through Git repositories

Nice-to-have Skills:
- Experience working with SSIS
- Experience in the wealth management industry
- Experience with agile development methodologies

Posted 2 weeks ago

Apply

2.0 - 3.0 years

5 - 9 Lacs

Kochi, Coimbatore, Thiruvananthapuram

Work from Office

Location: Kochi, Coimbatore, Trivandrum
Must-have skills: Python/Scala, PySpark/PyTorch
Good-to-have skills: Redshift

Job Summary: You'll capture user requirements and translate them into business and digitally enabled solutions across a range of industries.

Roles and Responsibilities:
- Designing, developing, optimizing, and maintaining data pipelines that adhere to ETL principles and business goals
- Solving complex data problems to deliver insights that help the business achieve its goals
- Sourcing data (structured and unstructured) from various touchpoints, and formatting and organizing it into an analyzable format (see the sketch below)
- Creating data products for analytics team members to improve productivity
- Calling AI services such as vision and translation to generate outcomes that can be used in further steps along the pipeline
- Fostering a culture of sharing, re-use, design, and operational efficiency of data and analytical solutions
- Preparing data to create a unified database and building tracking solutions that ensure data quality
- Creating production-grade analytical assets deployed using the guiding principles of CI/CD

Professional and Technical Skills:
- Expert in at least two of Python, Scala, PySpark, PyTorch, and JavaScript
- Extensive experience in data analysis in big data (Apache Spark) environments, data libraries (e.g., Pandas, SciPy, TensorFlow, Keras), and SQL, with 2-3 years of hands-on experience with these technologies
- Experience with one of the many BI tools, such as Tableau, Power BI, or Looker
- Good working knowledge of key concepts in data analytics, such as dimensional modeling, ETL, reporting/dashboarding, data governance, dealing with structured and unstructured data, and corresponding infrastructure needs
- Extensive experience with Microsoft Azure (ADF, Function Apps, ADLS, Azure SQL), AWS (Lambda, Glue, S3), Databricks analytical platforms/tools, and Snowflake Cloud Data Warehouse

Additional Information:
- Experience working in cloud data warehouses like Redshift or Synapse
- Certification in one of the following or equivalent: AWS Certified Data Analytics - Specialty; Microsoft Certified Azure Data Scientist Associate; SnowPro Core - Data Engineer; Databricks Data Engineering

Qualification: 3.5 - 5 years of experience is required.
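
A tiny pandas sketch of the "format and organize into an analyzable format" step described above: two source extracts with mismatched column names are normalised into one unified frame. The frames and column names are illustrative.

```python
"""Normalise two source extracts into a single deduplicated frame."""
import pandas as pd

crm = pd.DataFrame({"CustID": [1, 2], "FullName": ["Ann B", "Raj K"]})
web = pd.DataFrame({"customer_id": [2, 3], "name": ["Raj K", "Mei L"]})

# Align column names across sources before combining.
crm = crm.rename(columns={"CustID": "customer_id", "FullName": "name"})
unified = (
    pd.concat([crm, web], ignore_index=True)
    .drop_duplicates(subset="customer_id")
    .sort_values("customer_id")
)
print(unified)
```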

Posted 2 weeks ago

Apply

2.0 - 7.0 years

0 - 1 Lacs

Pune, Chennai, Bengaluru

Hybrid

Hello connections, exciting opportunity alert! We're on the hunt for passionate individuals to join our dynamic team as Data Engineers.

Job Profile: Data Engineer
Experience: Minimum 5 to maximum 8 years
Location: Chennai / Bangalore / Pune / Mumbai / Hyderabad
Mandatory Skills: Big Data | Hadoop | Scala | Spark | Spark SQL | Hive
Qualification: B.Tech / B.E / MCA, computer science background (any specialization)

How to apply: Send your CV to sipriyar@sightspectrum.in
Contact number: 6383476138

Don't miss out on this amazing opportunity to accelerate your professional career!

#bigdata #dataengineer #hadoop #spark #python #hive #pyspark

Posted 2 weeks ago

Apply

4.0 - 6.0 years

6 - 10 Lacs

Gurugram

Work from Office

Role Description: As a Senior Cloud Data Platform (AWS) Specialist at Incedo, you will be responsible for designing, deploying, and maintaining cloud-based data platforms on AWS. You will work with data engineers, data scientists, and business analysts to understand business requirements and design scalable, reliable, and cost-effective solutions that meet those requirements.

Roles & Responsibilities:
- Designing, developing, and deploying cloud-based data platforms using Amazon Web Services (AWS)
- Integrating and processing large amounts of structured and unstructured data from various sources
- Implementing and optimizing ETL processes and data pipelines
- Developing and maintaining security and access controls
- Collaborating with other teams to ensure the consistency and integrity of data
- Troubleshooting and resolving data platform issues

Technical Skills Requirements:
- In-depth knowledge of AWS services and tools such as AWS Glue, Amazon Redshift, and AWS Lambda
- Experience building scalable and reliable data pipelines using AWS services, Apache Spark, and related big data technologies
- Familiarity with cloud-based infrastructure and deployment, specifically on AWS
- Strong knowledge of programming languages such as Python, Java, and SQL
- Excellent communication skills, with the ability to convey complex technical information to non-technical stakeholders clearly and concisely
- Understanding of and alignment with the company's long-term vision
- Ability to provide leadership, guidance, and support to team members, ensuring the successful completion of tasks and promoting a positive work environment that fosters collaboration and productivity, taking responsibility for the whole team

Qualifications:
- 4-6 years of work experience in a relevant field
- B.Tech/B.E/M.Tech or MCA degree from a reputed university; a computer science background is preferred

Posted 2 weeks ago

Apply

4.0 - 7.0 years

7 - 17 Lacs

Hyderabad

Work from Office

Big Data and Hadoop Developer
Location: Hyderabad
Experience: 4-7 years
Skills: Hive/Oracle/MySQL; data architecture; modelling (conceptual/logical/design/ER model)

Responsibilities / Expectations from the Role:
- Development, support, and maintenance of the infrastructure platform and application lifecycle
- Design, development, and implementation of automation innovations
- Development of automated testing scripts (see the sketch below)
- Contribution to all phases of the application lifecycle: requirements, development, testing, implementation, and support
- Responding and providing guidance to customers of the Big Data platform
- Defining and implementing integration points with existing technology systems
- Researching and remaining current on big data technology and industry trends and innovations
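
A sketch of what the "automated testing scripts" deliverable can look like: pytest checks that a transformed table preserves row counts and has no null keys, using an in-memory SQLite stand-in for the real Hive/Oracle/MySQL store. Table and column names are hypothetical.

```python
"""pytest data-quality checks against a source/target table pair."""
import sqlite3

import pytest

@pytest.fixture()
def conn():
    c = sqlite3.connect(":memory:")
    c.executescript("""
        CREATE TABLE src(id INTEGER, v TEXT);
        INSERT INTO src VALUES (1,'a'),(2,'b');
        CREATE TABLE tgt AS SELECT id, UPPER(v) AS v FROM src;
    """)
    yield c
    c.close()

def test_row_counts_match(conn):
    (n_src,) = conn.execute("SELECT COUNT(*) FROM src").fetchone()
    (n_tgt,) = conn.execute("SELECT COUNT(*) FROM tgt").fetchone()
    assert n_src == n_tgt  # transform must not drop or duplicate rows

def test_no_null_keys(conn):
    (n_null,) = conn.execute(
        "SELECT COUNT(*) FROM tgt WHERE id IS NULL"
    ).fetchone()
    assert n_null == 0  # primary key must survive the transform intact
```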

Posted 2 weeks ago

Apply

12.0 - 15.0 years

70 - 150 Lacs

Bengaluru

Work from Office

Staff Data Engineer
Experience: 12 - 15 years | Salary: Competitive | Preferred Notice Period: Within 30 days | Shift: 10:00 AM to 7:00 PM IST | Opportunity Type: Onsite (Bengaluru) | Placement Type: Permanent
(*Note: This is a requirement for one of Uplers' clients.)
Must-have skills: Airflow, Hadoop, Kafka, Python, Spark, SQL, ETL

onequince.com (one of Uplers' clients) is looking for a Staff Data Engineer who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, then we want to hear from you.

Responsibilities:
- Manage and grow a cross-functional data-engineering team responsible for both data products and platform
- Mentor a group of highly skilled junior and senior engineers by providing technical management, guidance, coaching, best practices, and principles
- Actively participate in the design, development, delivery, and maintenance of data products and platform
- Own OKR planning, resource planning, execution, and quality of the data products and tools delivered by the group
- Lead the group to build sophisticated products on a cutting-edge cloud-native technology stack, used by internal customers such as data analysts, ML engineers, data scientists, and management, as well as external stakeholders
- Work closely with business stakeholders such as the Product Management team and other engineering teams to drive the execution of multiple business strategies and technologies
- Act as the point of contact for TCO (total cost of ownership) for the Data Platform (ingestion, processing, extraction, and governance)
- Manage and drive production defects and stability improvements to resolution
- Ensure operational efficiency and actively participate in organizational initiatives with the objective of ensuring the highest customer value
- Tailor processes to help manage time-sensitive issues and bring them to appropriate closure

Must Have:
- First-hand exposure to managing large-scale data ingestion, processing, extraction, and governance processes
- Experience with Big Data technologies (e.g., Apache Hadoop, Spark, Hive, Presto)
- Experience with message queues (e.g., Apache Kafka, Kinesis, RabbitMQ)
- Experience with stream-processing technologies (e.g., Spark Streaming, Flink)
- Proficiency in at least one of the following programming languages: Python, Java, or Scala
- Experience building highly available, fault-tolerant REST services, preferably for data ingestion or serving
- Good understanding of traditional data-warehousing fundamentals
- Good exposure to SQL (T-SQL/PL-SQL/Spark SQL/HiveQL)
- Experience integrating data across multiple data sources
- Good understanding of distributed computing principles
- Strong analytical/quantitative skills and the ability to connect data with business outcomes

Good To Have:
- Experience with MPP data warehouses (e.g., Snowflake, Redshift)
- Experience with any NoSQL storage (e.g., Redis, DynamoDB, Memcached)

How to apply for this opportunity (easy 3-step process):
1. Click on Apply and register or log in on our portal
2. Upload your updated resume and complete the screening form
3. Increase your chances of being shortlisted and meet the client for the interview

About Our Client: Quince is an affordable luxury brand that sells high-quality fashion and home goods at radically low prices, direct from the factory floor. The company has pioneered a manufacturer-to-consumer (M2C) retail model in which factories produce inventory on a near just-in-time basis and ship their goods directly to consumers' doorsteps, cutting out financial and environmental waste. Quince is headquartered in San Francisco, CA, and partners with more than 50 top manufacturers around the world. Most recently, Quince completed a $77 million Series B upround raise, led by Wellington Management with participation from GGV Capital and continuing participation from Basis Set Ventures, Insight Partners, Lugard Road, and 8VC.

About Uplers: Uplers is the #1 hiring platform for SaaS companies, designed to help you hire top product and engineering talent quickly and efficiently. Our end-to-end AI-powered platform combines artificial intelligence with human expertise to connect you with the best engineering talent from India. With over 1M deeply vetted professionals, Uplers streamlines the hiring process, reducing lengthy screening times and ensuring you find the perfect fit. Companies like GitLab, Twilio, TripAdvisor, and Airbnb trust Uplers to scale their tech and digital teams effectively and cost-efficiently. Experience a simpler, faster, and more reliable hiring process with Uplers today.

Posted 2 weeks ago

Apply

12.0 - 15.0 years

70 - 150 Lacs

Bengaluru

Work from Office

Principal Data Engineer - Big Data & Cloud
Experience: 12 - 15 years | Salary: Competitive | Preferred Notice Period: Within 30 days | Shift: 10:00 AM to 7:00 PM IST | Opportunity Type: Onsite (Bengaluru) | Placement Type: Permanent
(*Note: This is a requirement for one of Uplers' clients.)
Must-have skills: Airflow, Hadoop, Kafka, Python, Spark, SQL, ETL

onequince.com (one of Uplers' clients) is looking for a Principal Data Engineer - Big Data & Cloud who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, then we want to hear from you.

Responsibilities:
- Manage and grow a cross-functional data-engineering team responsible for both data products and platform
- Mentor a group of highly skilled junior and senior engineers by providing technical management, guidance, coaching, best practices, and principles
- Actively participate in the design, development, delivery, and maintenance of data products and platform
- Own OKR planning, resource planning, execution, and quality of the data products and tools delivered by the group
- Lead the group to build sophisticated products on a cutting-edge cloud-native technology stack, used by internal customers such as data analysts, ML engineers, data scientists, and management, as well as external stakeholders
- Work closely with business stakeholders such as the Product Management team and other engineering teams to drive the execution of multiple business strategies and technologies
- Act as the point of contact for TCO (total cost of ownership) for the Data Platform (ingestion, processing, extraction, and governance)
- Manage and drive production defects and stability improvements to resolution
- Ensure operational efficiency and actively participate in organizational initiatives with the objective of ensuring the highest customer value
- Tailor processes to help manage time-sensitive issues and bring them to appropriate closure

Must Have:
- First-hand exposure to managing large-scale data ingestion, processing, extraction, and governance processes
- Experience with Big Data technologies (e.g., Apache Hadoop, Spark, Hive, Presto)
- Experience with message queues (e.g., Apache Kafka, Kinesis, RabbitMQ)
- Experience with stream-processing technologies (e.g., Spark Streaming, Flink)
- Proficiency in at least one of the following programming languages: Python, Java, or Scala
- Experience building highly available, fault-tolerant REST services, preferably for data ingestion or serving
- Good understanding of traditional data-warehousing fundamentals
- Good exposure to SQL (T-SQL/PL-SQL/Spark SQL/HiveQL)
- Experience integrating data across multiple data sources
- Good understanding of distributed computing principles
- Strong analytical/quantitative skills and the ability to connect data with business outcomes

Good To Have:
- Experience with MPP data warehouses (e.g., Snowflake, Redshift)
- Experience with any NoSQL storage (e.g., Redis, DynamoDB, Memcached)

How to apply for this opportunity (easy 3-step process):
1. Click on Apply and register or log in on our portal
2. Upload your updated resume and complete the screening form
3. Increase your chances of being shortlisted and meet the client for the interview

About Our Client: Quince is an affordable luxury brand that sells high-quality fashion and home goods at radically low prices, direct from the factory floor. The company has pioneered a manufacturer-to-consumer (M2C) retail model in which factories produce inventory on a near just-in-time basis and ship their goods directly to consumers' doorsteps, cutting out financial and environmental waste. Quince is headquartered in San Francisco, CA, and partners with more than 50 top manufacturers around the world. Most recently, Quince completed a $77 million Series B upround raise, led by Wellington Management with participation from GGV Capital and continuing participation from Basis Set Ventures, Insight Partners, Lugard Road, and 8VC.

About Uplers: Uplers is the #1 hiring platform for SaaS companies, designed to help you hire top product and engineering talent quickly and efficiently. Our end-to-end AI-powered platform combines artificial intelligence with human expertise to connect you with the best engineering talent from India. With over 1M deeply vetted professionals, Uplers streamlines the hiring process, reducing lengthy screening times and ensuring you find the perfect fit. Companies like GitLab, Twilio, TripAdvisor, and Airbnb trust Uplers to scale their tech and digital teams effectively and cost-efficiently. Experience a simpler, faster, and more reliable hiring process with Uplers today.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

As a Business Analyst (Data Steward) at infoAnalytica Consulting, you will play a crucial role in managing data as a corporate asset and contributing to the data services business unit that serves global customers. Your responsibilities will include identifying and leveraging new, existing, and legacy data assets to increase efficiency and re-use, overseeing data strategy, data warehousing, and database management, and ensuring data quality and value creation for customers.

You will be expected to act as a thought leader in defining data acquisition strategies and roadmaps, collaborate with tech teams to organize data access and knowledge repositories, and work closely with internal and external stakeholders to address data needs. Additionally, you will be responsible for building effective MIS systems for data optimization and mentoring key personnel within the data department to drive growth and efficiency.

To excel in this role, you must possess exceptional communication skills to understand and convey customer requirements effectively, have at least 3 years of experience in data-heavy roles or demand generation, and demonstrate strong technology insight to identify and leverage the right tools for data projects. An understanding of self-service data preparation mechanisms and strong interpersonal skills are also essential for success in this collaborative and strategic position.

Overall, as a Business Analyst (Data Steward) at infoAnalytica, you will have the opportunity to contribute to a dynamic and inclusive work environment that values ownership, innovation, transparency, and a results-driven mindset. Join us in our mission to deliver exceptional data services and make a meaningful impact in the world of marketing research and demand generation.

Posted 2 weeks ago

Apply

10.0 - 14.0 years

0 Lacs

Pune, Maharashtra

On-site

As a member of the Finance, Markets and Credit Risk Technology team at Citi, you will play a crucial role in enabling the bank to achieve its day-to-day and long-term growth goals. Your responsibilities will include managing technology standards across functional areas, setting goals, contributing to IT project leadership, and ensuring the team possesses the required skill sets. You will be part of a strategic team developing, enhancing, supporting, and maintaining solutions for Finance Technology. Additionally, you will participate in technical discussions and brainstorming sessions, and gain exposure to wholesale and retail businesses across data, risk, and finance.

In this role, you will work across diverse finance platforms and have the opportunity to be part of re-architecture and re-platforming initiatives on low-code automation platforms. You will also be responsible for managing the execution of the IT strategy and roadmap for the assigned technology area, training and coaching team members, and making evaluative judgments based on analysis of information in complicated situations. Furthermore, you will negotiate with senior leaders across functions and communicate effectively with external parties.

To qualify for this position, you should have 10-12 years of relevant experience in an apps development or systems analysis role, extensive experience in system analysis and programming of software applications, and proven engineering experience building robust, scalable, and maintainable applications in the capital markets technology industry. You should also have 5+ years of experience in a technical leadership role, including leading global technology teams.

In terms of technical skills, you should have 10+ years of relevant experience in Java or a JVM-based language such as Kotlin, microservices, and strong experience in API development and its ecosystems. Hands-on experience with Java, Spring, Spring Cloud, Spring Data JPA, Spring Boot microservices, JUnit, Git, Jenkins, and Maven, along with troubleshooting skills, is required. Proficiency with SQL databases, MongoDB, Oracle, and Big Data, and an understanding of middleware such as TIBCO RV, EMS, and Solace, will be beneficial for this role. A Bachelor's degree or equivalent experience is required, with a Master's degree preferred.

If you are an individual with a disability and require a reasonable accommodation to use the search tools or apply for a career opportunity, please review Accessibility at Citi.

Posted 2 weeks ago

Apply