546 HBase Jobs - Page 3

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

4.0 - 12.0 years

0 Lacs

Karnataka

On-site

As a Big Data Lead with 7-12 years of experience, you will be responsible for software development using multiple computing languages. Your role will involve working on distributed data processing systems and applications, specifically in Business Intelligence/Data Warehouse (BIDW) programs. You should have previous experience taking development through testing, preferably on the J2EE stack. Knowledge of best practices and concepts in Data Warehouse applications will be crucial to your success in this role. You should possess a strong foundation in distributed systems and computing systems, with hands-on engineering skills. Hands-on experience with technologies such as Spark, Scala, Kafka, Hadoop, HBase, Pig, and Hive is required, and an understanding of NoSQL data stores, data modeling, and data management is essential. Good interpersonal skills, along with excellent oral and written communication and analytical skills, are necessary for effective collaboration within the team. Experience with Data Lake implementation as an alternative to a Data Warehouse is preferred. You should have hands-on experience with DataFrames using Spark SQL (a minimal sketch follows this listing) and proficiency in SQL. A minimum of two end-to-end implementations of either a Data Warehouse or a Data Lake is required for this role.
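The DataFrame-with-Spark-SQL requirement lends itself to a short illustration. A minimal, hedged sketch in PySpark — the session setup, paths, and column names (orders, order_ts, region, amount) are hypothetical, not taken from the listing:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Local session for illustration; a real BIDW job would run on a cluster.
spark = SparkSession.builder.appName("bidw-example").getOrCreate()

# Hypothetical input: a Parquet dataset of orders landed in the lake.
orders = spark.read.parquet("/data/lake/orders")

# Typical DataFrame transformation: daily revenue per region.
daily_revenue = (
    orders
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("region", "order_date")
    .agg(F.sum("amount").alias("revenue"))
)

# The same logic expressed in Spark SQL against a temp view.
orders.createOrReplaceTempView("orders")
daily_revenue_sql = spark.sql("""
    SELECT region, to_date(order_ts) AS order_date, SUM(amount) AS revenue
    FROM orders
    GROUP BY region, to_date(order_ts)
""")

daily_revenue.write.mode("overwrite").parquet("/data/warehouse/daily_revenue")
```

Both forms compile to the same plan; teams usually standardize on one per codebase.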

Posted 1 week ago

Apply

7.0 - 9.0 years

8 - 14 Lacs

Agra

Work from Office

Job Summary: We are looking for a seasoned Tech Anchor with deep expertise in Big Data technologies and Python to lead technical design, development, and mentoring across data-driven projects. This role demands a strong grasp of scalable data architecture, problem-solving capabilities, and hands-on experience with distributed systems and modern data frameworks.

Key Responsibilities:
- Provide technical leadership across Big Data and Python-based projects
- Architect, design, and implement scalable data pipelines and processing systems
- Guide teams on best practices in data modeling, ETL/ELT development, and performance optimization
- Collaborate with data scientists, analysts, and stakeholders to ensure effective data solutions
- Conduct code reviews and mentor junior engineers to improve code quality and skills
- Evaluate and implement new tools and frameworks to enhance data capabilities
- Troubleshoot complex data-related issues and support production deployments
- Ensure compliance with data security and governance standards

Posted 1 week ago

Apply

7.0 - 9.0 years

8 - 14 Lacs

Vadodara

Work from Office

Job Summary: We are looking for a seasoned Tech Anchor with deep expertise in Big Data technologies and Python to lead technical design, development, and mentoring across data-driven projects. This role demands a strong grasp of scalable data architecture, problem-solving capabilities, and hands-on experience with distributed systems and modern data frameworks.

Key Responsibilities:
- Provide technical leadership across Big Data and Python-based projects
- Architect, design, and implement scalable data pipelines and processing systems
- Guide teams on best practices in data modeling, ETL/ELT development, and performance optimization
- Collaborate with data scientists, analysts, and stakeholders to ensure effective data solutions
- Conduct code reviews and mentor junior engineers to improve code quality and skills
- Evaluate and implement new tools and frameworks to enhance data capabilities
- Troubleshoot complex data-related issues and support production deployments
- Ensure compliance with data security and governance standards

Posted 1 week ago

Apply

7.0 - 9.0 years

8 - 14 Lacs

Faridabad

Work from Office

Job Summary: We are looking for a seasoned Tech Anchor with deep expertise in Big Data technologies and Python to lead technical design, development, and mentoring across data-driven projects. This role demands a strong grasp of scalable data architecture, problem-solving capabilities, and hands-on experience with distributed systems and modern data frameworks.

Key Responsibilities:
- Provide technical leadership across Big Data and Python-based projects
- Architect, design, and implement scalable data pipelines and processing systems
- Guide teams on best practices in data modeling, ETL/ELT development, and performance optimization
- Collaborate with data scientists, analysts, and stakeholders to ensure effective data solutions
- Conduct code reviews and mentor junior engineers to improve code quality and skills
- Evaluate and implement new tools and frameworks to enhance data capabilities
- Troubleshoot complex data-related issues and support production deployments
- Ensure compliance with data security and governance standards

Posted 1 week ago

Apply

7.0 - 9.0 years

8 - 14 Lacs

Jaipur

Work from Office

Job Summary: We are looking for a seasoned Tech Anchor with deep expertise in Big Data technologies and Python to lead technical design, development, and mentoring across data-driven projects. This role demands a strong grasp of scalable data architecture, problem-solving capabilities, and hands-on experience with distributed systems and modern data frameworks.

Key Responsibilities:
- Provide technical leadership across Big Data and Python-based projects
- Architect, design, and implement scalable data pipelines and processing systems
- Guide teams on best practices in data modeling, ETL/ELT development, and performance optimization
- Collaborate with data scientists, analysts, and stakeholders to ensure effective data solutions
- Conduct code reviews and mentor junior engineers to improve code quality and skills
- Evaluate and implement new tools and frameworks to enhance data capabilities
- Troubleshoot complex data-related issues and support production deployments
- Ensure compliance with data security and governance standards

Posted 1 week ago

Apply

7.0 - 9.0 years

8 - 14 Lacs

Nagpur

Work from Office

Job Summary: We are looking for a seasoned Tech Anchor with deep expertise in Big Data technologies and Python to lead technical design, development, and mentoring across data-driven projects. This role demands a strong grasp of scalable data architecture, problem-solving capabilities, and hands-on experience with distributed systems and modern data frameworks.

Key Responsibilities:
- Provide technical leadership across Big Data and Python-based projects
- Architect, design, and implement scalable data pipelines and processing systems
- Guide teams on best practices in data modeling, ETL/ELT development, and performance optimization
- Collaborate with data scientists, analysts, and stakeholders to ensure effective data solutions
- Conduct code reviews and mentor junior engineers to improve code quality and skills
- Evaluate and implement new tools and frameworks to enhance data capabilities
- Troubleshoot complex data-related issues and support production deployments
- Ensure compliance with data security and governance standards

Posted 1 week ago

Apply

8.0 - 13.0 years

20 - 25 Lacs

Ahmedabad

Work from Office

Position Overview: This role is responsible for defining and delivering ZURU's next-generation data architecture, built for global scalability, real-time analytics, and AI enablement. You will lead the unification of fragmented data systems into a cohesive, cloud-native platform that supports advanced business intelligence and decision-making. Sitting at the intersection of data strategy, engineering, and commercial enablement, this role demands both deep technical acumen and strong cross-functional influence. You will drive the vision and implementation of robust data infrastructure, champion governance standards, and embed a culture of data excellence across the organisation.

Position Impact: In the first six months, the Head of Data Architecture will gain a deep understanding of ZURU's operating model, technology stack, and data fragmentation challenges. You'll conduct a comprehensive review of the current architecture, identifying performance gaps, security concerns, and integration challenges across systems like SAP, Odoo, POS, and marketing platforms. By month twelve, you'll have delivered a fully aligned architecture roadmap, implementing cloud-native infrastructure, data governance standards, and scalable models and pipelines to support AI and analytics. You will have stood up a Centre of Excellence for Data, formalised global data team structures, and established yourself as a trusted partner to senior leadership.

What are you going to do:
- Lead Global Data Architecture: Own the design, evolution, and delivery of ZURU's enterprise data architecture across cloud and hybrid environments.
- Consolidate Core Systems: Unify data sources across SAP, Odoo, POS, IoT, and media into a single analytical platform optimised for business value.
- Build Scalable Infrastructure: Architect cloud-native solutions that support both batch and streaming data workflows using tools like Databricks, Kafka, and Snowflake.
- Implement Governance Frameworks: Define and enforce enterprise-wide data standards for access control, privacy, quality, security, and lineage.
- Enable Metadata Cataloguing: Deploy metadata management and cataloguing tools to enhance data discoverability and self-service analytics.
- Operationalise AI/ML Pipelines: Lead data architecture that supports AI/ML initiatives, including demand forecasting, pricing models, and personalisation.
- Partner Across Functions: Translate business needs into data architecture solutions by collaborating with leaders in Marketing, Finance, Supply Chain, R&D, and Technology.
- Optimise Cloud Cost & Performance: Roll out compute and storage systems that balance cost efficiency, performance, and observability across platforms.
- Establish Data Leadership: Build and mentor a high-performing data team across India and NZ, and drive alignment across engineering, analytics, and governance.
- Vendor and Tool Strategy: Evaluate external tools and partners to ensure the data ecosystem is future-ready, scalable, and cost-effective.

What are we looking for:
- 8+ years of experience in data architecture, with 3+ years in a senior or leadership role across cloud or hybrid environments
- Proven ability to design and scale large data platforms supporting analytics, real-time reporting, and AI/ML use cases
- Hands-on expertise with ingestion, transformation, and orchestration pipelines (e.g. Kafka, Airflow, DBT, Fivetran; see the orchestration sketch after this listing)
- Strong knowledge of ERP data models, especially SAP and Odoo
- Experience with data governance, compliance (GDPR/CCPA), metadata cataloguing, and security practices
- Familiarity with distributed systems and streaming frameworks like Spark or Flink
- Strong stakeholder management and communication skills, with the ability to influence both technical and business teams
- Experience building and leading cross-regional data teams

Tools & Technologies:
- Cloud Platforms: AWS (S3, EMR, Kinesis, Glue), Azure (Synapse, ADLS), GCP
- Big Data: Hadoop, Apache Spark, Apache Flink
- Streaming: Kafka, Kinesis, Pub/Sub
- Orchestration: Airflow, Prefect, Dagster, DBT
- Warehousing: Snowflake, Redshift, BigQuery, Databricks Delta
- NoSQL: Cassandra, DynamoDB, HBase, Redis
- Query Engines: Presto/Trino, Athena
- IaC & CI/CD: Terraform, GitLab Actions
- Monitoring: Prometheus, Grafana, ELK, OpenTelemetry
- Security/Governance: IAM, TLS, KMS, Amundsen, DataHub, Collibra, DBT for lineage

What do we offer:
- Competitive compensation
- 5 working days with flexible working hours
- Medical insurance for self & family
- Training & skill development programs
- Work with the global team and make the most of the diverse knowledge
- Several discussions over multiple pizza parties
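The orchestration requirement (Kafka, Airflow, DBT, Fivetran) can be illustrated with a minimal Airflow DAG. This is a sketch only, assuming Airflow 2.x; the DAG id, task names, and schedule are invented for illustration, not ZURU specifics:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Pull raw rows from a source system (e.g. an ERP export); stubbed here.
    print("extracting...")


def transform():
    print("transforming...")


def load():
    print("loading to warehouse...")


with DAG(
    dag_id="erp_to_warehouse",            # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",           # newer Airflow also accepts schedule=
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3  # linear dependency chain: extract, then transform, then load
```

In practice each callable would be replaced by a real ingestion or DBT-trigger task; the DAG structure is what the scheduler executes.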

Posted 1 week ago

Apply

6.0 - 9.0 years

5 - 10 Lacs

Noida

Work from Office

Relevant experience and skills:

Must haves:
- At least 6-9 years of work experience in US and overseas payroll
- Understanding of customer invoicing and timesheet management
- Quick learner with good presentation skills
- Strong sense of urgency and results-orientation
- MS Office: advanced Excel and good PowerPoint
- Acquainted with different client portals such as Wand, Fieldglass, Beeline, Coupa, and Ariba

Good to have:
- Background in the IT staffing business
- ERP working knowledge
- QuickBooks

Posted 1 week ago

Apply

5.0 - 9.0 years

12 - 17 Lacs

Noida

Work from Office

Spark/PySpark: hands-on technical experience in data processing. Table design knowledge using Hive, similar to RDBMS knowledge. SQL knowledge for data retrieval and transformation queries such as joins (full, left, right), ranking, and GROUP BY (a short sketch follows this listing). Good communication skills. Additional skills in GitHub, Jenkins, and shell scripting would be an added advantage.

Mandatory Competencies:
- Big Data - PySpark
- Big Data - Spark
- Big Data - Hadoop
- Big Data - Hive
- DevOps/Configuration Mgmt - Jenkins
- Behavioural - Communication and collaboration
- Database - Database Programming - SQL
- DevOps/Configuration Mgmt - GitLab, GitHub, Bitbucket
- DevOps/Configuration Mgmt - Basic Bash/Shell script writing
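The join/ranking/GROUP BY requirement maps directly to a Hive-style query. A minimal sketch run through PySpark with Hive support; the table and column names (transactions, customers, region, amount) are hypothetical:

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("hive-query-example")
    .enableHiveSupport()   # lets spark.sql() resolve Hive catalog tables
    .getOrCreate()
)

# Join + aggregate + window ranking in one statement:
# - LEFT JOIN keeps every transaction even without a matching customer row
# - GROUP BY rolls amounts up per region/customer
# - RANK() orders customers within each region by their aggregated total
result = spark.sql("""
    SELECT t.region,
           t.customer_id,
           SUM(t.amount) AS total_amount,
           RANK() OVER (PARTITION BY t.region
                        ORDER BY SUM(t.amount) DESC) AS rnk
    FROM   transactions t
    LEFT JOIN customers c
           ON t.customer_id = c.customer_id
    GROUP BY t.region, t.customer_id
""")
result.show()
```

Swapping LEFT JOIN for FULL OUTER JOIN or RIGHT JOIN covers the other join types the listing names.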

Posted 1 week ago

Apply

5.0 - 10.0 years

8 - 13 Lacs

Bengaluru

Work from Office

Lead Software Engineer - Backend

We're seeking a Lead Software Engineer to join one of our Data Layer teams. As the name implies, the Data Layer is at the core of all things data at Zeta. Our responsibilities include: developing and maintaining the Zeta Identity Graph platform, which collects billions of behavioural, demographic, environmental, and transactional signals to power people-based marketing; ingesting vast amounts of identity and event data from our customers and partners; facilitating data transfers across systems; ensuring the integrity and health of our datasets; and much more. As a member of this team, you will be responsible for designing and expanding our existing data infrastructure, enabling easy access to data, supporting complex data analyses, and automating optimization workflows for business and marketing operations.

Essential Responsibilities:
- Building, refining, tuning, and maintaining our real-time and batch data infrastructure
- Daily use of technologies such as Python, Spark, Airflow, Snowflake, Hive, Scylla, Django, FastAPI, etc.
- Maintaining data quality and accuracy across production data systems (see the data-quality sketch after this listing)
- Working with Data Engineers to optimize data models and workflows
- Working with Data Analysts to develop ETL processes for analysis and reporting
- Working with Product Managers to design and build data products
- Working with our DevOps team to scale and optimize our data infrastructure
- Participating in architecture discussions, influencing the roadmap, and taking ownership of new projects
- Participating in the on-call rotation in their respective time zones (be available by phone or email in case something goes wrong)

Desired Characteristics:
- Minimum 5 years of software engineering experience
- Proven long-term experience with, and enthusiasm for, distributed data processing at scale; eagerness to learn new things
- Expertise in designing and architecting distributed, low-latency, scalable solutions in either cloud or on-premises environments
- Exposure to the whole software development lifecycle, from inception to production and monitoring
- Fluency in Python, or solid experience in Scala or Java
- Proficiency with relational databases and advanced SQL
- Expertise with services like Spark and Hive
- Experience with web frameworks such as Flask and Django
- Experience with a scheduler such as Apache Airflow, Apache Luigi, or Chronos
- Experience using cloud services (AWS) at scale
- Experience with agile software development processes
- Excellent interpersonal and communication skills

Nice to have:
- Experience with large-scale / multi-tenant distributed systems
- Experience with columnar / NoSQL databases: Vertica, Snowflake, HBase, Scylla, Couchbase
- Experience with real-time streaming frameworks: Flink, Storm
- Experience with open table formats such as Iceberg, Hudi, or Delta Lake
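The data-quality responsibility can be illustrated with a small batch check. A hedged PySpark sketch; the dataset path, column names (user_id, event_id), and thresholds are assumptions for illustration, not Zeta specifics:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-check").getOrCreate()

# Hypothetical identity-event feed for one batch window.
events = spark.read.parquet("/data/identity/events")

total = events.count()
null_ids = events.filter(F.col("user_id").isNull()).count()
dupes = total - events.dropDuplicates(["event_id"]).count()

# Fail the run loudly instead of letting bad data flow downstream.
if total == 0:
    raise ValueError("no rows ingested for this batch")
if null_ids / total > 0.01:
    raise ValueError(f"null user_id rate too high: {null_ids}/{total}")
if dupes > 0:
    print(f"warning: {dupes} duplicate event_ids found")
```

In production such checks usually run as a dedicated task in the scheduler so a failure blocks dependent loads.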

Posted 1 week ago

Apply

4.0 - 8.0 years

8 - 12 Lacs

Pune

Work from Office

Piller Soft Technology is looking for a Lead Data Engineer to join our dynamic team and embark on a rewarding career journey.

- Designing and developing data pipelines: design and develop data pipelines that move data from various sources to storage and processing systems.
- Building and maintaining data infrastructure: build and maintain data infrastructure such as data warehouses, data lakes, and data marts.
- Ensuring data quality and integrity: set up data validation processes and implement data quality checks.
- Managing data storage and retrieval: design and implement data storage systems, such as NoSQL databases or Hadoop clusters.
- Developing and maintaining data models: develop and maintain data models, such as data dictionaries and entity-relationship diagrams, to ensure consistency in data architecture.
- Managing data security and privacy: implement security measures, such as access controls and encryption, to protect sensitive data.
- Leading and managing a team: lead and manage a team of data engineers, providing guidance and support for their work.

Posted 1 week ago

Apply

4.0 - 9.0 years

9 - 14 Lacs

Bengaluru

Work from Office

Minimum of 4+ years of software development experience, with demonstrated expertise in standard development best-practice methodologies.

Skills required:
- Spark, Scala, Python, HDFS, Hive, schedulers (Oozie, Airflow), Kafka
- Spark/Scala, SQL, RDBMS
- Docker, Kubernetes
- RabbitMQ/Kafka
- Monitoring tools: Splunk or ELK

Profile required:
- Integrate test frameworks into the development process
- Refactor existing solutions to make them reusable and scalable
- Work with operations to get solutions deployed
- Take ownership of production deployment of code
- Collaborate with and/or lead cross-functional teams; build and launch applications and data platforms at scale, either for revenue-generating or operational purposes
- Contribute coding and design best practices
- Thrive in a self-motivated, internal-innovation-driven environment
- Adapt quickly to new application knowledge and changes

Posted 1 week ago

Apply

4.0 - 7.0 years

13 - 18 Lacs

Bengaluru

Work from Office

Design, development, and testing of components/modules in TOP (Trade Open Platform) involving Spark, Java, Hive, and related big-data technologies in a datalake architecture. Contribute to the design, development, and deployment of new features and new components in the Azure public cloud. Contribute to the evolution of REST APIs in TOP: enhancement, development, and testing of new APIs. Ensure the processes in TOP provide optimal performance, and assist in performance tuning and optimization. Release & Deployment: deploy using CI/CD practices and tools in various environments (development, UAT, and production) and follow production processes. Ensure Craftsmanship practices are followed. Follow the Agile at Scale process in terms of participation in PI planning and follow-up, sprint planning, and backlog maintenance in Jira. Organize training sessions on the core platform and related technologies for the Tribe/Business line to ensure the platform evolution is continuously communicated to relevant stakeholders.

Posted 1 week ago

Apply

3.0 - 5.0 years

4 - 8 Lacs

Noida

Work from Office

Relevant experience and skills:

Must haves:
- At least 3-5 years of work experience in US and overseas payroll
- Understanding of customer invoicing and timesheet management
- Quick learner with good presentation skills
- Strong sense of urgency and results-orientation
- MS Office: advanced Excel and good PowerPoint
- Acquainted with different client portals such as Wand, Fieldglass, Beeline, Coupa, and Ariba

Good to have:
- Background in the IT staffing business
- ERP working knowledge
- QuickBooks

Posted 1 week ago

Apply

5.0 - 6.0 years

9 - 14 Lacs

Noida

Work from Office

Solid understanding of object-oriented programming and design patterns. 5 to 6 years of strong experience with big data. Comfortable working with large data volumes, with a firm understanding of logical data structures and analysis techniques. Experience in big data technologies such as HDFS, Hive, HBase, Apache Spark, PySpark, and Kafka (see the HBase sketch after this listing). Proficient in code versioning tools such as Git, Bitbucket, and Jira. Strong systems analysis, design, and architecture fundamentals, unit testing, and other SDLC activities. Experience in Linux shell scripting. Demonstrated analytical and problem-solving skills. Excellent troubleshooting and debugging skills. Strong communication and aptitude. Ability to write reliable, manageable, high-performance code. Good knowledge of database principles, practices, and structures, including SQL development experience, preferably with Oracle. Understanding of the fundamental design principles behind a scalable application. Basic Unix OS and scripting knowledge.

Good to have:
- Financial markets background (preferable, but not a must)
- Experience in Jenkins, Scala, Autosys
- Familiarity with build tools such as Maven and continuous integration
- Working knowledge of Docker, Kubernetes, OpenShift, or Mesos
- Basic experience with data preparation tools
- Experience with CI/CD build pipelines

Mandatory Competencies:
- Big Data - HDFS
- Big Data - Hive
- Big Data - Hadoop
- Big Data - PySpark
- Behavioural - Communication
- Data Science and Machine Learning - Apache Spark
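Since this page groups HBase roles, a minimal HBase read/write sketch may help. This uses the happybase Python client and assumes an HBase Thrift server is reachable; the host, table name, column family, and row-key scheme are all illustrative:

```python
import happybase  # Thrift-based client; requires `hbase thrift start` on the cluster

# Hypothetical Thrift endpoint; 9090 is the default Thrift port.
connection = happybase.Connection("hbase-thrift.example.com", port=9090)
table = connection.table("user_profiles")

# Writes address a row key plus column-family:qualifier pairs, all as bytes.
table.put(b"user#1001", {
    b"info:name": b"Asha",
    b"info:city": b"Noida",
})

# Point lookup by row key, then a prefix scan over part of the key space.
row = table.row(b"user#1001")
print(row.get(b"info:name"))

for key, data in table.scan(row_prefix=b"user#"):
    print(key, data)

connection.close()
```

Row-key design (here a `user#<id>` prefix) is the main schema decision in HBase, since scans are ordered by key.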

Posted 1 week ago

Apply

7.0 - 9.0 years

6 - 10 Lacs

Chennai

Work from Office

As a Technical Lead - Cloud Data Platform (AWS) at Incedo, you will be responsible for designing, deploying, and maintaining cloud-based data platforms on AWS. You will work with data engineers, data scientists, and business analysts to understand business requirements and design scalable, reliable, and cost-effective solutions that meet those requirements.

Roles & Responsibilities:
- Designing, developing, and deploying cloud-based data platforms using Amazon Web Services (AWS)
- Integrating and processing large amounts of structured and unstructured data from various sources
- Implementing and optimizing ETL processes and data pipelines (a Glue-based sketch follows this listing)
- Developing and maintaining security and access controls
- Collaborating with other teams to ensure the consistency and integrity of data
- Troubleshooting and resolving data platform issues

Technical Skills Requirements:
- In-depth knowledge of AWS services and tools such as AWS Glue, Amazon Redshift, and AWS Lambda
- Experience in building scalable and reliable data pipelines using AWS services, Apache Spark, and related big data technologies
- Familiarity with cloud-based infrastructure and deployment, specifically on AWS
- Strong knowledge of programming languages such as Python, Java, and SQL
- Excellent communication skills, including the ability to explain complex technical information to non-technical stakeholders clearly and concisely
- Alignment with the company's long-term vision; openness to new ideas and willingness to learn and develop new skills; ability to work well under pressure and manage multiple tasks and priorities

Qualifications:
- 7-9 years of work experience in a relevant field
- B.Tech/B.E/M.Tech or MCA degree from a reputed university; a computer science background is preferred
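The Glue/ETL requirement can be sketched as a standard AWS Glue PySpark job. This follows the usual Glue job scaffolding, but the catalog database, table, field mappings, and S3 path are placeholders, not Incedo specifics:

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job bootstrap: resolve arguments, wrap the Spark context.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glueContext = GlueContext(sc)
job = Job(glueContext)
job.init(args["JOB_NAME"], args)

# Read a source table registered in the Glue Data Catalog.
source = glueContext.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="orders"
)

# Keep and type the columns downstream consumers need.
mapped = ApplyMapping.apply(
    frame=source,
    mappings=[
        ("order_id", "string", "order_id", "string"),
        ("amount", "double", "amount", "double"),
    ],
)

# Land the curated output as Parquet in S3.
glueContext.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/orders/"},
    format="parquet",
)
job.commit()
```

The same pattern extends to Redshift targets by swapping the sink for a JDBC/Redshift connection.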

Posted 1 week ago

Apply

4.0 - 7.0 years

15 - 18 Lacs

Bengaluru

Work from Office

Role Title: Associate Solution Architect - Telecom BSS/OSS
Job Location: Bangalore
Work Mode: Work from office
Experience: 4-7 years

Why This Role Matters: You are the critical link between our customers' evolving needs and our cutting-edge solutions. You will speak both business and technology fluently, transforming complex requirements into smart, scalable solutions. If you are passionate about solving real-world problems, collaborating with high-performing teams, and making an immediate impact, this is the role for you.

What You Will Do:
- Be the go-to technical expert in customer conversations: understand their needs, sketch high-level designs, and define solution scope and technical direction.
- Lead solution workshops to translate customer ideas into clear, high-level technical blueprints.
- Deconstruct business features into precise technical requirements for our delivery teams.
- Collaborate closely with Project Managers, Consultants, and Engineers to ensure solutions are aligned, practical, and exceptional.
- Own and drive the solution journey from concept to delivery, ensuring clarity and confidence at every stage.
- Develop custom solutions when out-of-the-box features don't meet unique client needs.
- Troubleshoot, guide, and mentor teams through technical challenges.
- Ensure best practices are applied to deployments, meeting each customer's specific requirements.

What You Bring:
- 4-7 years of hands-on experience in Telecom BSS/OSS implementations or solutioning, with a proven track record on live customer projects.
- Strong domain knowledge and the ability to seamlessly connect technical details with business objectives.
- Clear, confident communication, with top-notch presentation and client engagement abilities.
- Comfort navigating ambiguity and mapping complex business processes to product capabilities.
- Deep understanding of software development life cycles and enterprise architecture.
- Solid grasp of: Unix and shell scripting; RDBMS (Oracle, PostgreSQL); big data technologies (e.g., Hadoop, HBase).
- Bonus: you enjoy whiteboarding ideas as much as refining them in detailed requirements documentation.

What You Will Love:
- Collaborate with a passionate, cross-functional team that truly values your insights.
- Work on high-impact telecom projects with a global client base.
- A role designed for growth, whether you are deepening your architectural expertise or stepping into leadership.

Posted 1 week ago

Apply

4.0 - 6.0 years

6 - 10 Lacs

Gurugram

Work from Office

Role Description: As a Senior Cloud Data Platform (AWS) Specialist at Incedo, you will be responsible for designing, deploying, and maintaining cloud-based data platforms on AWS. You will work with data engineers, data scientists, and business analysts to understand business requirements and design scalable, reliable, and cost-effective solutions that meet those requirements.

Roles & Responsibilities:
- Designing, developing, and deploying cloud-based data platforms using Amazon Web Services (AWS)
- Integrating and processing large amounts of structured and unstructured data from various sources
- Implementing and optimizing ETL processes and data pipelines
- Developing and maintaining security and access controls
- Collaborating with other teams to ensure the consistency and integrity of data
- Troubleshooting and resolving data platform issues

Technical Skills Requirements:
- In-depth knowledge of AWS services and tools such as AWS Glue, Amazon Redshift, and AWS Lambda
- Experience in building scalable and reliable data pipelines using AWS services, Apache Spark, and related big data technologies
- Familiarity with cloud-based infrastructure and deployment, specifically on AWS
- Strong knowledge of programming languages such as Python, Java, and SQL
- Excellent communication skills, including the ability to explain complex technical information to non-technical stakeholders clearly and concisely
- Alignment with the company's long-term vision
- Leadership: provide guidance and support to team members, ensure the successful completion of tasks, and promote a positive work environment that fosters collaboration and productivity, taking responsibility for the whole team

Qualifications:
- 4-6 years of work experience in a relevant field
- B.Tech/B.E/M.Tech or MCA degree from a reputed university; a computer science background is preferred

Posted 1 week ago

Apply

4.0 - 6.0 years

6 - 10 Lacs

Gurugram

Work from Office

Role Description: As a Senior Cloud Data Platform (AWS) Specialist at Incedo, you will be responsible for designing, deploying, and maintaining cloud-based data platforms on AWS. You will work with data engineers, data scientists, and business analysts to understand business requirements and design scalable, reliable, and cost-effective solutions that meet those requirements.

Roles & Responsibilities:
- Designing, developing, and deploying cloud-based data platforms using Amazon Web Services (AWS)
- Integrating and processing large amounts of structured and unstructured data from various sources
- Implementing and optimizing ETL processes and data pipelines
- Developing and maintaining security and access controls
- Collaborating with other teams to ensure the consistency and integrity of data
- Troubleshooting and resolving data platform issues

Technical Skills Requirements:
- In-depth knowledge of AWS services and tools such as AWS Glue, Amazon Redshift, and AWS Lambda
- Experience in building scalable and reliable data pipelines using AWS services, Apache Spark, and related big data technologies
- Familiarity with cloud-based infrastructure and deployment, specifically on AWS
- Strong knowledge of programming languages such as Python, Java, and SQL
- Excellent communication skills, including the ability to explain complex technical information to non-technical stakeholders clearly and concisely
- Alignment with the company's long-term vision
- Leadership: provide guidance and support to team members, ensure the successful completion of tasks, and promote a positive work environment that fosters collaboration and productivity, taking responsibility for the whole team

Qualifications:
- 4-6 years of work experience in a relevant field
- B.Tech/B.E/M.Tech or MCA degree from a reputed university; a computer science background is preferred

Posted 1 week ago

Apply

10.0 - 14.0 years

0 Lacs

Haryana

On-site

The role seeks a highly experienced Big Data Engineer with a strong background in Python, Java, or Scala. You will be responsible for designing, building, and maintaining scalable data pipelines and data lake architectures (a partitioned-write sketch follows this listing). The ideal candidate has a proven ability to manage and deliver complex data engineering projects. This position offers an excellent opportunity to work on scalable data solutions in a collaborative environment.

Key responsibilities include developing robust data engineering solutions and working with technologies such as Apache Spark, Hadoop, Hive, HBase, Kafka, Airflow, and Oozie. Collaboration with cross-functional teams is essential to ensure high data quality and governance. You will also be expected to leverage cloud platforms like AWS, Azure, or GCP for data processing and infrastructure, and to manage and optimize data warehouses, data lakes, and RDBMS such as PostgreSQL or SQL Server.

Required skills include 10+ years of experience in Information Technology, particularly in the Big Data ecosystem. Strong programming skills in Python, Java, or Scala are a must, along with deep knowledge and hands-on experience with Apache Spark, Hadoop, Hive, HBase, Kafka, Airflow, and Oozie. Experience with cloud environments (AWS, Azure, or GCP) is highly desirable, as is a good understanding of data modeling, data architecture, and data governance principles. Familiarity with version control (Git), Docker, and orchestration tools like Kubernetes is a plus.

Preferred qualifications include experience in real-time data processing using Kafka or Spark Streaming, exposure to NoSQL databases such as MongoDB or Cassandra, and certification in AWS Big Data or equivalent, which is a strong advantage. This position is full-time and open only to women candidates.
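The data-lake responsibility can be illustrated with a small batch write. A hedged PySpark sketch; the paths, the event_ts column, and the date-partitioning scheme are assumptions for illustration:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("lake-writer").getOrCreate()

# Hypothetical raw feed; in practice this might arrive via Kafka or Sqoop.
raw = spark.read.json("/landing/events/")

# Partitioning by date keeps scans cheap and makes retention/backfill simple.
(
    raw
    .withColumn("dt", F.to_date(F.col("event_ts")))
    .repartition("dt")            # co-locate rows so each date writes cleanly
    .write
    .mode("append")
    .partitionBy("dt")
    .parquet("/lake/events/")
)
```

Downstream readers then prune by the dt partition column instead of scanning the whole lake.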

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Maharashtra

On-site

Job Description: We are looking for a skilled PySpark Developer with 4-5 or 2-3 years of experience to join our team. As a PySpark Developer, you will be responsible for developing and maintaining data processing pipelines using PySpark, Apache Spark's Python API. You will work closely with data engineers, data scientists, and other stakeholders to design and implement scalable and efficient data processing solutions. A Bachelor's or Master's degree in Computer Science, Data Science, or a related field is required. The ideal candidate has strong expertise in the Big Data ecosystem, including Spark, Hive, Sqoop, HDFS, MapReduce, Oozie, YARN, HBase, and NiFi. The candidate should be below 35 years of age.

Responsibilities include designing, developing, and maintaining PySpark data processing pipelines that handle large volumes of structured and unstructured data, and collaborating with data engineers and data scientists to understand data requirements and design efficient data models and transformations. Optimizing and tuning PySpark jobs for performance, scalability, and reliability is a key responsibility, as is implementing data quality checks, error handling, and monitoring mechanisms to ensure data accuracy and pipeline robustness. The candidate should also develop and maintain documentation for PySpark code, data pipelines, and data workflows.

Experience in developing production-ready Spark applications using Spark RDD APIs, DataFrames, Datasets, Spark SQL, and Spark Streaming is required. Strong experience with Hive bucketing and partitioning, as well as writing complex Hive queries using analytical functions, is essential (a short sketch follows this listing). Knowledge of writing custom UDFs in Hive to support custom business requirements is a plus.

If you meet the above qualifications and are interested in this position, please email your resume, mentioning the position applied for in the subject line, to: careers@cdslindia.com.
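The Hive bucketing/partitioning and analytical-function requirements can be shown in one sketch. The DDL is standard HiveQL issued through a Hive-enabled Spark session; the sales table, columns, and bucket count are hypothetical, and note that writing to Hive-bucketed tables from Spark has version-specific caveats:

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("hive-layout-example")
    .enableHiveSupport()
    .getOrCreate()
)

# Partitioned + bucketed layout: partitions prune scans by date (ds), buckets
# speed up joins and sampling on customer_id.
spark.sql("""
    CREATE TABLE IF NOT EXISTS sales (
        order_id     STRING,
        customer_id  STRING,
        amount       DOUBLE
    )
    PARTITIONED BY (ds STRING)
    CLUSTERED BY (customer_id) INTO 32 BUCKETS
    STORED AS ORC
""")

# Analytical (window) function of the kind the listing mentions:
# top 3 orders by amount within each day.
top_per_day = spark.sql("""
    SELECT ds, customer_id, amount
    FROM (
        SELECT ds, customer_id, amount,
               ROW_NUMBER() OVER (PARTITION BY ds
                                  ORDER BY amount DESC) AS rn
        FROM sales
    ) ranked
    WHERE rn <= 3
""")
top_per_day.show()
```

A custom Hive UDF would be registered separately (Java/Scala jar plus `CREATE FUNCTION`); it is omitted here to keep the sketch self-contained.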

Posted 1 week ago

Apply

2.0 - 5.0 years

4 - 7 Lacs

Pune

Work from Office

We are seeking a highly skilled Analyst - Big Data Developer to join our dynamic team. The ideal candidate will have extensive experience with big data technologies and a strong background in developing and optimizing data integration frameworks and applications. You will be responsible for designing, implementing, and maintaining robust data solutions in a cloud environment.

Key Responsibilities:
- Data Solution Development: Design and implement batch and real-time big data integration frameworks and applications using technologies such as Hadoop, Apache Spark, and related tools (a streaming sketch follows this listing).
- Performance Optimization: Identify performance bottlenecks and apply best practices to optimize and fine-tune big data frameworks.
- Programming: Develop and maintain code in multiple programming languages, including Java, Scala, and Python; ensure code quality and adhere to best practices.
- Schema Design: Apply principles and best practices in schema design to various big data technologies, including Hadoop, YARN, Hive, Kafka, Oozie, and NoSQL databases like Cassandra and HBase.
- Cloud Integration: Work with cloud platforms, preferably GCP, to deploy and manage big data solutions.
- Linux Environment: Use system tools and scripting languages to work effectively in a Linux environment and integrate with various frameworks and tools.
- Collaboration: Collaborate with cross-functional teams to understand requirements and deliver solutions that meet business needs.
- Troubleshooting: Diagnose and resolve issues related to big data applications and frameworks; ensure data integrity and system reliability.
- Documentation: Maintain comprehensive documentation of development processes, configurations, and operational procedures.

Required Skills and Qualifications:
- Education: Bachelor's degree in Engineering, Computer Science, or a related field, or an equivalent qualification.
- Experience: Minimum of 2 to 5 years of experience in a recognized global IT services or consulting company, with hands-on expertise in big data technologies.
- Big Data Technologies: Over 2 years of experience with the Hadoop ecosystem, Apache Spark, and associated tools; experience with modern big data technologies and frameworks such as Spark, Impala, and Kafka.
- Programming: Proficiency in Java, Scala, and Python, with the ability to code in multiple languages.
- Cloud Platforms: Experience with cloud platforms, preferably GCP.
- Linux Environment: At least 2 years of experience working in a Linux environment, including system tools, scripting languages, and integration frameworks.
- Schema Design: Extensive experience applying schema design principles and best practices to big data technologies.
- Hadoop Distributions: Knowledge of Hadoop distributions such as EMR, Cloudera, or Hortonworks.

Preferred Skills:
- Experience with additional big data tools and technologies.
- Certification in relevant big data or cloud technologies.
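The batch plus real-time integration requirement can be sketched with Spark Structured Streaming reading from Kafka. Broker, topic, and output paths are placeholders, and the spark-sql-kafka connector package must be on the classpath:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-stream").getOrCreate()

# Continuous source: a Kafka topic of raw events (placeholder broker/topic).
stream = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "events")
    .load()
)

# Kafka delivers key/value as binary; cast to strings before processing.
parsed = stream.select(
    F.col("key").cast("string"),
    F.col("value").cast("string").alias("payload"),
    "timestamp",
)

# Continuous sink: append Parquet files, with a checkpoint for exactly-once
# bookkeeping across restarts.
query = (
    parsed.writeStream
    .format("parquet")
    .option("path", "/lake/streaming/events/")
    .option("checkpointLocation", "/lake/checkpoints/events/")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```

The batch half of the same framework is just `spark.read`/`spark.write` over the landed files, which is what makes the unified API attractive.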

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

The Applications Development Senior Programmer Analyst plays a crucial role in establishing and implementing new or revised application systems and programs in coordination with the Technology team. Your main objective will be to contribute to applications systems analysis and programming activities. Your responsibilities will include conducting tasks related to feasibility studies, time and cost estimates, IT planning, risk technology, applications development, model development, and establishing and implementing new or revised applications systems and programs to meet specific business needs or user areas. You will also be responsible for monitoring and controlling all phases of the development process, including analysis, design, construction, testing, and implementation, as well as providing user and operational support on applications to business users. Additionally, you will utilize your in-depth specialty knowledge of applications development to analyze complex problems/issues, evaluate business and system processes, and industry standards, and make evaluative judgments. It will be essential for you to recommend and develop security measures in post-implementation analysis of business usage to ensure successful system design and functionality. Furthermore, you will consult with users/clients and other technology groups on issues, recommend advanced programming solutions, and install and assist customer exposure systems. You will also ensure that essential procedures are followed, help define operating standards and processes, and serve as an advisor or coach to new or lower-level analysts. You should be able to operate with a limited level of direct supervision, exercise independence of judgment and autonomy, and act as a subject matter expert to senior stakeholders and/or other team members. As an Applications Development Senior Programmer Analyst, you will be expected to assess risk appropriately when making business decisions, with particular consideration for the firm's reputation and the protection of Citigroup, its clients, and assets. This includes driving compliance with applicable laws, rules, and regulations, adhering to policies, applying sound ethical judgment regarding personal behavior, conduct, and business practices, and escalating, managing, and reporting control issues with transparency. 
Qualifications:
- 6-10 years of relevant experience
- Experience in systems analysis and programming of software applications
- Experience in managing and implementing successful projects
- Working knowledge of consulting/project management techniques and methods
- Ability to work under pressure and manage deadlines or unexpected changes in expectations or requirements

Education:
- Bachelor's degree/University degree or equivalent experience

In addition to the general responsibilities and qualifications above, the ideal candidate should have:
- Strong programming skills in Python
- Proficiency in object-oriented programming and data structures
- Good knowledge of design patterns
- Experience with Python frameworks such as Flask and Django
- Strong technical skills in big data technologies like PySpark and the Hadoop ecosystem components (HDFS, HBase, Hive, Pig)
- Strong experience in PySpark
- Solid understanding of REST web services
- Familiarity with Spark performance tuning and optimization techniques
- Knowledge of databases including PL/SQL, SQL, and Transact-SQL, with Oracle a plus
- Experience in processing data in various file types such as flat files, XML, Parquet, and CSV, and with data frames (a multi-format sketch follows this listing)
- Good exposure to UI frameworks and the ability to understand UI architecture
- Proficiency in source code management tools like Git
- Experience in Agile methodology
- Familiarity with issue-tracking tools like Jira

This job description provides a high-level overview of the responsibilities and qualifications for the Applications Development Senior Programmer Analyst role. Other job-related duties may be assigned as required.
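The multi-format processing requirement (flat files, XML, Parquet, CSV) can be illustrated in a short PySpark sketch; all paths and options are hypothetical, and the XML reader assumes the third-party spark-xml package is installed:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("multi-format").getOrCreate()

# Each reader yields a DataFrame, so downstream logic is format-agnostic.
flat = spark.read.option("delimiter", "|").csv("/in/flat_files/", header=True)
csv = spark.read.csv("/in/csv/", header=True, inferSchema=True)
parquet = spark.read.parquet("/in/parquet/")

# XML needs the spark-xml connector (com.databricks:spark-xml) on the
# classpath; rowTag names the repeating element that becomes one row.
xml = (
    spark.read.format("xml")
    .option("rowTag", "record")
    .load("/in/xml/")
)

# Align schemas by column name before combining heterogeneous sources.
combined = csv.unionByName(parquet, allowMissingColumns=True)
combined.write.mode("overwrite").parquet("/out/combined/")
```

Normalizing everything to DataFrames early is what lets one transformation layer serve all the input formats the listing names.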

Posted 2 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

karnataka

On-site

As a member of the Flipkart team focused on GenZ, you will be at the forefront of the company's strategic growth bet on Video & Live Commerce. The core pillars of this capability are enhancing user experience, empowering creators, and encouraging seller/brand participation. Your primary goal will be to videofy the Flipkart app across various discovery points such as the homepage, S&B, and the Product Page, while also creating a dedicated discovery destination where users can explore inspirational content akin to TikTok or Instagram Reels.

You will be instrumental in developing a next-generation live streaming experience that supports concurrent livestreams for millions of users. Additionally, your role will involve leading the development of cutting-edge systems aimed at enhancing personalization through relevant product discovery for each user. Leveraging GenAI technology, you will drive automated quality control of images, videos, creators, and content to deliver a more personalized shopping experience.

Your responsibilities will include driving hyper-personalization of the user experience using Machine Learning and Data Science techniques at various stages of the funnel. By utilizing data-driven insights and a growth mindset, you will continuously strive to enhance user experience at scale, ensuring the delivery of video reels with minimal size compression and latency.

From a technical perspective, you will work with a cutting-edge tech stack that includes technologies and frameworks like Kafka, Zookeeper, Apache Pulsar, Spark, Bigtable, HBase, Redis, MongoDB, Elasticsearch, Docker, Kubernetes, and video technologies such as OBS, RTMP, Jitsi, and transcoders. Your role will involve collaborating with diverse stakeholders to deliver scalable, high-quality technology solutions, while also facilitating platform solutions that extend beyond your team to the wider ecosystem.

As an Engineering Manager (EM), you will lead a team of engineers across different levels, guiding them towards realizing Flipkart's vision. You will be responsible for setting the direction and long-term vision for the team and partnering with product, business, and other stakeholders to bring this vision to life. Your role will involve providing technical leadership, creating clear career paths for team members, attracting and retaining top talent, driving strategy and vision, and fostering a strong team culture of responsiveness and agility in execution.

Overall, as a key member of the Flipkart team, you will play a crucial role in driving innovation, personalization, and growth in the realm of Video & Live Commerce, while contributing to the technical excellence and strategic direction of the organization.

Posted 2 weeks ago

Apply

5.0 - 8.0 years

3 - 7 Lacs

Bengaluru

Work from Office

Role Purpose: The purpose of this role is to design, test, and maintain software programs for operating systems or applications to be deployed at a client end, and to ensure they meet 100% quality assurance parameters.

Do:

1. Be instrumental in understanding the requirements and design of the product/software:
- Develop software solutions by studying information needs, systems flow, data usage, and work processes
- Investigate problem areas throughout the software development life cycle
- Facilitate root cause analysis of system issues and problem statements
- Identify ideas to improve system performance and availability
- Analyze client requirements and convert requirements into feasible designs
- Collaborate with functional teams or systems analysts who carry out the detailed investigation into software requirements
- Confer with project managers to obtain information on software capabilities

2. Perform coding and ensure optimal software/module development:
- Determine operational feasibility by evaluating analysis, problem definition, requirements, software development, and proposed software
- Develop and automate processes for software validation by setting up and designing test cases/scenarios/usage cases and executing them
- Modify software to fix errors, adapt it to new hardware, improve its performance, or upgrade interfaces
- Analyze information to recommend and plan the installation of new systems or modifications of an existing system
- Ensure that code is error-free, with no bugs or test failures
- Prepare reports on programming project specifications, activities, and status
- Ensure all issues are raised per the norms defined for the project/program/account, with clear descriptions and replication patterns
- Compile timely, comprehensive, and accurate documentation and reports as requested
- Coordinate with the team on daily project status and progress, and document it
- Provide feedback on usability and serviceability, trace results to quality risk, and report them to concerned stakeholders

3. Status reporting and customer focus on an ongoing basis with respect to the project and its execution:
- Capture all requirements and clarifications from the client for better-quality work
- Take feedback regularly to ensure smooth and on-time delivery
- Participate in continuing education and training to remain current on best practices, learn new programming languages, and better assist other team members
- Consult with engineering staff to evaluate software-hardware interfaces and develop specifications and performance requirements
- Document and demonstrate solutions by developing documentation, flowcharts, layouts, diagrams, charts, code comments, and clear code
- Document necessary details and reports formally for proper understanding of the software, from client proposal to implementation
- Ensure good quality of interaction with the customer with respect to e-mail content, fault report tracking, voice calls, business etiquette, etc.
- Respond to customer requests in a timely manner, with no complaints either internally or externally

Mandatory Skills: Scala programming
Experience: 5-8 years

Posted 2 weeks ago

Apply