
383 HBase Jobs - Page 5

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

4.0 - 8.0 years

6 - 10 Lacs

Pune

Work from Office

About Us: Soul AI is a pioneering company founded by IIT Bombay and IIM Ahmedabad alumni, with a strong founding team from IITs, NITs, and BITS. We specialize in delivering high-quality human-curated data, AI-first scaled operations services, and more. Based in SF and Hyderabad, we are a young, fast-moving team on a mission to build AI for Good, driving innovation and positive societal impact. We are seeking skilled Scala Developers with a minimum of 1 year of development experience to join us as freelancers and contribute to impactful projects.

Key Responsibilities: Write clean, efficient code for data processing and transformation. Debug and resolve technical issues. Evaluate and review code to ensure quality and compliance.

Required Qualifications: 1+ year of Scala development experience. Strong expertise in Scala programming and functional programming concepts. Experience with frameworks like Akka or Spark. Should be skilled in building scalable, distributed systems and big data applications.

Why Join Us: Competitive pay (₹1000/hour). Flexible hours. Remote opportunity.

NOTE: Pay will vary by project and is typically up to Rs 1000 per hour (if you work an average of 3 hours every day, that could be as high as Rs 90K per month) once you clear our screening process. Shape the future of AI with Soul AI!

Posted 2 weeks ago

Apply

4.0 - 8.0 years

6 - 10 Lacs

Mumbai

Work from Office

About Us: Soul AI is a pioneering company founded by IIT Bombay and IIM Ahmedabad alumni, with a strong founding team from IITs, NITs, and BITS. We specialize in delivering high-quality human-curated data, AI-first scaled operations services, and more. Based in SF and Hyderabad, we are a young, fast-moving team on a mission to build AI for Good, driving innovation and positive societal impact. We are seeking skilled Scala Developers with a minimum of 1 year of development experience to join us as freelancers and contribute to impactful projects.

Key Responsibilities: Write clean, efficient code for data processing and transformation. Debug and resolve technical issues. Evaluate and review code to ensure quality and compliance.

Required Qualifications: 1+ year of Scala development experience. Strong expertise in Scala programming and functional programming concepts. Experience with frameworks like Akka or Spark. Should be skilled in building scalable, distributed systems and big data applications.

Why Join Us: Competitive pay (₹1000/hour). Flexible hours. Remote opportunity.

NOTE: Pay will vary by project and is typically up to Rs 1000 per hour (if you work an average of 3 hours every day, that could be as high as Rs 90K per month) once you clear our screening process. Shape the future of AI with Soul AI!

Posted 2 weeks ago

Apply

4.0 - 8.0 years

6 - 10 Lacs

Kolkata

Work from Office

About Us: Soul AI is a pioneering company founded by IIT Bombay and IIM Ahmedabad alumni, with a strong founding team from IITs, NITs, and BITS. We specialize in delivering high-quality human-curated data, AI-first scaled operations services, and more. Based in SF and Hyderabad, we are a young, fast-moving team on a mission to build AI for Good, driving innovation and positive societal impact. We are seeking skilled Scala Developers with a minimum of 1 year of development experience to join us as freelancers and contribute to impactful projects.

Key Responsibilities: Write clean, efficient code for data processing and transformation. Debug and resolve technical issues. Evaluate and review code to ensure quality and compliance.

Required Qualifications: 1+ year of Scala development experience. Strong expertise in Scala programming and functional programming concepts. Experience with frameworks like Akka or Spark. Should be skilled in building scalable, distributed systems and big data applications.

Why Join Us: Competitive pay (₹1000/hour). Flexible hours. Remote opportunity.

NOTE: Pay will vary by project and is typically up to Rs 1000 per hour (if you work an average of 3 hours every day, that could be as high as Rs 90K per month) once you clear our screening process. Shape the future of AI with Soul AI!

Posted 2 weeks ago

Apply

4.0 - 7.0 years

6 - 9 Lacs

Bengaluru

Work from Office

What this job involves: JLL, an international real estate management company, is seeking a Data Engineer to join our JLL Technologies Enterprise Data team. We are looking for self-starters who can work in a diverse and fast-paced environment and who will be responsible for designing and developing data solutions that are strategic for the business, using the latest technologies: Azure Databricks, Python, PySpark, Spark SQL, Azure Functions, Delta Lake, and Azure DevOps CI/CD.

Responsibilities: Design, architect, and develop solutions leveraging cloud big data technology to ingest, process, and analyze large, disparate data sets to exceed business requirements. Design and develop data management and data persistence solutions for application use cases leveraging relational and non-relational databases, enhancing our data processing capabilities. Develop POCs to influence platform architects, product managers, and software engineers to validate solution proposals and migrate. Develop data lake solutions to store structured and unstructured data from internal and external sources, and provide technical guidance to help colleagues migrate to the modern technology platform. Contribute and adhere to CI/CD processes and development best practices, and strengthen the discipline in the Data Engineering org. Develop systems that ingest, cleanse, and normalize diverse datasets, develop data pipelines from various internal and external sources, and build structure for previously unstructured data. Using PySpark and Spark SQL, extract, manipulate, and transform data from various sources, such as databases, data lakes, APIs, and files, to prepare it for analysis and modeling. Build and optimize ETL workflows using Azure Databricks and PySpark, including efficient data processing pipelines, data validation, error handling, and performance tuning. Perform unit testing, system integration testing, and regression testing, and assist with user acceptance testing. Articulate business requirements in a technical solution that can be designed and engineered. Consult with the business to develop documentation and communication materials to ensure accurate usage and interpretation of JLL data. Implement data security best practices, including data encryption, access controls, and compliance with data protection regulations; ensure data privacy, confidentiality, and integrity throughout the data engineering processes. Perform the data analysis required to troubleshoot data-related issues and assist in their resolution.

Experience & Education: Minimum of 4 years of experience as a data developer using Python, PySpark, Spark SQL, SQL Server, and ETL concepts. Bachelor's degree in Information Science, Computer Science, Mathematics, Statistics, or a quantitative discipline in science, business, or social science. Experience with the Azure cloud platform, Databricks, and Azure storage. Effective written and verbal communication skills, including technical writing. Excellent technical, analytical, and organizational skills.

Technical Skills & Competencies: Experience handling unstructured and semi-structured data, working in a data lake environment, leveraging data streaming, and developing data pipelines driven by events/queues. Hands-on experience and knowledge of real-time/near-real-time processing, and ready to code. Hands-on experience in PySpark, Databricks, and Spark SQL. Knowledge of JSON, Parquet, and other file formats and the ability to work effectively with them. Knowledge of NoSQL databases such as HBase, MongoDB, and Cosmos DB. Preferred: cloud experience on Azure or AWS; Python/Spark, Spark Streaming, Azure SQL Server, Cosmos DB/MongoDB, Azure Event Hubs, Azure Data Lake Storage, Azure Search, etc. Team player; reliable, self-motivated, and self-disciplined individual capable of executing multiple projects simultaneously within a fast-paced environment, working with cross-functional teams.
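As a rough illustration of the PySpark/Databricks ETL work this listing describes, here is a minimal extract-transform-load sketch. The paths, column names, and validation rule are hypothetical placeholders; on Databricks the `spark` session is already provided, and Delta Lake could be used in place of Parquet for the output.

```python
# Minimal PySpark ETL sketch: read raw JSON, validate/clean, write a partitioned curated table.
# Paths, columns, and the validation rule below are illustrative assumptions only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("lease-events-etl").getOrCreate()  # on Databricks, `spark` already exists

# Extract: raw, semi-structured events landed in the data lake
raw = spark.read.json("/mnt/raw/lease_events/")

# Transform: normalize types, drop malformed rows, derive a partition column
clean = (
    raw.withColumn("event_ts", F.to_timestamp("event_ts"))
       .filter(F.col("property_id").isNotNull())        # basic data validation
       .withColumn("event_date", F.to_date("event_ts"))  # partition key
       .dropDuplicates(["event_id"])
)

# Load: write a partitioned, query-friendly dataset (Delta could replace Parquet on Databricks)
(clean.write.mode("overwrite")
      .partitionBy("event_date")
      .parquet("/mnt/curated/lease_events/"))
```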

Posted 2 weeks ago

Apply

5.0 years

7 Lacs

Hyderabad

Work from Office

Design, implement, and optimize Big Data solutions using Hadoop and Scala. You will manage data processing pipelines, ensure data integrity, and perform data analysis. Expertise in Hadoop ecosystem, Scala programming, and data modeling is essential for this role.

Posted 2 weeks ago

Apply

2.0 - 6.0 years

4 - 8 Lacs

Bengaluru

Work from Office

The Apache Spark, Digital: Scala role involves working with relevant technologies, ensuring smooth operations, and contributing to business objectives. Responsibilities include analysis, development, implementation, and troubleshooting within the Apache Spark, Digital: Scala domain.

Posted 2 weeks ago

Apply

2.0 - 5.0 years

4 - 7 Lacs

Bengaluru

Work from Office

The Digital: BigData and Hadoop Ecosystems, Digital: PySpark role involves working with relevant technologies, ensuring smooth operations, and contributing to business objectives. Responsibilities include analysis, development, implementation, and troubleshooting within the Digital: BigData and Hadoop Ecosystems, Digital: PySpark domain.

Posted 2 weeks ago

Apply

2.0 - 4.0 years

4 - 6 Lacs

Bengaluru

Work from Office

The Big Data (Scala, HIVE) role involves working with relevant technologies, ensuring smooth operations, and contributing to business objectives. Responsibilities include analysis, development, implementation, and troubleshooting within the Big Data (Scala, HIVE) domain.

Posted 2 weeks ago

Apply

2.0 - 4.0 years

4 - 6 Lacs

Bengaluru

Work from Office

The Digital: Scala role involves working with relevant technologies, ensuring smooth operations, and contributing to business objectives. Responsibilities include analysis, development, implementation, and troubleshooting within the Digital: Scala domain.

Posted 2 weeks ago

Apply

2.0 - 5.0 years

4 - 7 Lacs

Bengaluru

Work from Office

The Big Data (PySpark, Hive) role involves working with relevant technologies, ensuring smooth operations, and contributing to business objectives. Responsibilities include analysis, development, implementation, and troubleshooting within the Big Data (PySpark, Hive) domain.

Posted 2 weeks ago

Apply

2.0 - 5.0 years

4 - 7 Lacs

Bengaluru

Work from Office

The PySpark role involves working with relevant technologies, ensuring smooth operations, and contributing to business objectives. Responsibilities include analysis, development, implementation, and troubleshooting within the PySpark domain.

Posted 2 weeks ago

Apply

2.0 - 5.0 years

4 - 7 Lacs

Hyderabad

Work from Office

The PySpark role involves working with relevant technologies, ensuring smooth operations, and contributing to business objectives. Responsibilities include analysis, development, implementation, and troubleshooting within the PySpark domain.

Posted 2 weeks ago

Apply

2.0 - 4.0 years

4 - 6 Lacs

Chennai

Work from Office

The Big Data (Scala, HIVE) role involves working with relevant technologies, ensuring smooth operations, and contributing to business objectives. Responsibilities include analysis, development, implementation, and troubleshooting within the Big Data (Scala, HIVE) domain.

Posted 2 weeks ago

Apply

2.0 - 5.0 years

4 - 7 Lacs

Chennai

Work from Office

The Big Data (PySpark, Python) role involves working with relevant technologies, ensuring smooth operations, and contributing to business objectives. Responsibilities include analysis, development, implementation, and troubleshooting within the Big Data (PySpark, Python) domain.

Posted 2 weeks ago

Apply

2.0 - 5.0 years

4 - 7 Lacs

Bengaluru

Work from Office

The Digital: BigData and Hadoop Ecosystems, Digital: Kafka role involves working with relevant technologies, ensuring smooth operations, and contributing to business objectives. Responsibilities include analysis, development, implementation, and troubleshooting within the Digital: BigData and Hadoop Ecosystems, Digital: Kafka domain.

Posted 2 weeks ago

Apply

2.0 - 5.0 years

14 - 17 Lacs

Bengaluru

Work from Office

As a Big Data Engineer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include: developing, maintaining, evaluating, and testing big data solutions, and data engineering activities such as creating pipelines/workflows from source to target and implementing solutions that address the client's needs.

Required education: Bachelor's degree. Preferred education: Master's degree.

Required technical and professional expertise: Big Data development with Hadoop, Hive, Spark, and PySpark; strong SQL. Ability to incorporate a variety of statistical and machine learning techniques. Basic understanding of cloud (AWS, Azure, etc.). Ability to use programming languages like Java, Python, and Scala to build pipelines that extract and transform data from a repository to a data consumer. Ability to use Extract, Transform, and Load (ETL) tools and/or data integration or federation tools to prepare and transform data as needed. Ability to use leading-edge tools such as Linux, SQL, Python, Spark, Hadoop, and Java.

Preferred technical and professional experience: Basic understanding of or experience with predictive/prescriptive modeling. You thrive on teamwork and have excellent verbal and written communication skills. Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions.
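A simple source-to-target batch step of the kind this listing mentions can be expressed with Spark SQL over Hive tables. The sketch below is illustrative only; the database and table names, join keys, and aggregation are assumptions, and it assumes a Hive metastore is configured.

```python
# Sketch of a source-to-target pipeline step using Spark SQL (table names are hypothetical).
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("source-to-target")
         .enableHiveSupport()   # assumes a Hive metastore is available
         .getOrCreate())

# Join a raw source table with a reference table and aggregate for the target
result = spark.sql("""
    SELECT c.region,
           SUM(t.amount) AS total_amount,
           COUNT(*)      AS txn_count
    FROM   raw_db.transactions t
    JOIN   ref_db.customers    c ON t.customer_id = c.customer_id
    WHERE  t.load_date = current_date()
    GROUP  BY c.region
""")

# Persist the curated output for downstream consumers
result.write.mode("overwrite").saveAsTable("curated_db.daily_region_totals")
```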

Posted 2 weeks ago

Apply

8.0 - 10.0 years

10 - 12 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office

Big Data Engineer (Remote, Contract 6 Months+). We are looking for a Senior Big Data Engineer with deep expertise in large-scale data processing technologies and frameworks. This is a remote, contract-based position suited for a data engineering expert with strong experience in the Big Data ecosystem, including Snowflake (Snowpark), Spark, MapReduce, Hadoop, and more.

#KeyResponsibilities Design, develop, and maintain scalable data pipelines and big data solutions. Implement data transformations using Spark, Snowflake (Snowpark), Pig, and Sqoop. Process large data volumes from diverse sources using Hadoop ecosystem tools. Build end-to-end data workflows for batch and streaming pipelines. Optimize data storage and retrieval processes in HBase, Hive, and other NoSQL databases. Collaborate with data scientists and business stakeholders to design robust data infrastructure. Ensure data integrity, consistency, and security in line with organizational policies. Troubleshoot and tune performance for distributed systems and applications.

#MustHaveSkills Data engineering / Big Data tools: Snowflake (Snowpark), Spark, MapReduce, Hadoop, Sqoop, Pig, HBase. Data ingestion & ETL, data pipeline design, distributed computing. Strong understanding of Big Data architectures and performance tuning. Hands-on experience with large-scale data storage and query optimization.

#NiceToHave Apache Airflow / Oozie experience. Knowledge of cloud platforms (AWS, Azure, or GCP). Proficiency in Python or Scala. CI/CD and DevOps exposure.

#ContractDetails Role: Senior Big Data Engineer. Locations: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote. Duration: 6+ Months (Contract). Apply via Email: navaneeta@suzva.com. Contact: 9032956160.

#HowToApply Send your updated resume with the subject "Application for Remote Big Data Engineer Contract Role". Include in your email: updated resume, current CTC, expected CTC, current location, notice period / availability.
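Since the role calls for optimizing storage and retrieval in HBase, here is a small, hedged sketch of HBase access from Python using the happybase client, one common option. It assumes an HBase Thrift server is running; the host, table, row keys, and column names are placeholders.

```python
# Illustrative HBase read/write from Python via happybase (all names below are hypothetical).
import happybase

connection = happybase.Connection(host="hbase-thrift.example.com", port=9090)
table = connection.table("user_events")

# Write one row: row key plus column-family-qualified columns (HBase stores raw bytes)
table.put(b"user123#2024-01-01", {
    b"evt:type": b"login",
    b"evt:device": b"mobile",
})

# Point lookup by row key
row = table.row(b"user123#2024-01-01")
print(row.get(b"evt:type"))

# Prefix scan over one user's events, a common access pattern for time-series row keys
for key, data in table.scan(row_prefix=b"user123#"):
    print(key, data)

connection.close()
```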

Posted 2 weeks ago

Apply

5.0 - 7.0 years

12 - 16 Lacs

Pune

Work from Office

In one sentence: Responsible for the design, development, modification, debugging and/or maintenance of software systems. Works on specific modules, applications or technologies, and deals with sophisticated assignments during the software development process.

All you need is... 5 - 7 years of proven experience as a software engineering specialist. Familiarity with Agile concepts is a must. Strong development experience and expertise in core C++ and shell scripting. Good analytical skills. Product knowledge in Turbo Charging and Rating Logic. Should have knowledge of either a relational or non-relational database: SQL/PG and HBase. Should be good in communication and team engagement skills. Must be comfortable working in a fast-paced environment. Any experience with microservice architecture and DevOps knowledge would be an added advantage. Any experience with Elasticsearch, Kafka, or design patterns would be an added advantage.

What will your job look like? You will provide technical leadership to software engineers by coaching and mentoring throughout end-to-end software development, maintenance, and lifecycle to achieve project goals to the required level of quality; promote team engagement and motivation. Provide recommendations to the software engineering manager for estimates, resource needs, breakthroughs, and risks; ensure effective delegation, supervising tasks, identifying risks, and handling mitigation and critical issues. Provide hands-on technical and functional mentorship for the design, maintenance, build, integration, and testing of sophisticated software components according to functional and technical design specifications; follow software development methodologies and release processes. You will analyze and report the requirements and provide impact assessment for new features or bug fixes. Make high-level designs and establish technical standards. You will represent and lead discussions related to the product/application/modules/team and build relationships with internal customers/partners. You will implement quality processes (such as performing technical root cause analysis and outlining corrective actions for given problems), measure them, take corrective actions in case of variances, and ensure all agreed project work is completed to the required level of quality.

Why you will love this job: The chance to serve as a specialist in software and technology. You will take an active role in technical mentoring within the team. We provide stellar benefits from health to dental to paid time off and parental leave!

Posted 2 weeks ago

Apply

3.0 - 6.0 years

13 - 23 Lacs

Gurugram

Work from Office

Looking for an experienced Big Data Developer to develop, maintain, and optimize our big data solutions. Required experience: Java, Spark, API development, Hadoop, HDFS, Hive, HBase, and Kafka.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

14 - 17 Lacs

Pune

Work from Office

As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities like creating pipelines/workflows from source to target and implementing solutions that address the client's needs.

Your primary responsibilities include: Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements. Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing, data-driven organization. Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to.

Required education: Bachelor's degree. Preferred education: Master's degree.

Required technical and professional expertise: Must have 5+ years of experience in Big Data: Hadoop, Spark, Scala, Python, HBase, Hive. Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git. Experience developing Python and PySpark programs for data analysis. Good working experience with Python to develop a custom framework for generating rules (like a rules engine). Experience developing Python code to gather data from HBase and designing the solution to implement it using PySpark, with Apache Spark DataFrames/RDDs used to apply business transformations and Hive context objects used to perform read/write operations.

Preferred technical and professional experience: Understanding of DevOps. Experience in building scalable end-to-end data ingestion and processing solutions. Experience with object-oriented and/or functional programming languages such as Python, Java, and Scala.
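As a hedged illustration of the "rules engine"-style PySpark transformation work mentioned above: in current Spark versions, a SparkSession with Hive support replaces the older HiveContext. The source table (assumed here to be a Hive table populated from HBase by a separate export or connector job), the rule definitions, and the output table are all hypothetical.

```python
# Sketch of DataFrame-based business transformations driven by a small, configurable rule set.
from pyspark.sql import SparkSession, functions as F

spark = (SparkSession.builder
         .appName("hbase-offload-transform")
         .enableHiveSupport()
         .getOrCreate())

# Source: a Hive table assumed to be populated from HBase upstream
events = spark.table("staging_db.hbase_user_events")

# A tiny "rules engine": each rule name maps to a boolean condition over the data
rules = {
    "is_high_value": F.col("amount") > 10000,
    "is_night_txn": F.hour("event_ts").isin(list(range(0, 6))),
}
scored = events
for name, condition in rules.items():
    scored = scored.withColumn(name, condition.cast("boolean"))

# Write the enriched table back for analysts
scored.write.mode("overwrite").saveAsTable("curated_db.user_events_scored")
```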

Posted 2 weeks ago

Apply

8.0 - 11.0 years

35 - 37 Lacs

Kolkata, Ahmedabad, Bengaluru

Work from Office

Dear Candidate, we are hiring a Clojure Developer to build modern applications with simplicity, immutability, and strong functional design.

Key Responsibilities: Write functional code using Clojure and ClojureScript. Develop APIs, web apps, or backends using Ring, Compojure, or Luminus. Work with immutable data structures and REPL-driven development. Build scalable microservices or event-driven systems. Maintain clean, modular, and expressive codebases.

Required Skills & Qualifications: Strong experience with Clojure, Leiningen, and core.async. Familiarity with functional programming and persistent data structures. Experience integrating with Java, Datomic, or Kafka. Bonus: frontend experience with Reagent or Re-frame.

Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you. Kandi Srinivasa Reddy, Delivery Manager, Integra Technologies.

Posted 2 weeks ago

Apply

1.0 - 5.0 years

11 - 15 Lacs

Noida

Work from Office

You will spend time ensuring the products have the best technical design and architecture; you will be supported by peers and team members in creating best-in-class technical solutions. Identify technical challenges proactively and provide effective solutions to overcome them, ensuring the successful implementation of features and functionality. Quickly respond to business needs and client-facing teams' demands for features, enhancements, and bug fixes. Work with senior Ripik.AI tech and AI leaders in shaping and scaling the software products and Ripik's proprietary platform for hosting manufacturing-focused AI and ML software products.

Required Skills & Experience: You should have 3+ years of experience, with deep expertise in Java, Golang, and Python.

Must have: Expert in coding for business logic, server scripts, and application programming interfaces (APIs). Excellent at writing optimal SQL queries for backend databases and performing CRUD operations on databases from applications. Exposure to relational databases (MySQL, Postgres) and non-relational databases (MongoDB, graph-based databases, HBase, cloud-native big data stores); willing to learn and ramp up on multiple database technologies. Must have experience with at least one public cloud platform (GCP/Azure/AWS; GCP preferred).

Good to have: Basic knowledge of advanced analytics / machine learning / artificial intelligence (you will collaborate with ML engineers to build the backend of AI-enabled apps).
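For the "CRUD operations on databases from applications" requirement, here is a minimal, self-contained sketch of parameterized SQL from application code. sqlite3 is used only so the example runs without a server; with MySQL or Postgres the same pattern applies through their DB-API drivers. The table and values are hypothetical.

```python
# Self-contained sketch of parameterized CRUD (table, columns, and values are illustrative).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE plants (
        id        INTEGER PRIMARY KEY,
        name      TEXT NOT NULL,
        yield_tph REAL
    )
""")

# Create: bind values with placeholders, never string formatting (avoids SQL injection)
conn.execute("INSERT INTO plants (name, yield_tph) VALUES (?, ?)", ("Kiln-1", 42.5))

# Read
row = conn.execute("SELECT id, name, yield_tph FROM plants WHERE name = ?", ("Kiln-1",)).fetchone()
print(row)

# Update
conn.execute("UPDATE plants SET yield_tph = ? WHERE name = ?", (45.0, "Kiln-1"))

# Delete
conn.execute("DELETE FROM plants WHERE name = ?", ("Kiln-1",))
conn.commit()
conn.close()
```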

Posted 2 weeks ago

Apply

6.0 - 8.0 years

8 - 13 Lacs

Bengaluru

Work from Office

What you'll be doing... Turn ideas into innovative products with design, development, deployment, and support throughout the product/software development life cycle. Develop key software components of high-quality products. Participate in requirement gathering, idea validation, and concept prototyping. Design end-to-end solutions to bring ideas into innovative products. Refine product designs to provide an excellent user experience. Develop/code key software components of products. Integrate key software components with various network systems like Messaging, Calling, Address Book, Billing, and Provisioning. Work with system engineers to create system/network designs and architecture. Work with performance engineers to refine software design and code to improve performance and capacity. Use agile and iterative methods to demo product features and refine the user experience.

What we're looking for... You'll need to have: Bachelor's degree or six or more years of work experience. Experience in developing software products. Experience with agile software development. Advanced knowledge of application, data, and infrastructure architecture disciplines. Understanding of architecture and design across all systems. Working proficiency in development toolsets. Experience with Java/J2EE, Spring Boot/MVC, JMS, and Kafka. Experience developing front-end website architecture. Experience with development frameworks such as ReactJS or AngularJS. Proficiency with front-end languages such as HTML, CSS, and JavaScript. Experience designing and developing APIs. Ability to gather requirements and provide solutions independently. Knowledge of databases (Oracle), Linux/Unix, NoSQL DBs (e.g., MongoDB, HBase), caching mechanisms, load balancing, and multi-data-center architecture. Knowledge of microservice architecture, cloud computing, Docker containers, RESTful APIs, and EKS. Familiarity with developing and deploying services in AWS. Knowledge of object-oriented design, Agile Scrum, and test-driven development. Knowledge of Linux and troubleshooting.

Even better if you have one or more of the following: Good written and verbal communication, listening, negotiation, and presentation skills. Knowledge/exposure/expertise in the Go programming language is a plus.

Posted 2 weeks ago

Apply

2.0 - 5.0 years

4 - 7 Lacs

Bengaluru

Work from Office

Job Summary: A person at this position has gained significant work experience to be able to apply their knowledge effectively and deliver results. They are also able to demonstrate the ability to analyse and interpret complex problems and to improve, change, or adapt existing methods to solve the problem. They regularly interact with interfacing groups / customers on technical issue clarification and resolve the issues, participate actively in important project/work-related activities, contribute towards identifying important issues and risks, and reach out for guidance and advice to ensure high quality of deliverables. They consistently seek opportunities to enhance their existing skills, acquire more complex skills, and work towards enhancing their proficiency level in their field of specialisation. Works under limited supervision of the Team Lead / Project Manager.

Roles & Responsibilities: Responsible for design, coding, testing, bug fixing, documentation, and technical support in the assigned area. Responsible for on-time delivery while adhering to quality and productivity goals. Responsible for adhering to guidelines and checklists for all deliverable reviews, sending status reports to the team lead, and following relevant organizational processes. Responsible for customer collaboration and interactions and support for customer queries. Expected to enhance technical capabilities by attending trainings, self-study, and periodic technical assessments. Expected to participate in technical initiatives related to the project and organization and deliver training as per plan and quality.

Education and Experience Required: Engineering graduate, MCA, etc. Experience: 2-5 years.

Competencies Description: Data engineering TCB is applicable to one who 1) creates databases and storage for relational and non-relational data sources, 2) develops data pipelines (ETL/ELT) to clean, transform, and merge data sources into a usable format, 3) creates a reporting layer with pre-packaged scheduled reports, dashboards, and charts for self-service BI, 4) has experience implementing data workflows on cloud platforms such as AWS, Azure, and GCP, and 5) has experience with tools like MongoDB, Hive, HBase, Spark, Tableau, PowerBI, Python, Scala, SQL, ElasticSearch, etc.

Platforms: AWS, Azure, GCP. Technology Standard: NA. Tools: MongoDB, Hive, HBase, Tableau, PowerBI, ElasticSearch, QlikView. Languages: Python, R, Spark, Scala, SQL. Specialization: DWH, Big Data Engineering, Edge Analytics.

Posted 2 weeks ago

Apply

11.0 - 15.0 years

50 - 100 Lacs

Hyderabad

Work from Office

Uber is looking for a Staff Software Engineer - Data to join our dynamic team and embark on a rewarding career journey. Responsibilities include: liaising with coworkers and clients to elucidate the requirements for each task; conceptualizing and generating infrastructure that allows big data to be accessed and analyzed; reformulating existing frameworks to optimize their functioning; testing such structures to ensure that they are fit for use; preparing raw data for manipulation by data scientists; detecting and correcting errors in your work; ensuring that your work remains backed up and readily accessible to relevant coworkers; and remaining up to date with industry standards and technological advancements that will improve the quality of your outputs.

Posted 2 weeks ago

Apply
Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
