
921 Sqoop Jobs - Page 18

JobPe aggregates job listings for easy access, but applications are submitted directly on the original job portal.

5.0 - 10.0 years

6 - 9 Lacs

Hyderabad

Hybrid

1. Understand the NBA requirements.
2. Provide subject matter expertise on Pega CDH from a technology perspective.
3. Participate actively in the creation and review of the conceptual design, detailed design, and estimations.
4. Implement the NBAs as per the agreed requirements/solution.
5. Support end-to-end testing and provide fixes with quick turnaround time (TAT).
6. Deployment knowledge to manage the implementation activities.
7. Experience in Pega CDH v8.8 multi-app or 24.1 and the retail banking domain is preferred.
8. Good communication skills.
Locations: Bangalore/Chennai/Hyderabad/Kolkata | Notice Period: Immediate | Employment Type: Contract

Posted 1 month ago

Apply

7.0 - 12.0 years

4 - 8 Lacs

Pune

Hybrid

Should be capable of developing/configuring data pipelines in a variety of platforms and technologies.
Technical skills: SQL, Python, PySpark, Hive, ETL, Unix, Control-M (or similar scheduling tools).
Can demonstrate strong experience writing complex SQL queries to perform data analysis on databases such as SQL Server, Oracle, Hive, etc.
Experience with GCP (particularly Airflow, Dataproc, BigQuery) is an advantage.
Experience creating solutions which power AI/ML models and generative AI.
Ability to work independently on specialized assignments within the context of project deliverables.
Take ownership of providing solutions and tools that iteratively increase engineering efficiencies.
Capable of creating designs which help embed standard processes, systems and operational models into the BAU approach for end-to-end execution of data pipelines.
Able to demonstrate problem-solving and analytical abilities, including the ability to critically evaluate information gathered from multiple sources, reconcile conflicts, decompose high-level information into details, and apply sound business and technical domain knowledge.
Communicate openly and honestly using sophisticated oral, written and visual communication and presentation skills; the ability to communicate efficiently at a global level is paramount.
Ability to deliver materials of the highest quality to management against tight deadlines.
Ability to work effectively under pressure with competing and rapidly changing priorities.
Regards, IT Recruiter, IDESLABS (P) LTD | srilakshmi.k@ideslabs.com
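For illustration only (not part of the posting), a minimal PySpark sketch of the kind of analytical SQL this role describes, assuming a hypothetical Hive table named sales.transactions:

```python
from pyspark.sql import SparkSession

# Minimal sketch: run an analytical SQL query (window function) against a
# hypothetical Hive table. Table and column names are illustrative only.
spark = SparkSession.builder.appName("sql-analysis").enableHiveSupport().getOrCreate()

result = spark.sql("""
    SELECT customer_id,
           txn_date,
           amount,
           SUM(amount) OVER (PARTITION BY customer_id ORDER BY txn_date) AS running_total
    FROM sales.transactions
    WHERE txn_date >= '2024-01-01'
""")

result.show(20, truncate=False)
```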

Posted 1 month ago

Apply

6.0 - 11.0 years

5 - 9 Lacs

Hyderabad, Bengaluru

Work from Office

Skill: Snowflake Developer with Data Build Tool (dbt), ADF, and Python.
Job Description: We are looking for a Data Engineer with experience in data warehouse projects, strong expertise in Snowflake, and hands-on knowledge of Azure Data Factory (ADF) and dbt (Data Build Tool). Proficiency in Python scripting is an added advantage.
Key Responsibilities:
Design, develop, and optimize data pipelines and ETL processes for data warehousing projects.
Work extensively with Snowflake, ensuring efficient data modeling and query optimization.
Develop and manage data workflows using Azure Data Factory (ADF) for seamless data integration.
Implement data transformations, testing, and documentation using dbt.
Collaborate with cross-functional teams to ensure data accuracy, consistency, and security.
Troubleshoot data-related issues.
(Optional) Utilize Python for scripting, automation, and data processing tasks.
Required Skills & Qualifications:
Experience in data warehousing with a strong understanding of best practices.
Hands-on experience with Snowflake (data modeling, query optimization).
Proficiency in Azure Data Factory (ADF) for data pipeline development.
Strong working knowledge of dbt (Data Build Tool) for data transformations.
(Optional) Experience in Python scripting for automation and data manipulation.
Good understanding of SQL and query optimization techniques.
Experience in cloud-based data solutions (Azure).
Strong problem-solving skills and ability to work in a fast-paced environment.
Experience with CI/CD pipelines for data engineering.
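As an illustrative sketch only (not from the posting), here is how a Snowflake transformation step of the kind dbt normally manages might look from Python, using the Snowflake connector; the account, credentials, and table names are placeholders:

```python
import snowflake.connector

# Minimal sketch with the Snowflake Python connector; connection parameters,
# database objects, and the query below are hypothetical.
conn = snowflake.connector.connect(
    account="my_account",   # hypothetical
    user="etl_user",        # hypothetical
    password="***",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="STAGING",
)

try:
    cur = conn.cursor()
    # Example transformation of the kind a dbt model would encapsulate
    cur.execute("""
        CREATE OR REPLACE TABLE STAGING.DAILY_ORDERS AS
        SELECT order_date, COUNT(*) AS order_count, SUM(amount) AS total_amount
        FROM RAW.ORDERS
        GROUP BY order_date
    """)
finally:
    conn.close()
```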

Posted 1 month ago

Apply

5.0 - 9.0 years

6 - 9 Lacs

Bengaluru

Work from Office

Looking for a senior PySpark developer with 6+ years of hands-on experience.
Build and manage large-scale data solutions using tools like PySpark, Hadoop, Hive, Python and SQL.
Create workflows to process data using IBM TWS.
Able to use PySpark to create different reports and handle large datasets.
Use HQL/SQL/Hive for ad-hoc data queries, generate reports, and store data in HDFS.
Able to deploy code using Bitbucket, PyCharm and TeamCity.
Can manage people, communicate with several teams, and explain problems/solutions to the business team in a non-technical manner.
Primary Skill: PySpark-Hadoop-Spark - One to Three Years, Developer / Software Engineer.
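A minimal, illustrative PySpark sketch of the reporting task described above (build a report from a Hive table and store it in HDFS); the table, column, and path names are hypothetical:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Minimal sketch: aggregate a Hive table into a report and write it to HDFS as Parquet.
spark = SparkSession.builder.appName("daily-report").enableHiveSupport().getOrCreate()

report = (
    spark.table("trades.executions")                      # hypothetical Hive table
    .filter(F.col("trade_date") == "2024-06-30")
    .groupBy("desk")
    .agg(F.count("*").alias("trade_count"), F.sum("notional").alias("total_notional"))
)

report.write.mode("overwrite").parquet("hdfs:///reports/trades/2024-06-30")
```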

Posted 1 month ago

Apply

6.0 - 9.0 years

8 - 11 Lacs

Hyderabad

Work from Office

Mandatory skill: ETL, GCP, BigQuery.
Develop, implement, and optimize ETL/ELT pipelines for processing large datasets efficiently.
Work extensively with BigQuery for data processing, querying, and optimization.
Utilize Cloud Storage, Cloud Logging, Dataproc, and Pub/Sub for data ingestion, storage, and event-driven processing.
Perform performance tuning and testing of the ELT platform to ensure high efficiency and scalability.
Debug technical issues, perform root cause analysis, and provide solutions for production incidents.
Ensure data quality, accuracy, and integrity across data pipelines.
Collaborate with cross-functional teams to define technical requirements and deliver solutions.
Work independently on assigned tasks while maintaining high levels of productivity and efficiency.
Skills Required:
Proficiency in SQL and PL/SQL for querying and manipulating data.
Experience in Python for data processing and automation.
Hands-on experience with Google Cloud Platform (GCP), particularly BigQuery (must-have), Cloud Storage, Cloud Logging, Dataproc, and Pub/Sub.
Experience with GitHub and CI/CD pipelines for automation and deployment.
Performance tuning and performance testing of ELT processes.
Strong analytical and debugging skills to resolve data and pipeline issues efficiently.
Self-motivated and able to work independently as an individual contributor.
Good understanding of data modeling, database design, and data warehousing concepts.
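For context only, a minimal sketch of an ELT step with the BigQuery Python client library; the project, dataset, and table names are placeholders and not taken from the posting:

```python
from google.cloud import bigquery

# Minimal sketch: run a query and materialize the result into a destination table.
client = bigquery.Client()

destination = "my-project.analytics.daily_events"   # hypothetical table id
job_config = bigquery.QueryJobConfig(
    destination=destination,
    write_disposition="WRITE_TRUNCATE",              # replace the table on each run
)

query = """
    SELECT event_date, event_type, COUNT(*) AS event_count
    FROM `my-project.raw.events`
    GROUP BY event_date, event_type
"""

client.query(query, job_config=job_config).result()  # wait for the job to finish
```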

Posted 1 month ago

Apply

3.0 years

0 Lacs

Gurgaon, Haryana, India

Remote

About This Role
Aladdin Engineering is seeking a talented, hands-on Data Engineer to join its Regulatory Tech team. The Regulatory Tech team provides a comprehensive surveillance solution for Compliance that helps the firm protect itself against market manipulation, fraud and other financial misconduct. Our product is widely used in the firm and is going through a series of feature buildouts so that it can be offered to external clients. We see a lot of potential and exciting times ahead.
As a team, we nurture and develop a culture that is:
Curious: We like to learn new things and have a healthy disrespect for the status quo.
Brave: We are willing to get outside our comfort zone.
Passionate: We feel personal ownership of our work, and strive to be better.
Open: We value and respect others' opinions.
Innovative: We conceptualize, design and implement new capabilities to ensure that Aladdin remains the best platform.
We are seeking an ambitious professional with strong technical experience in data engineering. You have a solid understanding of the software development lifecycle and enjoy working in a team of engineers. The ideal candidate shows aptitude to evaluate and incorporate new technologies. You thrive in a work environment that requires creative problem-solving skills, independent self-direction, open communication and attention to detail. You are a self-starter, comfortable with ambiguity and working in a fast-paced, ever-changing environment. You are passionate about bringing value to clients.
As a Member of the Regulatory Tech Team, You Will
Work with engineers, project managers, technical leads, business owners and analysts throughout the whole SDLC.
Design and implement new features in our core product's data platform and suspicious-activity identification mechanism.
Be brave enough to come up with ideas to improve the resiliency, stability and performance of our platform.
Participate in setting coding standards and guidelines, and identify and document standard methodologies.
Desired Skills And Experience
3+ years of hands-on experience with Python and SQL
Experience with the Snowflake database
Experience with Airflow
Thorough knowledge of Git, CI/CD and unit/end-to-end testing
Interest in data engineering
Solid written and verbal communication skills
Nice To Have
Experience with DBT and Great Expectations frameworks
Experience with Big Data technologies (Spark, Sqoop, HDFS, YARN)
Experience with Agile development
Our Benefits
To help you stay energized, engaged and inspired, we offer a wide range of benefits including a strong retirement plan, tuition reimbursement, comprehensive healthcare, support for working parents and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.
Our Hybrid Work Model
BlackRock's hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person, aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.
About BlackRock At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children’s educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress. This mission would not be possible without our smartest investment – the one we make in our employees. It’s why we’re dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive. For additional information on BlackRock, please visit @blackrock | Twitter: @blackrock | LinkedIn: www.linkedin.com/company/blackrock BlackRock is proud to be an Equal Opportunity Employer. We evaluate qualified applicants without regard to age, disability, family status, gender identity, race, religion, sex, sexual orientation and other protected attributes at law.
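As an illustrative sketch only (not BlackRock's actual pipeline), a minimal Airflow 2.x DAG of the kind the Python/SQL/Airflow skills above would be applied to; the DAG id, task names, and callables are hypothetical:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Minimal sketch of a daily data-pipeline DAG; task bodies are placeholders.

def extract_trades(**context):
    print("pull source data for", context["ds"])

def load_to_warehouse(**context):
    print("load curated data for", context["ds"])

with DAG(
    dag_id="surveillance_daily_load",   # hypothetical
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_trades", python_callable=extract_trades)
    load = PythonOperator(task_id="load_to_warehouse", python_callable=load_to_warehouse)

    extract >> load
```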

Posted 1 month ago

Apply

6.0 - 8.0 years

8 - 10 Lacs

Mumbai

Work from Office

Design and implement data architecture and models for Big Data solutions using MapR and Hadoop ecosystems. You will optimize data storage, ensure data scalability, and manage complex data workflows. Expertise in Big Data, Hadoop, and MapR architecture is required for this position.

Posted 1 month ago

Apply

4.0 - 6.0 years

6 - 8 Lacs

Chennai

Work from Office

Design and implement Big Data solutions using Hadoop and MapR ecosystem. You will work with data processing frameworks like Hive, Pig, and MapReduce to manage and analyze large data sets. Expertise in Hadoop and MapR is required.

Posted 1 month ago

Apply

5.0 - 8.0 years

7 - 10 Lacs

Chennai

Work from Office

Design, implement, and optimize Big Data solutions using Hadoop technologies. You will work on data ingestion, processing, and storage, ensuring efficient data pipelines. Strong expertise in Hadoop, HDFS, and MapReduce is essential for this role.

Posted 1 month ago

Apply

4.0 - 6.0 years

6 - 8 Lacs

Mumbai

Work from Office

Develops data processing solutions using Scala and PySpark.

Posted 1 month ago

Apply

6.0 - 8.0 years

8 - 10 Lacs

Mumbai

Work from Office

Design and implement big data solutions using Hadoop ecosystem tools like MapR. Develop data models, optimize data storage, and ensure seamless integration of big data technologies into enterprise systems.

Posted 1 month ago

Apply

4.0 - 6.0 years

6 - 8 Lacs

Mumbai

Work from Office

Develop and maintain data-driven applications using Scala and PySpark. Work with large datasets, performing data analysis, building data pipelines, and optimizing performance.

Posted 1 month ago

Apply

4.0 - 5.0 years

6 - 7 Lacs

Hyderabad

Work from Office

Design, develop, and maintain data pipelines and data management solutions. Ensure efficient data collection, transformation, and storage for analysis and reporting.

Posted 1 month ago

Apply

4.0 - 5.0 years

6 - 7 Lacs

Hyderabad

Work from Office

Specializes in Public Key Infrastructure (PKI) implementation and certificate management. Responsibilities include configuring digital certificates, managing encryption protocols, and ensuring secure communication channels. Expertise in SSL/TLS, HSMs, and identity management is required.

Posted 1 month ago

Apply

4.0 - 5.0 years

6 - 7 Lacs

Bengaluru

Work from Office

Develop and optimize data pipelines using Databricks and PySpark. Process large-scale data for analytics and reporting. Implement best practices for ETL and data warehousing.
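By way of illustration (not part of the posting), a minimal Databricks-style PySpark ETL sketch; it assumes a Databricks runtime where the `spark` session and Delta Lake are available, and the storage paths are hypothetical:

```python
from pyspark.sql import functions as F

# Minimal sketch: read raw JSON, clean it, and write a partitioned Delta table.
# `spark` is the session provided by the Databricks runtime.
raw = spark.read.json("/mnt/raw/events/")          # hypothetical landing path

cleaned = (
    raw.dropDuplicates(["event_id"])
    .withColumn("event_date", F.to_date("event_ts"))
    .filter(F.col("event_type").isNotNull())
)

(
    cleaned.write.format("delta")
    .mode("overwrite")
    .partitionBy("event_date")
    .save("/mnt/curated/events/")                  # hypothetical curated path
)
```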

Posted 1 month ago

Apply

4.0 - 5.0 years

6 - 7 Lacs

Bengaluru

Work from Office

Develop and manage data pipelines using Snowflake. Optimize performance and data warehousing strategies.

Posted 1 month ago

Apply

6.0 - 10.0 years

14 - 19 Lacs

Coimbatore

Work from Office

We are seeking a Senior Data & AI/ML Engineer with deep expertise in GCP, who will not only build intelligent and scalable data solutions but also champion our internal capability building and partner-level excellence. This is a high-impact role for a seasoned engineer who thrives in designing GCP-native AI/ML-enabled data platforms. You'll play a dual role as a hands-on technical lead and a strategic enabler, helping drive our Google Cloud Data & AI/ML specialization track forward through successful implementations, reusable assets, and internal skill development.
Preferred Qualifications:
GCP Professional Certifications: Data Engineer or Machine Learning Engineer.
Experience contributing to a GCP Partner specialization journey.
Familiarity with Looker, Data Catalog, Dataform, or other GCP data ecosystem tools.
Knowledge of data privacy, model explainability, and AI governance is a plus.
Work Location: Remote.
Key Responsibilities:
Data & AI/ML Architecture:
Design and implement data architectures for real-time and batch pipelines, leveraging GCP services such as BigQuery, Dataflow, Dataproc, Pub/Sub, Vertex AI, and Cloud Storage.
Lead the development of ML pipelines, from feature engineering to model training and deployment using Vertex AI, AI Platform, and Kubeflow Pipelines.
Collaborate with data scientists to operationalize ML models and support MLOps practices using Cloud Functions, CI/CD, and Model Registry.
Define and implement data governance, lineage, monitoring, and quality frameworks.
Google Cloud Partner Enablement:
Build and document GCP-native solutions and architectures that can be used for case studies and specialization submissions.
Lead client-facing PoCs or MVPs to showcase AI/ML capabilities using GCP.
Contribute to building repeatable solution accelerators in Data & AI/ML.
Work with the leadership team to align with Google Cloud Partner Program metrics.
Team Development:
Mentor engineers and data scientists toward achieving GCP certifications, especially in Data Engineering and Machine Learning.
Organize and lead internal GCP AI/ML enablement sessions.
Represent the company in Google partner ecosystem events, tech talks, and joint GTM engagements.
What We Offer:
Best-in-class packages.
Paid holidays and flexible time-off policies.
Casual dress code and a flexible working environment.
Opportunities for professional development in an engaging, fast-paced environment.
Medical insurance covering self and family up to 4 lakhs per person.
Diverse and multicultural work environment.

Posted 1 month ago

Apply

6.0 - 11.0 years

10 - 14 Lacs

Hyderabad, Pune, Chennai

Work from Office

Job type: Contract to hire.
10+ years of software development experience building large-scale distributed data processing systems/applications, data engineering, or large-scale internet systems.
At least 4 years of experience developing/leading Big Data solutions at enterprise scale, with at least one end-to-end implementation.
Strong experience in programming languages: Java/J2EE/Scala.
Good experience in Spark/Hadoop/HDFS architecture, YARN, Confluent Kafka, HBase, Hive, Impala, and NoSQL databases.
Experience with batch processing and AutoSys job scheduling and monitoring.
Performance analysis, troubleshooting, and resolution (this includes familiarity with and investigation of Cloudera/Hadoop logs).
Work with Cloudera on open issues that would result in cluster configuration changes, and implement them as needed.
Strong experience with databases such as SQL, Hive, Elasticsearch, HBase, etc.
Knowledge of Hadoop security, data management and governance.
Primary Skills: Java/Scala, ETL, Spark, Hadoop, Hive, Impala, Sqoop, HBase, Confluent Kafka, Oracle, Linux, Git, Jenkins CI/CD.
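For illustration only, a minimal PySpark Structured Streaming sketch of the Kafka-to-HDFS ingestion pattern implied by the stack above; it assumes the Spark Kafka connector package is on the classpath, and the broker addresses, topic, and paths are hypothetical:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Minimal sketch: stream records from a Kafka topic and land them in HDFS as Parquet.
spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092")  # hypothetical brokers
    .option("subscribe", "trade-events")                             # hypothetical topic
    .option("startingOffsets", "latest")
    .load()
    .select(F.col("key").cast("string"), F.col("value").cast("string"), "timestamp")
)

query = (
    events.writeStream.format("parquet")
    .option("path", "hdfs:///data/landing/trade_events")
    .option("checkpointLocation", "hdfs:///checkpoints/trade_events")
    .start()
)

query.awaitTermination()
```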

Posted 1 month ago

Apply

6.0 - 11.0 years

9 - 13 Lacs

Hyderabad

Work from Office

GCP Data Engineer: BigQuery, SQL, Python, Talend ETL programming on GCP or any cloud technology.
Experienced GCP data engineer with BigQuery, SQL, Python, and Talend ETL programming on GCP or any cloud technology.
Good experience building pipelines of GCP components to load data into BigQuery and into Cloud Storage buckets.
Excellent data analysis skills.
Good written and oral communication skills.
Self-motivated and able to work independently.

Posted 1 month ago

Apply

4.0 - 9.0 years

4 - 7 Lacs

Bengaluru

Work from Office

Immediate job opening for Python + SQL (C2H, Pan India).
Skill: Python + SQL.
Job description: Strong programming skills in Python and advanced SQL. Strong experience in NumPy, Pandas and DataFrames. Strong analytical and problem-solving skills. Excellent communication and collaboration abilities.
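A minimal, illustrative pandas/NumPy sketch of the skills listed above; the data is made up:

```python
import numpy as np
import pandas as pd

# Minimal sketch: basic cleaning and aggregation with pandas and NumPy.
df = pd.DataFrame({
    "region": ["north", "south", "north", "west"],
    "sales": [120.0, 95.5, np.nan, 210.0],
})

# Fill missing values, then summarize sales per region
df["sales"] = df["sales"].fillna(df["sales"].median())
summary = df.groupby("region")["sales"].agg(["sum", "mean"]).reset_index()
print(summary)
```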

Posted 1 month ago

Apply

6.0 - 11.0 years

5 - 8 Lacs

Bengaluru

Work from Office

Experience with cloud platforms, e.g., AWS, GCP, Azure, etc.
Experience with distributed technology tools, viz. SQL, Spark, Python, PySpark, Scala.
Performance tuning: optimize SQL and PySpark for performance.
Airflow workflow scheduling tool for creating data pipelines.
GitHub source control tool and experience with creating/configuring Jenkins pipelines.
Experience in EMR/EC2, Databricks, etc.
DWH tools incl. SQL database, Presto, and Snowflake.
Streaming, serverless architecture.
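To make the performance-tuning point concrete, here is a minimal, illustrative PySpark sketch of two common techniques (broadcasting a small lookup table and right-sizing shuffle partitions); the paths and column names are hypothetical:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

# Minimal tuning sketch: broadcast join plus an explicit shuffle-partition setting.
spark = (
    SparkSession.builder.appName("tuning-demo")
    .config("spark.sql.shuffle.partitions", "200")   # size shuffles for the data volume
    .getOrCreate()
)

facts = spark.read.parquet("s3://bucket/facts/")     # large fact table (hypothetical)
dims = spark.read.parquet("s3://bucket/dims/")       # small lookup table (hypothetical)

# Broadcast the small side to avoid a shuffle-heavy sort-merge join
joined = facts.join(broadcast(dims), on="dim_id", how="left")

joined.write.mode("overwrite").parquet("s3://bucket/output/")
```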

Posted 1 month ago

Apply

5.0 - 10.0 years

3 - 7 Lacs

Hyderabad

Work from Office

5+ years of experience developing Snowflake data models, data ingestion, views, stored procedures and complex queries.
Good experience in SQL.
Experience with Informatica PowerCenter / IICS ETL tools.
Test and clearly document implementations so others can easily understand the requirements, implementation, and test conditions.
Provide production support for data warehouse issues such as data load problems and transformation/translation problems.
Ability to facilitate and coordinate discussions and to manage the expectations of multiple stakeholders.
Candidates must have good communication and facilitation skills.
Work in an onsite-offshore model involving daily interactions with onshore teams to ensure on-time, quality deliverables.

Posted 1 month ago

Apply

7.0 - 12.0 years

6 - 10 Lacs

Noida, Bengaluru

Work from Office

About the Role: Grade Level (for internal use): 10
Responsibilities:
To work closely with various stakeholders to collect, clean, model and visualise datasets.
To create data-driven insights by researching, designing and implementing ML models to deliver insights and implement action-oriented solutions to complex business problems.
To drive ground-breaking ML technology within the Modelling and Data Science team.
To extract hidden value insights and enrich the accuracy of the datasets.
To leverage technology and automate workflows, creating modernized operational processes aligned with the team strategy.
To understand, implement, manage, and maintain analytical solutions and techniques independently.
To collaborate and coordinate with data, content and modelling teams and provide analytical assistance for various commodity datasets.
To drive and maintain high-quality processes and deliver projects in collaborative Agile team environments.
Requirements:
7+ years of programming experience, particularly in Python.
4+ years of experience working with SQL or NoSQL databases.
1+ years of experience working with PySpark.
University degree in Computer Science, Engineering, Mathematics, or related disciplines.
Strong understanding of big data technologies such as Hadoop, Spark, or Kafka.
Demonstrated ability to design and implement end-to-end scalable and performant data pipelines.
Experience with workflow management platforms like Airflow.
Strong analytical and problem-solving skills.
Ability to collaborate and communicate effectively with both technical and non-technical stakeholders.
Experience building solutions and working in an Agile environment.
Experience working with git or other source control tools.
Strong understanding of Object-Oriented Programming (OOP) principles and design patterns.
Knowledge of clean code practices and the ability to write well-documented, modular, and reusable code.
Strong focus on performance optimization and writing efficient, scalable code.
Nice to have:
Experience working with oil, gas and energy markets.
Experience working with BI visualization applications (e.g. Tableau, Power BI).
Understanding of cloud-based services, preferably AWS.
Experience working with unified analytics platforms like Databricks.
Experience with deep learning and related toolkits: TensorFlow, PyTorch, Keras, etc.
About S&P Global Commodity Insights
At S&P Global Commodity Insights, our complete view of global energy and commodities markets enables our customers to make decisions with conviction and create long-term, sustainable value. We're a trusted connector that brings together thought leaders, market participants, governments, and regulators to co-create solutions that lead to progress. Vital to navigating the Energy Transition, S&P Global Commodity Insights' coverage includes oil and gas, power, chemicals, metals, agriculture and shipping.
S&P Global Commodity Insights is a division of S&P Global (NYSE: SPGI). S&P Global is the world's foremost provider of credit ratings, benchmarks, analytics and workflow solutions in the global capital, commodity and automotive markets. With every one of our offerings, we help many of the world's leading organizations navigate the economic landscape so they can plan for tomorrow, today. For more information, visit http://www.spglobal.com/commodity-insights .
What's In It For You
Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion.
Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress.
Our People:
Our Values: Integrity, Discovery, Partnership
At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.
Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you, and your career, need to thrive at S&P Global.
Health & Wellness: Health care coverage designed for the mind and body.
Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country visit https://spgbenefits.com/benefit-summaries
Global Hiring and Opportunity at S&P Global:
At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.
Recruitment Fraud Alert
If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, pre-employment training or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.
-----------------------------------------------------------
Equal Opportunity Employer
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment.
If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.
US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf
-----------------------------------------------------------
IFTECH202.1 - Middle Professional Tier I (EEO Job Group)
Location - Bengaluru, Noida, Uttar Pradesh, Hyderabad

Posted 1 month ago

Apply

5.0 - 10.0 years

4 - 8 Lacs

Bengaluru

Work from Office

Big Data (Hadoop and Spark) skills. Programming languages: Python, Scala.
Job requirement: This position is for a mid-level data engineer with development experience who will focus on creating new capabilities in the Risk space while maturing our code base and development processes.
Qualifications:
3 or more years of work experience with a bachelor's degree, or more than 2 years of work experience with an advanced degree (e.g. Master's, MBA, JD, MD).
Experience in creating data-driven business solutions and solving data problems using a wide variety of technologies such as Hadoop, Hive, Spark, MongoDB and NoSQL, as well as traditional data technologies like RDBMS; MySQL a plus.
Ability to program in one or more scripting languages such as Perl or Python and one or more programming languages such as Java or Scala.
Experience with data visualization and business intelligence tools like Tableau is a plus.
Experience with or knowledge of Continuous Integration & Deployment and automation tools such as Jenkins, Artifactory, Git, etc.
Experience with or knowledge of Agile and Test-Driven Development methodology.
Strong analytical skills with excellent problem-solving ability.

Posted 1 month ago

Apply

5.0 - 10.0 years

5 - 9 Lacs

Hyderabad

Work from Office

4+ years of hands-on experience with Azure Cloud, ADLS, ADF and Databricks.
Finance domain data stewardship; finance data reconciliation with SAP downstream systems.
Run/monitor pipelines and validate the Databricks notebooks.
Able to interface with onsite/business stakeholders.
Hands-on Python and SQL.
Knowledge of Snowflake/DW is desirable.
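As an illustrative sketch only, a minimal Databricks/PySpark reconciliation check of the kind described above; it assumes a Databricks runtime providing the `spark` session, and the table names, storage path, and columns are hypothetical:

```python
from pyspark.sql import functions as F

# Minimal sketch: compare totals between a curated table and a SAP downstream extract.
curated = spark.table("finance.gl_balances")                      # hypothetical curated table
sap_extract = spark.read.parquet(
    "abfss://landing@account.dfs.core.windows.net/sap/gl_balances/"  # hypothetical ADLS path
)

curated_totals = curated.groupBy("company_code").agg(F.sum("amount").alias("curated_amount"))
sap_totals = sap_extract.groupBy("company_code").agg(F.sum("amount").alias("sap_amount"))

# Keep only company codes where the two sources disagree beyond a small tolerance
breaks = (
    curated_totals.join(sap_totals, "company_code", "full_outer")
    .withColumn("diff", F.coalesce("curated_amount", F.lit(0)) - F.coalesce("sap_amount", F.lit(0)))
    .filter(F.abs(F.col("diff")) > 0.01)
)

breaks.show(truncate=False)
```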

Posted 1 month ago

Apply