
6241 Scala Jobs - Page 46

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

The ideal candidate for this position should have at least 5 years of experience in designing, developing, and maintaining scalable data pipelines using Spark, specifically PySpark or Spark with Scala. You will build data ingestion and transformation frameworks for structured and unstructured data sources, and collaborate with data analysts, data scientists, and business stakeholders to understand requirements and deliver reliable data solutions. Working with large volumes of data, you will ensure quality, integrity, and consistency while optimizing data workflows for performance, scalability, and cost efficiency on cloud platforms such as AWS, Azure, or GCP. You will also implement data quality checks and automation for ETL/ELT pipelines, monitor and troubleshoot data issues in production, perform root cause analysis, and document technical processes, system designs, and operational procedures.

Required Skills:
- At least 3 years of experience as a Data Engineer or in a similar role.
- Hands-on experience with PySpark or Spark using Scala.
- Strong knowledge of SQL for data querying and transformation.
- Experience working with any cloud platform (AWS, Azure, or GCP).
- Solid understanding of data warehousing concepts and big data architecture.
- Experience with version control systems like Git.

Desirable Skills:
- Knowledge of Delta Lake, HDFS, or Kafka.
- Familiarity with containerization tools like Docker or Kubernetes.
- Exposure to CI/CD practices and DevOps principles.
- Understanding of data governance, security, and compliance standards.

Candidates ready to join immediately can share their details via email to nitin.patil@ust.com. Act fast for immediate attention!
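For context, since this role centers on Spark pipelines with data quality gates, here is a minimal sketch of what such a job could look like in Spark with Scala. The S3 paths, column names, and the 5% rejection threshold are illustrative assumptions, not details from the posting.

```scala
import org.apache.spark.sql.{SparkSession, functions => F}

object EventsPipeline {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("events-ingestion")
      .getOrCreate()

    // Ingest a structured source (hypothetical S3 path and schema)
    val raw = spark.read
      .option("header", "true")
      .csv("s3a://example-bucket/raw/events/")

    // Transformation: normalize types and drop records failing a basic quality check
    val cleaned = raw
      .withColumn("event_ts", F.to_timestamp(F.col("event_ts")))
      .withColumn("event_date", F.to_date(F.col("event_ts")))
      .filter(F.col("user_id").isNotNull && F.col("event_ts").isNotNull)

    // Simple data quality gate: fail the job if too many rows were rejected
    val total = raw.count()
    val kept  = cleaned.count()
    require(total == 0 || kept.toDouble / total > 0.95,
      "More than 5% of rows failed validation")

    // Write partitioned Parquet for downstream consumers
    cleaned.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3a://example-bucket/curated/events/")

    spark.stop()
  }
}
```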

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Greater Kolkata Area

Remote

Job Title: ML Ops Engineer. Experience: 5 Years. Location: Remote (India).

Job Overview
We are seeking a highly skilled MLOps Engineer with over 5 years of experience in Software Engineering and Machine Learning Operations. The ideal candidate will have hands-on experience with AWS (particularly SageMaker), MLflow, and other MLOps tools, and a strong understanding of building scalable, secure, and production-ready ML systems.

Key Responsibilities
- Design, implement, and maintain scalable MLOps pipelines and infrastructure.
- Work with cross-functional teams to support the end-to-end ML lifecycle, including model development, deployment, monitoring, and governance.
- Leverage AWS services, particularly SageMaker, to manage model training and deployment.
- Apply best practices for CI/CD, model versioning, reproducibility, and operational monitoring.
- Participate in MLOps research and help drive innovation across the team.
- Contribute to the design of secure, reliable, and scalable ML solutions in a production environment.

Required Skills
- 5+ years of experience in Software Engineering and MLOps.
- Strong experience with AWS, especially SageMaker.
- Experience with MLflow or similar tools for model tracking and lifecycle management.
- Familiarity with AWS Data Zone is a strong plus.
- Proficiency in Python; experience with R, Scala, or Apache Spark is a plus.
- Solid understanding of software engineering principles, version control, and testing practices.
- Experience deploying and maintaining models in production environments.

Preferred Attributes
- Strong analytical thinking and problem-solving skills.
- A proactive mindset with the ability to contribute to MLOps research and process improvements.
- Self-motivated and able to work effectively in a remote setting.

(ref:hirist.tech)

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Description
Building off our Cloud momentum, Oracle has formed a new organization: Oracle Health & Analytics. As part of this organization, the candidate will focus on product development and product strategy for Oracle Health while building out a complete platform supporting modernized, automated healthcare. The organization is constructed with an entrepreneurial spirit that promotes an energetic and creative environment. We are unencumbered and will need your contribution to make it a world-class engineering center with a focus on excellence.

We are seeking a Software Developer responsible for development, debugging to identify root causes, and support and release activities for our service platforms. You will be part of a Dev/Ops team within the Operations Engineering & Analytics (OEA) group. Our enterprise services span on-premise and cloud environments with a heavy focus on data collection, processing, and analytics, leveraging various cloud services and languages such as Python and SQL. As you build your knowledge in this role, you will have opportunities to share outside your immediate team through mentorship, coaching, technical talks, and blogs.

Minimum Qualifications:
- Bachelor's degree in Computer Science (CS), Computer Engineering, CIS, MIS, IS, Software Engineering, or a related area, or equivalent work experience
- 5+ years of software development experience
- 3+ years of software development experience with Java, Python, or Scala
- 3+ years of hands-on experience with data ingestion and data orchestration on cloud data platforms such as Oracle Cloud Infrastructure or Microsoft Azure
- 3+ years of experience with big data technologies (e.g., Spark, Hadoop, MapReduce, Kafka)
- 3+ years of experience with CI/CD pipelines (Jenkins, Git, etc.)

Preferred Qualifications:
- Experience with the Agile development methodology
- OCI/MSCP certification; Oracle RDBMS, Oracle SQL, Oracle Cloud services, etc.
- Experience with SaaS solutions
- Experience with big data technologies
- Interest in, and a passion for, data analysis, data management, and AI/ML
- Excellent written and verbal communication skills
- Ability and willingness to: work directly with key business users and stakeholders to gather requirements for integrations; communicate effectively with diverse people at various levels within the organization; and evaluate, communicate, and coordinate the technical impacts of application configuration decisions

Career Level: IC3

Responsibilities
Architecture and development tasks such as prototyping new services and workloads within the Microsoft Azure cloud computing environment and the Oracle Cloud Infrastructure platform to meet business data analysis needs for OEA's internal business stakeholders and consumers. The Software Developer IC3 must have SQL, Java, Python, and/or C# programming experience; experience in big data technologies (OCI, Hadoop, HDFS, HBase, MapReduce, Java, Scala, Apache Kafka, and Apache Spark) is preferred. The ideal candidate is responsible for:
- Analyzing data requirements and data sources to identify ingestion and integration methods for consolidation and cataloging of data into the CMS.
- Identifying process improvements at the team level to support the operations and development ecosystem.
- Collaborating on component-level technical designs to help derive additional value from our solutions, benefiting the business stakeholders who rely on the data for daily business operations.
- Creating clear, well-constructed designs and documentation for medium- to moderate-scope stories.
- Writing code and supporting the services we manage. May also participate in architecture discussions, design sessions, and code reviews with other associates to help ensure maintainability, application security, and performance.
- Coaching and mentoring junior members technically as well as functionally; acting as a primary contributor in code reviews and providing feedback on solution-level performance improvements.

As a member of the team, you will take an active role in the definition and evolution of standard practices and procedures. You will be responsible for defining and developing software for tasks associated with developing, designing, and debugging software applications or operating systems. Ability to work overtime and participate in an on-call rotation if/when required.

Qualifications
Career Level - IC3

About Us
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Job Description
Building off our Cloud momentum, Oracle has formed a new organization: Oracle Health & Analytics. As part of this organization, the candidate will focus on product development and product strategy for Oracle Health while building out a complete platform supporting modernized, automated healthcare. The organization is constructed with an entrepreneurial spirit that promotes an energetic and creative environment. We are unencumbered and will need your contribution to make it a world-class engineering center with a focus on excellence.

We are seeking a Software Developer responsible for development, debugging to identify root causes, and support and release activities for our service platforms. You will be part of a Dev/Ops team within the Operations Engineering & Analytics (OEA) group. Our enterprise services span on-premise and cloud environments with a heavy focus on data collection, processing, and analytics, leveraging various cloud services and languages such as Python and SQL. As you build your knowledge in this role, you will have opportunities to share outside your immediate team through mentorship, coaching, technical talks, and blogs.

Minimum Qualifications:
- Bachelor's degree in Computer Science (CS), Computer Engineering, CIS, MIS, IS, Software Engineering, or a related area, or equivalent work experience
- 5+ years of software development experience
- 3+ years of software development experience with Java, Python, or Scala
- 3+ years of hands-on experience with data ingestion and data orchestration on cloud data platforms such as Oracle Cloud Infrastructure or Microsoft Azure
- 3+ years of experience with big data technologies (e.g., Spark, Hadoop, MapReduce, Kafka)
- 3+ years of experience with CI/CD pipelines (Jenkins, Git, etc.)

Preferred Qualifications:
- Experience with the Agile development methodology
- OCI/MSCP certification; Oracle RDBMS, Oracle SQL, Oracle Cloud services, etc.
- Experience with SaaS solutions
- Experience with big data technologies
- Interest in, and a passion for, data analysis, data management, and AI/ML
- Excellent written and verbal communication skills
- Ability and willingness to: work directly with key business users and stakeholders to gather requirements for integrations; communicate effectively with diverse people at various levels within the organization; and evaluate, communicate, and coordinate the technical impacts of application configuration decisions

Career Level: IC3

Responsibilities
Architecture and development tasks such as prototyping new services and workloads within the Microsoft Azure cloud computing environment and the Oracle Cloud Infrastructure platform to meet business data analysis needs for OEA's internal business stakeholders and consumers. The Software Developer IC3 must have SQL, Java, Python, and/or C# programming experience; experience in big data technologies (OCI, Hadoop, HDFS, HBase, MapReduce, Java, Scala, Apache Kafka, and Apache Spark) is preferred. The ideal candidate is responsible for:
- Analyzing data requirements and data sources to identify ingestion and integration methods for consolidation and cataloging of data into the CMS.
- Identifying process improvements at the team level to support the operations and development ecosystem.
- Collaborating on component-level technical designs to help derive additional value from our solutions, benefiting the business stakeholders who rely on the data for daily business operations.
- Creating clear, well-constructed designs and documentation for medium- to moderate-scope stories.
- Writing code and supporting the services we manage. May also participate in architecture discussions, design sessions, and code reviews with other associates to help ensure maintainability, application security, and performance.
- Coaching and mentoring junior members technically as well as functionally; acting as a primary contributor in code reviews and providing feedback on solution-level performance improvements.

As a member of the team, you will take an active role in the definition and evolution of standard practices and procedures. You will be responsible for defining and developing software for tasks associated with developing, designing, and debugging software applications or operating systems. Ability to work overtime and participate in an on-call rotation if/when required.

Qualifications
Career Level - IC3

About Us
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.

Posted 3 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Kolkata, West Bengal

On-site

Candidates who are ready to join immediately can share their details via email for quick processing to nitin.patil@ust.com. Act fast for immediate attention!

With over 5 years of experience, the ideal candidate will be responsible for designing, developing, and maintaining scalable data pipelines using Spark, either PySpark or Spark with Scala. They will also build data ingestion and transformation frameworks for structured and unstructured data sources. Collaboration with data analysts, data scientists, and business stakeholders to understand requirements and deliver reliable data solutions is a key aspect of the role. The candidate will work with large volumes of data to ensure quality, integrity, and consistency, optimizing data workflows for performance, scalability, and cost efficiency on cloud platforms such as AWS, Azure, or GCP. Implementing data quality checks and automation for ETL/ELT pipelines, as well as monitoring and troubleshooting data issues in production, are also part of the responsibilities. Documentation of technical processes, system designs, and operational procedures will be essential.

Must-Have Skills:
- At least 3 years of experience as a Data Engineer or in a similar role.
- Hands-on experience with PySpark or Spark using Scala.
- Strong knowledge of SQL for data querying and transformation.
- Experience working with any cloud platform (AWS, Azure, or GCP).
- Solid understanding of data warehousing concepts and big data architecture.
- Experience with version control systems like Git.

Good-to-Have Skills:
- Experience with data orchestration tools like Apache Airflow, Databricks Workflows, or similar.
- Knowledge of Delta Lake, HDFS, or Kafka.
- Familiarity with containerization tools (Docker/Kubernetes).
- Exposure to CI/CD practices and DevOps principles.
- Understanding of data governance, security, and compliance standards.

Posted 3 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

You will be responsible for building the most personalized and intelligent news experiences for India's next 750 million digital users. As our Principal Data Engineer, your main tasks will include designing and maintaining the data infrastructure that powers personalization systems and analytics platforms. This involves ensuring seamless data flow from source to consumption, architecting scalable data pipelines to process massive volumes of user interaction and content data, and developing robust ETL processes for large-scale transformations and analytical processing. You will also create and maintain data lakes and warehouses that consolidate data from multiple sources, optimized for ML model consumption and business intelligence. Additionally, you will implement data governance practices and collaborate with the ML team to ensure the right data availability for recommendation systems.

To excel in this role, you should have a Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field, along with 8-12 years of data engineering experience, including at least 3 years in a senior role. You must possess expert-level SQL skills and have strong experience in the Apache Spark ecosystem (Spark SQL, Streaming, SparkML), as well as proficiency in Python/Scala. Experience with the AWS data ecosystem (Redshift, S3, Glue, EMR, Kinesis, Lambda, Athena) and ETL frameworks (Glue, Airflow) is essential. A proven track record of building large-scale data pipelines in production environments, particularly in high-traffic digital media, will be advantageous. Excellent communication skills are also required, as you will need to collaborate effectively across teams in a fast-paced environment that demands engineering agility.

Posted 3 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Java/Scala Developer at our company in Hyderabad, you will play a crucial role in developing and maintaining software applications. Your responsibilities will include implementing functional programming methodologies, engaging in test-driven development, and building microservices. Collaboration with cross-functional teams will be essential to ensure the delivery of high-quality software solutions.

To excel in this role, you should possess advanced skills in one or more programming languages such as Java and Scala, along with database proficiency. Prior hands-on experience as a Scala/Spark developer is required; you should be able to self-rate your Scala expertise at a minimum of 8 out of 10 and demonstrate proficiency in Scala and Apache Spark development. Additionally, proficiency in automation and continuous delivery methods, as well as a deep understanding of the Software Development Life Cycle, are key requirements for this position. An advanced comprehension of agile practices and concerns such as CI/CD, application resiliency, and security will be beneficial.

We are looking for a developer with demonstrated expertise in software applications and technical processes within a specific technical discipline, such as cloud computing, artificial intelligence, machine learning, or mobile development. If you are passionate about software development and possess the necessary skills and experience, we encourage you to apply for this exciting opportunity.
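As an aside, "functional programming methodologies" in Scala roles like this one usually means immutable data, algebraic data types, and pure functions. Here is a minimal sketch of that style; the payment domain model and all names are hypothetical, chosen only to illustrate the idiom.

```scala
// Immutable domain model as an algebraic data type (sealed trait + case classes)
sealed trait PaymentStatus
case object Pending extends PaymentStatus
case object Settled extends PaymentStatus
final case class Failed(reason: String) extends PaymentStatus

final case class Payment(id: String, amountCents: Long, status: PaymentStatus)

object Payments {
  // A pure function: no mutation, the result depends only on the input
  def describe(p: Payment): String = p.status match {
    case Pending        => s"Payment ${p.id} is awaiting settlement"
    case Settled        => s"Payment ${p.id} settled for ${p.amountCents} cents"
    case Failed(reason) => s"Payment ${p.id} failed: $reason"
  }

  // Aggregation over immutable collections instead of loops and mutable state
  def settledTotal(ps: List[Payment]): Long =
    ps.collect { case Payment(_, amt, Settled) => amt }.sum
}
```

Pure functions like these are also what make test-driven development straightforward: each can be exercised with plain equality assertions and no mocking.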

Posted 3 weeks ago

Apply

6.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Skills: Microservices, Spring Boot, Multithreading, Core Java, SQL, Messaging Queues, Kafka, Coding

Position 1: Java Developer
Experience: 6+ Years
Locations: Noida, Gurgaon, Bangalore
Notice Period: Immediate to 15 Days

Requirements:
- 6+ years of hands-on experience in Java.
- Experience in building order and execution management or trading systems is required, along with financial experience and exposure to trading.
- An in-depth understanding of concurrent programming and experience in designing high-throughput, high-availability, fault-tolerant distributed applications is required.
- Experience in building distributed applications using NoSQL technologies like Cassandra, coordination services like ZooKeeper, and caching technologies like Apache Ignite and Redis is strongly preferred.
- Experience in building microservices architecture / SOA is required.
- Experience in message-oriented streaming middleware architecture is preferred (Kafka, MQ, NATS, AMPS).
- Experience with orchestration, containerization, and building cloud-native applications (AWS, Azure) is a plus.
- Experience with modern web technology such as Angular, React, and TypeScript is a plus.
- Strong analytical and software architecture design skills with an emphasis on test-driven development.
- Experience in programming languages such as Scala or Python would be a plus.
- Experience in using project management methodologies such as Agile/Scrum.
- Effective communication and presentation skills (written and verbal) are required.
- Bachelor's or Master's degree in computer science or engineering.

Kindly share the skill matrix below for the Java Developer role along with the profile:
Skill Matrix: Core Java (Mandatory), Multithreading, Spring, Messaging Queue, Microservices, SQL (Mandatory), Coding
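Since the role leans heavily on Kafka-based messaging, here is a minimal sketch of a durable Kafka publish written in Scala against the standard Kafka Java client. The broker address, topic name, and payload are assumptions for illustration only.

```scala
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

object OrderEventPublisher {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put("bootstrap.servers", "localhost:9092") // assumption: local broker
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    props.put("acks", "all") // wait for full replication, favouring durability over latency

    val producer = new KafkaProducer[String, String](props)
    try {
      // Hypothetical topic and payload for an order-management event
      val record = new ProducerRecord[String, String]("orders", "order-123", """{"status":"NEW"}""")
      producer.send(record).get() // block for the ack; real systems would use the async callback
    } finally {
      producer.close()
    }
  }
}
```

Keying records by order ID, as above, keeps all events for one order on the same partition, which preserves ordering in an order/execution-management flow.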

Posted 3 weeks ago

Apply

6.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Skills: Microservices, Spring Boot, Multithreading, Core Java, SQL, Messaging Queues, Kafka, Coding

Position 1: Java Developer
Experience: 6+ Years
Locations: Noida, Gurgaon, Bangalore
Notice Period: Immediate to 15 Days

Requirements:
- 6+ years of hands-on experience in Java.
- Experience in building order and execution management or trading systems is required, along with financial experience and exposure to trading.
- An in-depth understanding of concurrent programming and experience in designing high-throughput, high-availability, fault-tolerant distributed applications is required.
- Experience in building distributed applications using NoSQL technologies like Cassandra, coordination services like ZooKeeper, and caching technologies like Apache Ignite and Redis is strongly preferred.
- Experience in building microservices architecture / SOA is required.
- Experience in message-oriented streaming middleware architecture is preferred (Kafka, MQ, NATS, AMPS).
- Experience with orchestration, containerization, and building cloud-native applications (AWS, Azure) is a plus.
- Experience with modern web technology such as Angular, React, and TypeScript is a plus.
- Strong analytical and software architecture design skills with an emphasis on test-driven development.
- Experience in programming languages such as Scala or Python would be a plus.
- Experience in using project management methodologies such as Agile/Scrum.
- Effective communication and presentation skills (written and verbal) are required.
- Bachelor's or Master's degree in computer science or engineering.

Kindly share the skill matrix below for the Java Developer role along with the profile:
Skill Matrix: Core Java (Mandatory), Multithreading, Spring, Messaging Queue, Microservices, SQL (Mandatory), Coding

Posted 3 weeks ago

Apply

7.0 - 11.0 years

0 Lacs

Thiruvananthapuram, Kerala

On-site

As a skilled Data Engineer with 7-10 years of experience, you will be a valuable addition to our dynamic team in India. Your primary focus will be designing and optimizing data pipelines that efficiently handle large datasets and extract valuable business insights.

Your responsibilities will include designing, building, and maintaining scalable data pipelines and architecture. You will be expected to develop and enhance ETL processes for data ingestion and transformation, collaborating closely with data scientists and analysts to meet data requirements and deliver effective solutions. Monitoring data integrity through data quality checks and ensuring compliance with data governance and security policies will also be part of your role. Leveraging cloud-based data technologies and services for storage and processing will be crucial to your success in this position.

To excel in this role, you should hold a Bachelor's or Master's degree in Computer Science, Engineering, or a related field. Proficiency in SQL and practical experience with databases such as MySQL, PostgreSQL, or Oracle is essential. Your expertise in programming languages like Python, Java, or Scala will be highly valuable, along with hands-on experience in big data technologies like Hadoop, Spark, or Kafka. Familiarity with cloud platforms such as AWS, Azure, or Google Cloud is preferred. An understanding of data warehousing concepts and tools such as Redshift and Snowflake, coupled with experience in data modeling and architecture design, will further strengthen your candidacy.

Posted 3 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Thiruvananthapuram, Kerala

On-site

Candidates ready to join immediately can share their details via email for quick processing at nitin.patil@ust.com. Act fast for immediate attention!

With over 5 years of experience, you will be responsible for designing, developing, and maintaining scalable data pipelines using Spark (PySpark or Spark with Scala). You will build data ingestion and transformation frameworks for structured and unstructured data sources. Collaboration with data analysts, data scientists, and business stakeholders to understand requirements and deliver reliable data solutions will be a key aspect of the role. Working with large volumes of data, ensuring quality, integrity, and consistency, and optimizing data workflows for performance, scalability, and cost efficiency on cloud platforms (AWS, Azure, or GCP) are essential responsibilities. Additionally, implementing data quality checks and automation for ETL/ELT pipelines, monitoring and troubleshooting data issues in production, and performing root cause analysis will be part of your duties. You will also be expected to document technical processes, system designs, and operational procedures.

Must-Have Skills:
- Minimum 3 years of experience as a Data Engineer or in a similar role.
- Hands-on experience with PySpark or Spark using Scala.
- Strong knowledge of SQL for data querying and transformation.
- Experience working with any cloud platform (AWS, Azure, or GCP).
- Solid understanding of data warehousing concepts and big data architecture.
- Familiarity with version control systems like Git.

Good-to-Have Skills:
- Experience with data orchestration tools such as Apache Airflow, Databricks Workflows, or similar.
- Knowledge of Delta Lake, HDFS, or Kafka.
- Familiarity with containerization tools like Docker/Kubernetes.
- Exposure to CI/CD practices and DevOps principles.
- Understanding of data governance, security, and compliance standards.

Posted 3 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

The ideal candidate for this position should have at least 5 years of experience and must be ready to join immediately. In this role, you will be responsible for designing, developing, and maintaining scalable data pipelines using Spark, specifically PySpark or Spark with Scala. You will also build data ingestion and transformation frameworks for structured and unstructured data sources. Collaboration with data analysts, data scientists, and business stakeholders to understand requirements and deliver reliable data solutions is a key aspect of this role. Working with large volumes of data to ensure quality, integrity, and consistency is crucial, as is optimizing data workflows for performance, scalability, and cost efficiency on cloud platforms such as AWS, Azure, or GCP. Implementing data quality checks and automation for ETL/ELT pipelines, monitoring and troubleshooting data issues in production, and performing root cause analysis are also essential tasks, along with documentation of technical processes, system designs, and operational procedures.

Must-Have Skills:
- At least 3 years of experience as a Data Engineer or in a similar role.
- Hands-on experience with PySpark or Spark using Scala.
- Strong knowledge of SQL for data querying and transformation.
- Experience working with any cloud platform (AWS, Azure, or GCP).
- Solid understanding of data warehousing concepts and big data architecture.
- Experience with version control systems like Git.

Good-to-Have Skills:
- Experience with data orchestration tools like Apache Airflow, Databricks Workflows, or similar.
- Knowledge of Delta Lake, HDFS, or Kafka.
- Familiarity with containerization tools such as Docker/Kubernetes.
- Exposure to CI/CD practices and DevOps principles.
- Understanding of data governance, security, and compliance standards.

If you meet the qualifications and are interested in this exciting opportunity, please share your details via email at nitin.patil@ust.com for quick processing. Act fast for immediate attention!

Posted 3 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Faridabad, Haryana

On-site

The project is a dynamic solution empowering companies to optimize promotional activities for maximum impact. It collects and validates data, analyzes promotion effectiveness, plans calendars, and integrates seamlessly with existing systems. The tool enhances vendor collaboration, negotiates better deals, and employs machine learning to optimize promotional plans, enabling companies to make informed decisions and maximize return on investment.

Technology stack: Scala, Go, Docker, Kubernetes, Databricks; Python is optional. Working time zone: EU. Specialty: Data Science. The ideal candidate has more than 5 years of experience and Upper-Intermediate English.

Key soft skills:
- A problem-solving style is preferred over raw experience.
- Ability to clarify requirements with the customer.
- Willingness to pair with other engineers when solving complex issues.
- Good communication skills.

Essential hard skills:
- Experience in Scala and/or Go for designing and building scalable, high-performing applications.
- Containerization and microservices orchestration using Docker and Kubernetes.
- Building data pipelines and ETL solutions using Databricks.
- Data storage and retrieval with PostgreSQL and Elasticsearch.
- Deploying and maintaining solutions in the Azure cloud environment.
- Experience in Python is a nice-to-have.

Responsibilities include developing and maintaining distributed systems using Scala and/or Go, working with Docker and Kubernetes for containerization and microservices orchestration, building data pipelines and ETL solutions using Databricks, working with PostgreSQL and Elasticsearch for data storage and retrieval, and deploying and maintaining solutions in the Azure cloud environment.

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

03/18/2020

Carmatec is looking for passionate DevOps Engineers to be a part of our InstaCarma team. Not only will you have the chance to make your mark as an established DevOps Engineer, but you will also get to work and interact with seasoned professionals deeply committed to revolutionizing the Cloud scenario.

Job Responsibilities
- Work on infrastructure provisioning/configuration management tools. We use Packer, Terraform, and Chef.
- Develop automation tools/scripts. We use Bash/Python/Ruby.
- Take responsibility for continuous integration and artifact management. We use Jenkins and Artifactory.
- Set up automated deployment pipelines for microservices running as Docker containers.
- Set up monitoring, alerting, and metrics scraping for Java/Scala/Play applications using Prometheus and Graylog2, integrated with PagerDuty and Hipchat for alerting, reporting, and monitoring.
- Provide on-call production support and related incident management, reporting, and postmortems.
- Create runbooks and wikis for incidents, troubleshooting performed, etc.
- Be a proactive member of your team by sharing knowledge.
- Handle resource scheduling and orchestration using Mesos/Marathon.
- Work closely with development teams to ensure that platforms are designed with operability in mind.
- Function well in a fast-paced, rapidly changing environment.

Required Skills
- A basic understanding of DevOps tools and automation frameworks.
- Outstanding organization, documentation, and communication skills.
- Must be skilled in Linux system administration (Ubuntu/CentOS).
- Knowledge of AWS is a must (EC2, EBS, S3, Route53, CloudFront, SG, IAM, RDS, etc.).
- Strong foundation in Docker internals and troubleshooting.
- Should know at least one configuration management tool: Chef/Ansible/Puppet.
- Good to have experience in at least one scripting language: Bash/Python/Ruby.
- Experience in at least one NoSQL database system is a plus: Elasticsearch/MongoDB/Redis/Cassandra.
- Experience in a CI tool like Jenkins is preferred.
- Good understanding of how a 3-tier architecture works.
- Basic knowledge of revision control tools like Git/Subversion, etc.
- Should have experience working with monitoring tools like Nagios, New Relic, etc.
- Should be proficient in log management using tools like rsyslog, Logstash, etc.
- Working knowledge of the following: cron, HAProxy/nginx, LVM, MySQL, BIND (DNS), iptables.
- Experience in Atlassian tools (Jira, Hipchat, Confluence) will be a plus.

Experience: 5+ years
Location: Bangalore

If the above description is of interest, please revert to us with your updated resume at teamhr@carmatec.com.

Apply now

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

About Us
VE3 is at the forefront of delivering cloud‑native data solutions to premier clients across finance, retail and healthcare. As a rapidly growing UK‑based consultancy, we pride ourselves on fostering a collaborative, inclusive environment where every voice is heard and every idea can become tomorrow's breakthrough.

Role: Database Designer / Senior Data Engineer

What You'll Do
- Architect & Design: Lead the design of modern, scalable data platforms on AWS and/or Azure, using best practices for security, cost‑optimisation and performance. Develop detailed data models (conceptual, logical, physical) and document data dictionaries and lineage.
- Build & Optimize: Implement robust ETL/ELT pipelines using Python, SQL, or Scala (as appropriate), leveraging services such as AWS Glue, Azure Data Factory, and open‑source frameworks (Spark, Airflow). Tune data stores (RDS, SQL Data Warehouse, NoSQL like Redis) for throughput, concurrency and cost. Establish real‑time data streaming solutions via AWS Kinesis, Azure Event Hubs or Kafka.
- Collaborate & Deliver: Work closely with data analysts, BI teams and stakeholders to translate business requirements into data solutions and dashboards. Partner with DevOps/Cloud Ops to automate CI/CD for data code and infrastructure (Terraform, CloudFormation).
- Governance & Quality: Define and enforce data governance, security and compliance standards (GDPR, ISO 27001). Implement monitoring, alerting and data quality frameworks (Great Expectations, AWS CloudWatch).
- Mentor & Innovate: Act as a technical mentor for junior engineers; run brown‑bag sessions on new cloud services or data‑engineering patterns. Proactively research emerging big‑data and streaming technologies to keep our toolset cutting‑edge.

Who You Are
- Academic background: Bachelor's (or higher) in Computer Science, Engineering, IT or similar.
- Experience: 3+ years in a hands‑on Database Designer / Data Engineer role, ideally within a cloud environment.
- Technical skills:
  - Languages: Expert in SQL; strong Python or Scala proficiency.
  - Cloud services: At least one of AWS (Glue, S3, Kinesis, RDS) or Azure (Data Factory, Data Lake Storage, SQL Database).
  - Data modelling: Solid understanding of OLTP vs OLAP, star/snowflake schemas, and normalization vs denormalization trade‑offs.
  - Pipeline tools: Familiarity with Apache Spark, Kafka, Airflow or equivalent.
- Soft skills: Excellent communicator, able to present complex technical designs in clear, non‑technical terms. Strong analytical mindset; thrives on solving performance bottlenecks and scaling challenges. Team player with a collaborative attitude in agile/scrum settings.

Nice to Have
- Certifications: AWS Certified Data Analytics - Specialty, Azure Data Engineer Associate/Expert.
- Exposure to data‑science workflows (Jupyter, ML pipelines).
- Experience with containerized workloads (Docker, Kubernetes) for data processing.
- Familiarity with DataOps practices and tools (dbt, Great Expectations, Terraform).

Our Commitment to Diversity
We're an equal‑opportunity employer committed to inclusive hiring. All qualified applicants, regardless of ethnicity, gender identity, sexual orientation, neurodiversity, disability status or veteran status, are encouraged to apply.
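Since the role tests star/snowflake schema understanding alongside Spark, here is a minimal sketch of a typical star-schema OLAP query expressed in Spark with Scala: one fact table joined to two dimensions and rolled up. The warehouse paths, key names, and columns are hypothetical.

```scala
import org.apache.spark.sql.{SparkSession, functions => F}

object StarSchemaExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("star-schema-demo").getOrCreate()

    // Hypothetical star schema: one fact table, two dimension tables
    val factSales  = spark.read.parquet("s3a://warehouse/fact_sales/")
    val dimProduct = spark.read.parquet("s3a://warehouse/dim_product/")
    val dimDate    = spark.read.parquet("s3a://warehouse/dim_date/")

    // Typical OLAP roll-up: revenue by product category and month
    val revenue = factSales
      .join(dimProduct, "product_key") // surrogate keys link fact to dimensions
      .join(dimDate, "date_key")
      .groupBy("category", "year_month")
      .agg(F.sum("revenue").as("total_revenue"))

    revenue.show()
    spark.stop()
  }
}
```

The design trade-off the listing alludes to is visible here: the denormalized star keeps queries to one join per dimension, where a snowflake would normalize `dim_product` further and add joins in exchange for less redundancy.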

Posted 3 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

Pune, Maharashtra

On-site

As an Akka & Scala Engineer based in Pune, you will be an integral part of our client's backend engineering team. With 4-8 years of experience, you will play a key role in designing, developing, and implementing distributed, resilient, and reactive systems using Akka HTTP, Akka Streams, and the Akka Actor Model. Your responsibilities will include designing and developing backend services and APIs with Scala and Akka, architecting actor-based systems, and leveraging Akka Streams for real-time data processing. You will collaborate with cross-functional teams, participate in design discussions, and ensure the high availability and performance of the system.

To excel in this role, you must possess strong experience in Scala, a deep understanding of the Akka toolkit, and proficiency in building REST APIs using Akka HTTP. Experience with reactive programming, microservices architecture, SQL/NoSQL databases, and tools like Docker, Kubernetes, and CI/CD pipelines is essential. Additionally, familiarity with Akka Persistence, Kafka, monitoring tools, and cloud platforms is a plus.

Beyond technical skills, we value strong communication, problem-solving abilities, and a willingness to learn and adapt to new technologies. By joining us, you will have the opportunity to work with top-tier product and engineering teams, tackle complex technical challenges, and be part of a collaborative work culture. If you are passionate about leveraging your expertise in Akka and Scala to build cutting-edge systems and thrive in a dynamic environment, we encourage you to apply for this rewarding opportunity.
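To give a flavour of the Akka HTTP work the role describes, here is a minimal REST endpoint sketch, closely following the standard Akka HTTP server pattern. The service name, endpoint path, and port are illustrative assumptions.

```scala
import akka.actor.typed.ActorSystem
import akka.actor.typed.scaladsl.Behaviors
import akka.http.scaladsl.Http
import akka.http.scaladsl.server.Directives._

object HealthApi {
  def main(args: Array[String]): Unit = {
    // A typed ActorSystem provides the runtime for the HTTP server
    implicit val system: ActorSystem[Nothing] =
      ActorSystem(Behaviors.empty, "health-api")

    // Hypothetical endpoint: a REST health check assembled from routing directives
    val route =
      path("health") {
        get {
          complete("OK")
        }
      }

    Http().newServerAt("0.0.0.0", 8080).bind(route)
    println("Server online at http://localhost:8080/health")
  }
}
```

Real services would compose many such routes with `concat`, add JSON marshalling, and delegate request handling to actors or streams, but the routing DSL shown here is the backbone.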

Posted 3 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Data Engineer, you will be responsible for designing, developing, and maintaining scalable data pipelines using Spark (PySpark or Spark with Scala). Your role will involve building data ingestion and transformation frameworks for structured and unstructured data sources. Collaborating with data analysts, data scientists, and business stakeholders to understand requirements and deliver reliable data solutions will be a key aspect of your responsibilities. Additionally, you will work with large volumes of data to ensure quality, integrity, and consistency, optimizing data workflows for performance, scalability, and cost efficiency on cloud platforms such as AWS, Azure, or GCP. Implementing data quality checks and automation for ETL/ELT pipelines, monitoring and troubleshooting data issues in production, and documenting technical processes, system designs, and operational procedures are also part of your duties.

To excel in this role, you should have at least 3 years of experience as a Data Engineer or in a similar role. Hands-on experience with PySpark or Spark using Scala is essential, along with strong knowledge of SQL for data querying and transformation. You should also have experience working with any cloud platform (AWS, Azure, or GCP), a solid understanding of data warehousing concepts and big data architecture, and familiarity with version control systems like Git.

While not mandatory, it would be beneficial to have experience with data orchestration tools like Apache Airflow, Databricks Workflows, or similar; knowledge of Delta Lake, HDFS, or Kafka; familiarity with containerization tools such as Docker or Kubernetes; exposure to CI/CD practices and DevOps principles; and an understanding of data governance, security, and compliance standards.

If you are ready to join immediately and possess the required skills and experience, please share your details via email at nitin.patil@ust.com. Act fast for immediate attention!

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Greater Kolkata Area

On-site

Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must-have skills: Databricks Unified Data Analytics Platform, Microsoft Azure Databricks, PySpark
Good-to-have skills: NA
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with team members to understand project needs, developing innovative solutions, and ensuring that applications are optimized for performance and usability. You will engage in problem-solving discussions, contribute to the overall project strategy, and adapt to evolving requirements while maintaining a focus on delivering high-quality applications that align with business objectives.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Assist in the documentation of application processes and workflows.
- Engage in continuous learning to stay updated with the latest technologies and methodologies.

Professional & Technical Skills:
- Must-have skills: Proficiency in Databricks Unified Data Analytics Platform, Microsoft Azure Databricks, PySpark.
- Strong understanding of data integration techniques and ETL processes.
- Experience with cloud-based data storage solutions and data management.
- Familiarity with programming languages such as Python or Scala.
- Ability to troubleshoot and optimize application performance.

Additional Information:
- The candidate should have a minimum of 3 years of experience in the Databricks Unified Data Analytics Platform.
- This position is based at our Kolkata office.
- 15 years of full-time education is required.

Posted 3 weeks ago

Apply

6.0 - 11.0 years

9 - 19 Lacs

Bengaluru

Hybrid

Lead: 6-8 years

Responsibilities:
- Focus on production cost for the techniques and features.
- Mentor the team on benchmarking costs and performance KPIs.
- Keep the team focused on its objectives.

Required skills:
- Advanced proficiency in Python and/or Scala for data engineering tasks.
- Proficiency in PySpark and Scala Spark for distributed data processing, with hands-on experience in Azure Databricks.
- Expertise in Azure Databricks for data engineering, including Delta Lake, MLflow, and cluster management.
- Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and their big data and data warehousing services (e.g., Azure Data Factory, AWS Redshift).
- Expertise in data warehousing platforms such as Snowflake, Azure Synapse Analytics, or Redshift, including schema design, ETL/ELT processes, and query optimization.
- Experience with the Hadoop ecosystem (HDFS, Hive, HBase, etc.) and Apache Airflow for workflow orchestration and scheduling.
- Advanced knowledge of SQL for data warehousing and analytics, with experience in NoSQL databases (e.g., MongoDB) as a plus.
- Experience with version control systems (e.g., Git) and CI/CD pipelines.
- Familiarity with Java or other programming languages is a plus.
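For the Delta Lake on Azure Databricks portion of this profile, here is a minimal Scala sketch of an append-and-time-travel round trip. The mount paths are hypothetical; the delta-spark library is assumed on the classpath (it is preinstalled on Databricks, where the SparkSession is also provided for you).

```scala
import org.apache.spark.sql.SparkSession

object DeltaIngest {
  def main(args: Array[String]): Unit = {
    // On Databricks the SparkSession already exists; this builder is for local testing
    val spark = SparkSession.builder().appName("delta-ingest").getOrCreate()

    // Land raw records into a Delta table; Delta enforces the schema on append
    val updates = spark.read.json("/mnt/raw/customers/")
    updates.write
      .format("delta")
      .mode("append")
      .save("/mnt/lake/customers")

    // Delta keeps an ACID transaction log, so older snapshots stay queryable ("time travel")
    val firstSnapshot = spark.read
      .format("delta")
      .option("versionAsOf", "0")
      .load("/mnt/lake/customers")

    println(s"Rows in first snapshot: ${firstSnapshot.count()}")
  }
}
```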

Posted 3 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

Pune, Maharashtra, India

On-site

The Applications Development Senior Manager is a senior management level position responsible for accomplishing results through the management of a team or department in an effort to establish and implement new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to drive applications systems analysis and programming activities.

Responsibilities:
- Manage one or more Applications Development teams to accomplish established goals, and conduct personnel duties for the team (e.g. performance evaluations, hiring and disciplinary actions)
- Utilize in-depth knowledge and skills across multiple Applications Development areas to provide technical oversight across systems and applications
- Review and analyze proposed technical solutions for projects
- Contribute to the formulation of strategies for applications development and other functional areas
- Develop comprehensive knowledge of how areas of business integrate to accomplish business goals
- Provide evaluative judgment based on analysis of factual data in complicated and unique situations
- Impact the Applications Development area through monitoring delivery of end results, participating in budget management, and handling day-to-day staff management issues, including resource management and allocation of work within the team/project
- Ensure essential procedures are followed and contribute to defining standards, negotiating with external parties when necessary
- Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency, as well as effectively supervising the activity of others and creating accountability with those who fail to maintain these standards

Core Responsibilities:
This role is for a Data Engineering Tech Lead to work on the Vanguard Big Data Platform. The team is responsible for the design, architecture, development, and maintenance/support of leading Big Data initiatives and use cases providing business value.
- Interface with product teams to understand their requirements to build the ingestion pipelines and conformance layer for consumption by business.
- Work closely with the data ingestion team to track the requirements and drive the build-out of the canonical models.
- Provide guidance to the data conformance team for implementing requirements, changes, and enhancements to the conformance model.
- Do hands-on technical development as part of the conformance team to deliver the business requirements.
- Manage the workload of the team and the scrum process to align it with the objectives and priorities of the product owners.
- Participate in data management activities related to Risk and Regulatory requirements as needed.

Core Skills:
The Data Engineering lead will be working very closely with, and managing the work of, a team of data engineers working on our Big Data Platform. The lead will need the below core skills:
- Strong, solid understanding of the Big Data architecture and the ability to troubleshoot performance and/or development issues on Hadoop (preferably Cloudera)
- Hands-on experience working with Hive, Impala, Kafka, HBase, and Spark for data curation/conformance related work
- Strong proficiency in Spark for development work related to curation/conformance; strong Scala development (with a previous Java background) preferred
- Experience with Spark/Kafka or equivalent streaming/batch processing and event-based messaging
- Strong data analysis skills and the ability to slice and dice the data as needed for business reporting
- Experience working in an agile environment with fast-paced, changing requirements
- Excellent planning and organizational skills
- Strong communication skills

Additional Requirements (nice to have):
- Cloudera/Hortonworks/AWS EMR, Couchbase, S3 experience a plus
- Experience with cloud integration on AWS, Snowflake, Couchbase, or GCP tech stack components
- Relational SQL (Oracle, SQL Server) and NoSQL (MongoDB) database integration and data distribution principles experience
- Experience with API development and use of JSON/XML/Hypermedia data formats
- Analysis and development across lines of business products/functions, including Payments, Digital Channels, Liquidities, Trade, Sales, Pricing, and Client Experience, with cross-trained functional and/or technical knowledge
- Alignment with Engineering Excellence development principles and standards

Qualifications:
- 8-12 years of relevant experience in the Financial Services industry
- Experience as an Applications Development Manager
- Experience at a senior level in an Applications Development role
- Stakeholder and people management experience
- Demonstrated leadership skills
- Proven project management skills
- Basic knowledge of industry practices and standards
- Consistently demonstrates clear and concise written and verbal communication

Education:
- Bachelor's degree/University degree or equivalent experience
- Master's degree preferred

This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.

Job Family Group: Technology
Job Family: Applications Development
Time Type: Full time
Most Relevant Skills: Please see the requirements listed above.
Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter.

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
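To illustrate the Spark/Kafka streaming-conformance work this listing describes, here is a minimal Spark Structured Streaming sketch in Scala: read raw events from Kafka, conform the payload, and land the curated stream with checkpointing. The broker, topic, field names, and output paths are hypothetical, and the spark-sql-kafka connector package is assumed on the classpath.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object TradeStreamConformance {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("trade-conformance").getOrCreate()

    // Hypothetical Kafka source; requires the spark-sql-kafka-0-10 package
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "trades")
      .load()

    // Conformance step: Kafka values arrive as bytes, so cast and parse into the canonical shape
    val conformed = raw
      .selectExpr("CAST(value AS STRING) AS json")
      .select(
        get_json_object(col("json"), "$.trade_id").as("trade_id"),
        get_json_object(col("json"), "$.notional").cast("double").as("notional"))

    // Land the curated stream as Parquet; the checkpoint gives exactly-once file output
    val query = conformed.writeStream
      .format("parquet")
      .option("path", "/data/curated/trades")
      .option("checkpointLocation", "/data/checkpoints/trades")
      .start()

    query.awaitTermination()
  }
}
```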

Posted 3 weeks ago

Apply

5.0 - 8.0 years

3 - 7 Lacs

Bengaluru

Work from Office

Role Purpose
The purpose of this role is to design, test, and maintain software programs for operating systems or applications that need to be deployed at a client end, and to ensure they meet 100% quality assurance parameters.

Responsibilities
1. Be instrumental in understanding the requirements and design of the product/software:
- Develop software solutions by studying information needs, systems flow, data usage, and work processes
- Investigate problem areas throughout the software development life cycle
- Facilitate root cause analysis of system issues and problem statements
- Identify ideas to improve system performance and availability
- Analyze client requirements and convert them into feasible designs
- Collaborate with functional teams or systems analysts who carry out the detailed investigation into software requirements
- Confer with project managers to obtain information on software capabilities

2. Perform coding and ensure optimal software/module development:
- Determine operational feasibility by evaluating analysis, problem definition, requirements, and proposed software
- Develop and automate processes for software validation by setting up and designing test cases/scenarios/usage cases and executing them
- Modify software to fix errors, adapt it to new hardware, improve its performance, or upgrade interfaces
- Analyze information to recommend and plan the installation of new systems or modifications of existing systems
- Ensure that code is error-free, with no bugs or test failures
- Prepare reports on programming project specifications, activities, and status
- Ensure all issues are raised per the norms defined for the project/program/account, with clear descriptions and replication patterns
- Compile timely, comprehensive, and accurate documentation and reports as requested
- Coordinate with the team on daily project status and progress, and document it
- Provide feedback on usability and serviceability, trace results to quality risks, and report them to the concerned stakeholders

3. Provide status reporting and maintain customer focus on an ongoing basis with respect to the project and its execution:
- Capture all requirements and clarifications from the client for better-quality work
- Take feedback regularly to ensure smooth and on-time delivery
- Participate in continuing education and training to remain current on best practices, learn new programming languages, and better assist other team members
- Consult with engineering staff to evaluate software-hardware interfaces and develop specifications and performance requirements
- Document and demonstrate solutions by developing documentation, flowcharts, layouts, diagrams, charts, code comments, and clear code
- Document necessary details and reports formally for proper understanding of the software, from client proposal to implementation
- Ensure good quality of interaction with the customer with respect to email content, fault report tracking, voice calls, business etiquette, etc.
- Respond to customer requests in a timely manner, with no instances of complaints either internally or externally

Mandatory Skills: Scala programming
Experience: 5-8 years

Posted 3 weeks ago

Apply

4.0 - 6.0 years

6 - 10 Lacs

Chennai

Work from Office

As a Senior Cloud Data Platform (AWS) Specialist at Incedo, you will be responsible for designing, deploying, and maintaining cloud-based data platforms on AWS. You will work with data engineers, data scientists, and business analysts to understand business requirements and design scalable, reliable, and cost-effective solutions that meet those requirements.

Roles & Responsibilities:
- Designing, developing, and deploying cloud-based data platforms using Amazon Web Services (AWS)
- Integrating and processing large amounts of structured and unstructured data from various sources
- Implementing and optimizing ETL processes and data pipelines
- Developing and maintaining security and access controls
- Collaborating with other teams to ensure the consistency and integrity of data
- Troubleshooting and resolving data platform issues

Technical Skills Requirements:
- In-depth knowledge of AWS services and tools such as AWS Glue, AWS Redshift, and AWS Lambda
- Experience in building scalable and reliable data pipelines using AWS services, Apache Spark, and related big data technologies
- Familiarity with cloud-based infrastructure and deployment, specifically on AWS
- Strong knowledge of programming languages such as Python, Java, and SQL
- Excellent communication skills, with the ability to convey complex technical information to non-technical stakeholders in a clear and concise manner
- Understanding of, and alignment with, the company's long-term vision
- Leadership: provide guidance and support to team members, ensure the successful completion of tasks, and promote a positive work environment that fosters collaboration and productivity, taking responsibility for the whole team

Qualifications:
- 4-6 years of work experience in a relevant field
- B.Tech/B.E/M.Tech or MCA degree from a reputed university; a computer science background is preferred
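As a rough illustration of the Spark-on-AWS pipeline work this role involves, here is a sketch that reads curated data from S3 and loads it into Redshift over plain JDBC. The bucket, cluster endpoint, table, and environment variables are all hypothetical, and the Redshift JDBC driver JAR is assumed to be on the classpath; production pipelines would more commonly stage to S3 and use Redshift's COPY, often via AWS Glue.

```scala
import org.apache.spark.sql.{SaveMode, SparkSession}

object S3ToRedshift {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("s3-to-redshift").getOrCreate()

    // Hypothetical curated dataset on S3
    val orders = spark.read.parquet("s3a://example-bucket/curated/orders/")

    // Plain JDBC load keeps the sketch self-contained; credentials come from the environment
    orders.write
      .format("jdbc")
      .option("url", "jdbc:redshift://example.cluster.redshift.amazonaws.com:5439/dev")
      .option("driver", "com.amazon.redshift.jdbc42.Driver") // assumption: Redshift JDBC 4.2 driver
      .option("dbtable", "analytics.orders")
      .option("user", sys.env.getOrElse("REDSHIFT_USER", "admin"))
      .option("password", sys.env.getOrElse("REDSHIFT_PASSWORD", ""))
      .mode(SaveMode.Append)
      .save()

    spark.stop()
  }
}
```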

Posted 3 weeks ago

Apply

5.0 - 8.0 years

3 - 7 Lacs

Hyderabad

Work from Office

Long Description
Experience and expertise in at least one of the following languages: Java, Scala, Python. Experience and expertise in Spark architecture. Experience in the range of 6-10+ years. Good problem-solving and analytical skills. Ability to comprehend business requirements and translate them into technical requirements. Good communication and collaboration skills with fellow team members and across vendors. Familiarity with the development life cycle, including CI/CD pipelines. Proven experience in, and interest in, supporting existing strategic applications. Familiarity working with agile methodology.

Mandatory Skills: Scala programming
Experience: 5-8 years

Posted 3 weeks ago

Apply

5.0 - 8.0 years

6 - 10 Lacs

Pune

Hybrid

Position: Cloud Data Engineer
Mandatory Skills: Cloud-PaaS-GCP-Google Cloud Platform
Experience Required: 5-8 years (additional openings: 8-13 years)
Work Location: Wipro, PAN India
Work Arrangement: Hybrid model with 3 days in the Wipro office

Job Description:
- Strong expertise in SQL
- Proficient in Python
- Excellent knowledge of any cloud technology (AWS, Azure, GCP, etc.), with a preference for GCP
- Familiarity with PySpark is preferred

Posted 3 weeks ago

Apply

5.0 - 10.0 years

7 - 11 Lacs

Bengaluru

Work from Office

Senior Software Engineer - Data

We're seeking a Senior Software Engineer or a Lead Software Engineer to join one of our Data Layer teams. As the name implies, the Data Layer is at the core of all things data at Zeta. Our responsibilities include:
- Developing and maintaining the Zeta Identity Graph platform, which collects billions of behavioural, demographic, location and transactional signals to power people-based marketing.
- Ingesting vast amounts of identity and event data from our customers and partners.
- Facilitating data transfers across systems.
- Ensuring the integrity and health of our datasets.
- And much more.

As a member of this team, the data engineer will be responsible for designing and expanding our existing data infrastructure, enabling easy access to data, supporting complex data analyses, and automating optimization workflows for business and marketing operations.

Essential Responsibilities:
- Building, refining, tuning, and maintaining our real-time and batch data infrastructure
- Daily use of technologies such as Spark, Airflow, Snowflake, Hive, Scylla, Django, FastAPI, etc.
- Maintaining data quality and accuracy across production data systems
- Working with Data Engineers to optimize data models and workflows
- Working with Data Analysts to develop ETL processes for analysis and reporting
- Working with Product Managers to design and build data products
- Working with our DevOps team to scale and optimize our data infrastructure
- Participating in architecture discussions, influencing the road map, and taking ownership and responsibility over new projects
- Participating in the on-call rotation in their respective time zones (be available by phone or email in case something goes wrong)

Desired Characteristics:
- Minimum 5-10 years of software engineering experience.
- Proven long-term experience with, and enthusiasm for, distributed data processing at scale, plus eagerness to learn new things.
- Expertise in designing and architecting distributed, low-latency, scalable solutions in either cloud or on-premises environments.
- Exposure to the whole software development lifecycle, from inception to production and monitoring.
- Fluency in Python, or solid experience in Scala or Java.
- Proficiency with relational databases and advanced SQL.
- Expert in the usage of services like Spark and Hive.
- Experience with web frameworks such as Flask and Django.
- Experience with schedulers such as Apache Airflow, Apache Luigi, Chronos, etc.
- Experience with Kafka or other stream message processing solutions.
- Experience using cloud services (AWS) at scale.
- Experience in agile software development processes.
- Excellent interpersonal and communication skills.

Nice to have:
- Experience with large-scale / multi-tenant distributed systems.
- Experience with columnar / NoSQL databases: Vertica, Snowflake, HBase, Scylla, Couchbase.
- Experience with real-time streaming frameworks: Flink, Storm.
- Experience with open table formats such as Iceberg, Hudi or Delta Lake.

Posted 3 weeks ago

Apply