
18519 Tuning Jobs - Page 35

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the employer's job portal.

12.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Summary: We are seeking an experienced Database Lead with a strong background in MS SQL Server (L4, architect level) and working knowledge of Oracle (L3). Experience in PostgreSQL is a plus. This role demands excellent communication skills and proven experience leading and mentoring database teams. You will be responsible for architecting, optimizing, and managing critical database systems that support enterprise-level applications.

Key Responsibilities:
- Lead the design, implementation, and maintenance of scalable, high-performing database solutions, primarily on MS SQL Server.
- Provide architectural guidance on database design, performance tuning, and capacity planning.
- Act as the subject matter expert (SME) for MS SQL Server at an architect level.
- Support and maintain Oracle databases at the L3 support level.
- Provide direction and recommendations on PostgreSQL if/when required.
- Mentor and manage a team of 4+ database administrators, fostering collaboration and growth.
- Establish best practices for database development, deployment, and maintenance.
- Collaborate with cross-functional teams, including development, infrastructure, and application support.
- Ensure data integrity, security, and availability across all managed database platforms.
- Participate in the on-call support rotation and manage incident resolution in a timely manner.

Required Skills & Qualifications:
- 12+ years of overall experience in database administration and architecture.
- MS SQL Server (L4 / architect level): extensive hands-on experience in architecture, clustering, replication, performance tuning, and high availability.
- Oracle (L3 support level): solid experience in installation, backup & recovery, and performance optimization.
- Exposure to PostgreSQL environments is a strong plus.
- Strong understanding of database security, backup, and disaster recovery solutions.
- 4+ years of experience leading and mentoring teams.
- Excellent verbal and written communication skills.
- Ability to work in a fast-paced, collaborative environment.

Posted 5 days ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Title: MSSQL DBA with AWS/Azure Expertise

Job Description: We are seeking a skilled MSSQL Database Administrator (DBA) with expertise in Amazon Web Services (AWS)/Azure to join our team. The ideal candidate will be responsible for managing, maintaining, and optimizing our MSSQL databases hosted on AWS/Azure, ensuring high availability, performance, and security.

Key Responsibilities:
- Database Management: Install, configure, and maintain MSSQL Server databases in AWS/Azure environments. Monitor database performance and implement tuning and optimization strategies.
- Backup and Recovery: Develop and implement backup and recovery strategies to ensure data integrity and availability. Perform regular database backups and restore operations as needed.
- Performance Tuning: Analyze and optimize SQL queries and database structures for improved performance. Identify and resolve performance bottlenecks.
- AWS/Azure Integration: Utilize AWS services such as RDS, EC2, and S3 for database hosting and management. Implement and manage database replication and clustering in AWS.
- Troubleshooting: Diagnose and resolve database-related issues and incidents in a timely manner. Collaborate with development teams to troubleshoot application-related database issues.
- PoC Development: Lead the design and implementation of proof-of-concept projects to validate new ideas and technologies on AWS/Azure. Collaborate with stakeholders to define project objectives, scope, and success criteria for PoCs.
- Technical Expertise: Provide expert guidance on AWS/Azure services, architecture, and best practices to ensure optimal solutions. Evaluate and recommend appropriate AWS/Azure services and tools based on project requirements.
- Documentation and Reporting: Document PoC findings, including architecture diagrams, implementation details, and performance metrics. Present results and recommendations to stakeholders, highlighting potential benefits and challenges.
- Mentorship: Mentor and train team members on AWS/Azure best practices and PoC methodologies. Foster a culture of innovation and experimentation within the team.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Proven experience as an MSSQL DBA, with a strong understanding of database design and management.
- Proven experience with AWS/Azure services, particularly RDS, DRS, S3, DevOps, and EC2.
- AWS/Azure Certified Database – Specialty or similar certification.
- Strong problem-solving skills and the ability to work under pressure.
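The backup-and-recovery responsibility described above can be made concrete with a small sketch. The rotation scheme, retention windows, and function name below are illustrative assumptions, not part of the posting; real MSSQL backups on RDS or EC2 would be driven by native backup jobs or AWS Backup plans.

```python
from datetime import date, timedelta

def backups_to_keep(today, daily=7, weekly=4, monthly=3):
    """Pick backup dates to retain under a simple grandfather-father-son
    rotation: the last `daily` days, the last `weekly` Sundays, and the
    first day of the last `monthly` months (all windows are assumptions)."""
    keep = {today - timedelta(days=i) for i in range(daily)}
    # most recent Sunday, then step back one week at a time
    sunday = today - timedelta(days=(today.weekday() + 1) % 7)
    keep |= {sunday - timedelta(weeks=i) for i in range(weekly)}
    month, year = today.month, today.year
    for _ in range(monthly):
        keep.add(date(year, month, 1))
        month -= 1
        if month == 0:
            month, year = 12, year - 1
    return keep

retained = backups_to_keep(date(2025, 8, 1))
```

Any backup dated outside `retained` becomes a deletion candidate; the point is that the retention policy is an explicit, testable function rather than an ad-hoc script.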

Posted 5 days ago

Apply

5.0 - 12.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

Data Software Engineer – Chennai & Coimbatore. Walk-in on 2 Aug 25. Hybrid role.

- 5-12 years of experience in Big Data and related technologies
- Expert-level understanding of distributed computing principles
- Expert-level knowledge of and experience with Apache Spark
- Hands-on programming with Python
- Proficiency with Hadoop v2, MapReduce, HDFS, Sqoop
- Experience building stream-processing systems using technologies such as Apache Storm or Spark Streaming
- Good understanding of Big Data querying tools such as Hive and Impala
- Experience integrating data from multiple sources such as RDBMS (SQL Server, Oracle), ERP, and files
- Good understanding of SQL queries, joins, stored procedures, and relational schemas
- Experience with NoSQL databases such as HBase, Cassandra, and MongoDB
- Knowledge of ETL techniques and frameworks
- Performance tuning of Spark jobs
- Experience with Azure Databricks
- Ability to lead a team efficiently
- Experience designing and implementing Big Data solutions
- Practitioner of Agile methodology
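For readers unfamiliar with the distributed-computing principles this posting asks for, the core shape of a Spark-style job can be sketched in plain Python. This is a single-process stand-in for illustration only: it mimics the idea behind `flatMap` and `reduceByKey`, not the actual PySpark API.

```python
from collections import defaultdict
from itertools import chain

def map_phase(lines):
    # "map" step: emit (word, 1) pairs, as a flatMap would in Spark
    return chain.from_iterable(
        ((w.lower(), 1) for w in line.split()) for line in lines
    )

def reduce_phase(pairs):
    # "reduceByKey" step: sum the counts per word after the shuffle
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

lines = ["spark makes big data simple", "big data big insights"]
counts = reduce_phase(map_phase(lines))
```

In a real cluster the map phase runs on partitioned data across executors and the shuffle moves pairs with the same key to the same reducer; the sequential version above preserves only the algorithmic structure.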

Posted 5 days ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Experience: 4 to 8 years

Key Requirements:
- Managing database servers, storage, and network components.
- Conducting regular database health checks and implementing proactive measures.
- Designing, installing, upgrading, patching, tuning, monitoring, and troubleshooting databases with large data volumes (terabytes) and high transaction rates.
- Experience with Data Guard setup and management.
- Experience defining security standards and supporting security audits.
- Experience with Perl, Unix shell scripting, and Python for custom monitoring and automation.
- Experience working with the development team during design sessions to provide performance and scaling inputs.
- Experience troubleshooting Oracle database problems that may originate in the supporting infrastructure, including the application server, network, storage, and operating system layers.
- Exadata feature enablement, monitoring, review, and placement-strategy updates.
- Working knowledge of Exadata 7 and above platforms.
- Monitor resource utilization and Oracle Compute Unit (OCPU) enablement.
- Good exposure and hands-on experience with Oracle ExaCC (Exadata Cloud@Customer) architecture, managing and creating resources using the cloud dashboard.
- Create and manage Oracle SRs and support the team in issue resolution.
- Strong experience in Oracle core database administration and monitoring.
- Exposure to Exadata performance.
- Experience migrating Oracle databases from on-premise to Oracle Exadata Cloud@Customer.
- Experience with Oracle Enterprise Manager (OEM).
- Experience with patch/upgrade and release management (database, storage node, server, network, Oracle Exadata).
- Develop and maintain system documentation related to the Oracle Exadata and software configuration.
- Apply process and architectural improvements to continually improve the availability, capacity, and performance of database systems.
- Communicate appropriately and efficiently with management, customers, and vendors.
- Sustain a team-oriented, fast-paced environment.
- On-call rotation is required to support a 24/7 environment; the candidate is also expected to work outside business hours to support global delivery and to provide off-hours support so that database availability is not impacted during normal business hours.
- Experience with RMAN database backup and recovery using ZDLRA.
- Proficient in using real-time monitoring tools like Grafana.

Background: The candidate should be degree-educated with at least 6+ years of solid experience as a production support DBA. German-speaking candidates have an added advantage.

Tools and technologies: monitoring and reporting tools (Tivoli Performance Viewer); ticketing and documentation (Remedy, Word, Visio, Excel); servers (AIX, Solaris, Linux, Windows Server); backup products (TSM, NetBackup, Networker); database stacks (Oracle, MS SQL, Informix); web and middleware stacks (WebLogic Application Server, MQ, Apache, Tomcat, IIS).
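The custom monitoring and automation this posting mentions (Perl/shell/Python) often amounts to small health-check scripts. Below is a hedged sketch with invented tablespace numbers; in a real environment the rows would come from querying DBA_DATA_FILES and DBA_FREE_SPACE, and the 85% threshold is an arbitrary assumption.

```python
# Sample rows as (tablespace, used_mb, total_mb); in practice these would
# be fetched from the data dictionary views, not hard-coded.
TABLESPACES = [("SYSTEM", 820, 1024), ("USERS", 9700, 10240), ("TEMP", 512, 4096)]

def health_report(rows, warn_pct=85.0):
    """Flag any tablespace whose usage meets or exceeds the warning
    threshold, returning (name, percent_used) pairs."""
    alerts = []
    for name, used, total in rows:
        pct = 100.0 * used / total
        if pct >= warn_pct:
            alerts.append((name, round(pct, 1)))
    return alerts

alerts = health_report(TABLESPACES)
```

A script like this would typically run from cron and feed a dashboard such as Grafana or raise an OEM-style alert; only the flagged tablespaces need operator attention.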

Posted 5 days ago

Apply

5.0 years

0 Lacs

India

On-site

DDN Storage is seeking great candidates to join our dynamic team of passionate, customer-enabling technologists! This is an incredible opportunity to be part of a company that has been at the forefront of AI and high-performance data storage innovation for over two decades. DDN Storage is a global market leader renowned for powering many of the world's most demanding AI data centers, in industries ranging from life sciences and healthcare to financial services, autonomous cars, government, academia, research, and manufacturing.

"DDN's A3I solutions are transforming the landscape of AI infrastructure." – IDC

"The real differentiator is DDN. I never hesitate to recommend DDN. DDN is the de facto name for AI storage in high-performance environments." – Marc Hamilton, VP, Solutions Architecture & Engineering, NVIDIA

DDN Storage is the global leader in AI and multi-cloud data management at scale. Our cutting-edge storage and data management solutions are designed to accelerate AI workloads, enabling organizations to extract maximum value from their data. With a proven track record of performance, reliability, and scalability, DDN Storage empowers businesses to tackle the most challenging AI and data-intensive workloads with confidence. Our success is driven by our unwavering commitment to innovation, customer-centricity, and a team of passionate professionals who bring their expertise and dedication to every project. This is a chance to make a significant impact at a company that is shaping the future of AI and data management. Our commitment to innovation, customer success, and market leadership makes this an exciting and rewarding role for a driven professional looking to make a lasting impact in the world of AI and data storage.

Job Summary: We are looking for a Senior Development Engineer for our team, which focuses on creating storage solutions for the most data-intensive workloads.
The role focuses on the EMF framework, which provides the deployment, management, and monitoring platform for an HPC filesystem. EMF is written entirely in Rust. The ideal candidate will have experience designing, implementing, and shipping software using modern development tooling and practices.

Responsibilities for this role include, but are not limited to:
- Software design and development for new features, and maintenance of existing features.
- Analysis of bug reports and development of software fixes on multiple platforms.
- Work with the engineering manager and a geographically distributed team to understand product requirements and features.
- Contribute to and validate product documentation.
- Assist with performance tuning of features for specific environments and use cases.

Qualifications:
- BS/MS in Computer Science, Computer Engineering, or equivalent degree/experience.
- 5+ years of software development experience in Linux environments with Rust and Bash scripting.
- 5+ years of modern full-stack software development experience, including WebAssembly, CSS, and HTML, preferred.
- Experience with JIRA, Jenkins, Gerrit, Git, and GitHub preferred.
- Prior experience working with enterprise-class or HPC storage systems and/or distributed systems a bonus.

Our team is highly motivated and focused on engineering excellence. We look for individuals who appreciate challenging themselves and thrive on curiosity. Engineers are encouraged to work across multiple areas of the company. We operate with a flat organizational structure. All employees are expected to be hands-on and to contribute directly to the company's mission. Leadership is given to those who show initiative and consistently deliver excellence. Work ethic and strong prioritization skills are important. All engineers and researchers are expected to have strong communication skills, and should be able to concisely and accurately share knowledge with their teammates.

DataDirect Networks, Inc. is an Equal Opportunity/Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity, gender expression, transgender, sex stereotyping, sexual orientation, national origin, disability, protected Veteran status, or any other characteristic protected by applicable federal, state, or local law.

Posted 5 days ago

Apply

5.0 years

0 Lacs

India

Remote

Location: Remote (India only)
Type: Full-time
Experience: 5+ years in SOC/NOC environments
Company: Symosis Security

About the Role
Symosis Security is hiring experienced Tier 2 SOC Analysts to support our 24x7 Managed Security Services delivery. You'll play a critical role in triaging and responding to security alerts, conducting threat investigations, and continuously tuning detection logic using CrowdStrike, InsightIDR, and InsightVM.

Key Responsibilities
- Analyze and triage alerts in InsightIDR and CrowdStrike Falcon
- Perform initial and secondary investigation of potential threats
- Correlate events, enrich alerts with contextual data, and escalate as needed
- Tune detection rules, reduce false positives, and document response playbooks
- Conduct threat hunting and IOC enrichment based on evolving threat intel
- Track vulnerability findings and coordinate with the vulnerability management process
- Collaborate closely with the U.S.-based SOC Manager and global analyst team
- Support onboarding, shift handoffs, and continuous improvement of SOC workflows

Requirements
- 5+ years of experience in a SOC, NOC, or MSSP environment
- Strong working knowledge of EDR, SIEM, and VM tools, ideally CrowdStrike, InsightIDR, InsightVM, and ServiceNow
- Familiarity with NIST, MITRE ATT&CK, and common alert triage frameworks
- Strong documentation, incident reporting, and communication skills
- Willingness to work night or rotating shifts to support U.S. client coverage
- Tool certifications (CrowdStrike, Rapid7) preferred, or achievable within 4 weeks
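The alert-triage and false-positive-reduction work described above can be sketched as a scoring function. Everything here, the rule names, the severity scale, and the suppression list, is a hypothetical illustration, not CrowdStrike or InsightIDR behavior.

```python
# Rules tuned out as known false positives (hypothetical names)
SUPPRESSED_RULES = {"dns-benign-beacon"}

def triage(alerts):
    """Drop suppressed rules, weight severity by asset criticality,
    and return the remaining alerts ordered highest-risk first."""
    scored = []
    for a in alerts:
        if a["rule"] in SUPPRESSED_RULES:
            continue  # tuned-out false positive
        score = a["severity"] * (2 if a["asset_critical"] else 1)
        scored.append({**a, "score": score})
    return sorted(scored, key=lambda a: a["score"], reverse=True)

queue = triage([
    {"rule": "psexec-lateral-move", "severity": 8, "asset_critical": True},
    {"rule": "dns-benign-beacon", "severity": 3, "asset_critical": False},
    {"rule": "failed-logins-burst", "severity": 5, "asset_critical": False},
])
```

Real SIEM tuning happens in the tool's rule language rather than in Python, but the shape is the same: suppress known noise, enrich with context, and work the queue from the top.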

Posted 5 days ago

Apply

0 years

0 Lacs

India

Remote

About The Opportunity
HoGo Fresh is an agri-tech company on a mission to build a cleaner, more efficient farm-to-consumer supply chain. We connect farmers directly to markets using technology, traceability, and sustainable logistics. Our brand stands for transparency, freshness, and impact.

Role & Responsibilities
- Collaborate with product and design teams to develop responsive web pages using HTML5, CSS3, and JavaScript (ES6+).
- Implement interactive UI components and manage application state with React.js.
- Optimize website performance and ensure cross-browser compatibility on desktop and mobile.
- Write clean, modular code, maintain documentation, and author unit tests.
- Assist in debugging, troubleshooting, and performance tuning in a fast-paced environment.
- Participate in code reviews and contribute to continuous integration workflows.

Skills & Qualifications
Must-Have:
- Pursuing a Bachelor's degree in Computer Science, Engineering, or a related field.
- Proficiency in HTML5, CSS3, and JavaScript (ES6+).
- Basic experience with React.js and Git-based version control.
Preferred:
- Understanding of RESTful APIs and asynchronous data fetching (AJAX/Fetch).
- Exposure to Node.js or front-end build tools (Webpack, Babel).
- Familiarity with responsive design and core UI/UX principles.

Benefits & Culture Highlights
- Flexible remote work schedule with mentorship from senior engineers.
- Hands-on learning in a fast-paced, innovation-driven environment.
- Access to training resources and potential for full-time placement upon successful internship.

Skills: RESTful APIs, responsive design, CSS3, Node.js, React.js, Babel, HTML5, Git, AJAX/Fetch, Webpack, UI/UX principles, JavaScript (ES6+)

Posted 5 days ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

MicroStrategy Senior Developer
Experience: 7-10 years

Description: 7-10 years of experience designing and building BI solutions. Must have expertise in MicroStrategy Desktop/Web/Server, strong SQL and data modeling skills, and working knowledge of AWS Redshift functions. Experience in dashboard/report development, data integration, and performance tuning is essential.

Key Skills:
- MicroStrategy (Desktop, Web, Intelligence Server, Mobile)
- SQL, dimensional data modeling, data integration
- Report and dashboard development, performance optimization
- AWS Redshift (functions, integration)
- Strong analytical and communication skills

Preferred:
- Experience with Power BI and the ability to switch between tools
- MicroStrategy certifications

Posted 5 days ago

Apply

0.0 - 3.0 years

4 - 8 Lacs

Domlur, Bengaluru, Karnataka

On-site

Job title: Data Analyst
Notice period: Immediate to 1 month
Job location: Bangalore
Work culture: Hybrid (Tue, Wed, Thu in office)
Mandatory skills: Looker, with a minimum of 4 years of hands-on experience
Total experience: 3 to 5 years

Key Responsibilities
A LookML specialist with client-handling experience is a professional who not only possesses expertise in LookML (Looker's modeling language) for data modeling and analysis but also excels at communicating with clients and managing relationships around data insights and reporting. The role involves understanding client needs, translating them into effective LookML data models, and then presenting and explaining those models and the resulting visualizations to clients.
- Develop reusable LookML components to standardize and optimize dashboard and report creation, ensuring consistency and efficiency.
- Design and develop insightful Looker Studio dashboards that effectively communicate key performance indicators (KPIs) and customer behavior trends.
- Collaborate with program, UX, and product management teams to translate data insights into actionable strategies for optimizing the customer journey.
- Design and customize interactive dashboards with data visualizations (charts, graphs) to enhance insight presentation and facilitate data-driven decision-making.
- Write and optimize complex SQL queries for data extraction, transformation, and performance improvement in BigQuery.
- Integrate Looker with various data sources, including BigQuery, Cloud Storage, and external APIs, ensuring seamless data connectivity.
- Identify and resolve performance bottlenecks in Looker by tuning queries, optimizing caching strategies, and implementing indexing options.
- Configure user roles and permissions within Looker, enforcing data security best practices such as row-level and field-level security.
- Develop Python-based automation scripts that interact with Looker's API for workflow automation and integration with other tools.
- Use version control to manage LookML code changes and collaborate with other developers.
- Implement and maintain a comprehensive GA4 tracking plan, ensuring accurate data collection set up in a standardized, repeatable template.
- Generate weekly email reports that summarize key metrics and provide actionable insights for stakeholders.

Job Type: Contractual / Temporary; contract length: 4 months
Pay: ₹427,548.90 - ₹815,355.47 per year
Ability to commute/relocate: Domlur, Bengaluru, Karnataka: reliably commute or plan to relocate before starting work (Preferred)
Experience:
- Looker analytics: 4 years (Preferred)
- LookML: 3 years (Preferred)
- Looker integration with third-party APIs: 3 years (Preferred)
- Client handling / client demos: 3 years (Preferred)
- Requirement gathering from clients: 3 years (Preferred)
Work Location: In person
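The weekly email-report responsibility in this posting can be illustrated with a minimal aggregation sketch. The row shape and metric names are assumptions; in practice the rows would come from a Looker API query, which is not modeled here.

```python
from datetime import date

# Hypothetical rows as a Looker query might return them: (day, sessions, orders)
ROWS = [
    (date(2025, 7, 28), 1200, 48),
    (date(2025, 7, 29), 1350, 54),
    (date(2025, 7, 30), 990, 33),
]

def weekly_summary(rows):
    """Roll daily rows up into the week's headline KPIs."""
    sessions = sum(r[1] for r in rows)
    orders = sum(r[2] for r in rows)
    conv = round(100.0 * orders / sessions, 2)
    return {"sessions": sessions, "orders": orders, "conversion_pct": conv}

summary = weekly_summary(ROWS)
```

The dict produced here is what would be templated into the stakeholder email; keeping the KPI math in one function makes the report reproducible week over week.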

Posted 5 days ago

Apply

1.0 years

0 Lacs

Gurugram, Haryana, India

On-site

We are seeking a skilled and motivated Oracle APEX Developer with over 1 year of experience to join our team in Gurgaon. The ideal candidate will possess a strong background in Oracle technologies and a passion for building efficient, user-friendly applications. This is a full-time, work-from-office role.

Key Responsibilities:
- Design, develop, and maintain applications using Oracle APEX.
- Write efficient and optimized SQL and PL/SQL queries.
- Ensure performance tuning, database integrity, and table maintenance.
- Work on support, maintenance, and enhancement of existing applications.
- Collaborate with cross-functional teams to understand requirements and deliver solutions.
- Troubleshoot and resolve technical issues in a timely manner.

Required Skills:
- 3+ years of hands-on experience with Oracle APEX, Oracle SQL, and PL/SQL.
- Proficiency in Oracle Database versions 10g and 11g.
- Working knowledge of HTML, JavaScript, and AJAX.
- Strong analytical and problem-solving skills.
- Willingness to work on support as well as development/enhancement projects.

Preferred Attributes:
- Strong communication skills.
- Ability to work independently as well as part of a team.
- Attention to detail and commitment to high-quality work.

Posted 5 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

About the company: Our client is a prominent Indian multinational corporation specializing in information technology (IT), consulting, and business process services. It is headquartered in Bengaluru, has gross revenue of ₹222.1 billion and a global workforce of 234,054, is listed on NASDAQ, and operates in over 60 countries, serving clients across various industries, including financial services, healthcare, manufacturing, retail, and telecommunications. The company consolidated its cloud, data, analytics, AI, and related businesses under the tech services business line. Major delivery centers in India include Chennai, Pune, Hyderabad, Bengaluru, Kochi, Kolkata, and Noida.

Job Title: SAP Basis Consultant
Location: Chennai
Experience: 7+ years
Job Type: Contract to hire
Notice Period: Immediate joiners

Job Summary: We are seeking an experienced Senior SAP Basis Consultant specializing in Rise with SAP environments to lead the deployment, migration, and ongoing support of our SAP S/4HANA cloud-based landscape. The ideal candidate will possess extensive experience in SAP cloud infrastructure, SAP BTP integration, and SAP S/4HANA cloud/on-premise hybrid landscapes, ensuring optimized performance, security, and stability.

Key Responsibilities:
- Rise with SAP Implementation & Migration: Lead end-to-end migration projects to SAP S/4HANA Cloud or hybrid Rise environments, including system conversion, landscape transformation, and integration.
- Cloud Infrastructure Management: Manage SAP S/4HANA in Rise environments, including SAP BTP, SAP Cloud Connector, and cloud hosting providers (AWS, Azure, SAP Cloud Platform).
- System Administration & Support: Ensure optimal configuration, monitoring, and tuning of SAP S/4HANA cloud/on-premise hybrid landscapes.
- Security & Compliance: Implement security best practices, role management, and compliance standards in cloud environments.
- Backup & Disaster Recovery: Define and execute cloud-based backup, DR, and high-availability strategies.
- Upgrade & Patch Management: Support SAP S/4HANA cloud updates, SAP kernel patches, and related components.
- Integration & Connectivity: Manage SAP Cloud Connector, SAP API Management, and BTP integrations.
- Performance Tuning: Monitor and optimize system performance using SAP cloud monitoring tools and SAP Solution Manager.
- Documentation & Governance: Maintain comprehensive documentation, system configurations, and compliance records.
- Cross-team Collaboration: Work with SAP functional teams, cloud operations, infrastructure teams, and business stakeholders.

Preferred:
- Experience with SAP S/4HANA Cloud, SAP BTP, SAP Cloud ALM, and hybrid cloud/on-premise landscapes.
- Knowledge of DevOps practices and CI/CD pipelines in an SAP context.
- Familiarity with migration tools like SAP Cloud Migration Cockpit.

Posted 5 days ago

Apply

0.0 - 6.0 years

10 - 15 Lacs

Gurugram, Haryana

On-site

Job Title: Trend Micro Support Engineer
Location: Gurgaon
Experience: 6 to 8 years
Note: Must have experience with Trend Micro Vision One (AV+EDR), TM Email Security, and TM CREM.

Key Responsibilities:
- Day-to-day monitoring, troubleshooting, and maintenance of Trend Micro products (Vision One, CREM, Email Security, etc.)
- Threat detection and remediation with complete ownership
- Regular policy review, re-configuration, and fine-tuning per industry best practices, along with OEM and Japan insights
- Regular patch and signature updates
- Coordination with the Trend Micro TAM and support team for critical and escalated issues, then applying the suggested fix
- Log review and daily/weekly/monthly report generation, shared with the respective operating companies; set up meetings for further explanation and action until remediation and final issue closure
- Maintain weekly status reports, incident and request resolution logs, and configuration change documentation

Required Skills & Qualifications:
- Certified and experienced in Trend Micro endpoint and server security solutions
- Minimum 6-8 years of relevant experience
- Exposure to handling mid-to-large enterprise environments

Preferred Certifications:
- Trend Micro Certified Professional for Vision One / XDR
- Trend Micro Certified Professional for CREM

Job Types: Full-time, Permanent
Pay: ₹1,000,000.00 - ₹1,500,000.00 per year
Experience:
- Trend Micro Security: 6 years (Required)
- Trend Micro Vision One (AV + EDR): 6 years (Required)
- Trend Micro Email Security: 6 years (Required)
- Trend Micro Cloud App Security (CREM): 6 years (Required)
Location: Gurgaon, Haryana (Required)
Work Location: In person

Posted 5 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

Work Level: Individual
Core: Responsible
Leadership: Team Alignment
Industry Type: Information Technology
Function: Database Administrator
Key Skills: mSQL, SQL Writing, PL/SQL
Education: Graduate

Note: This is a requirement for one of the Workassist hiring partners.

Primary Responsibility: Collect, clean, and analyze data from various sources; assist in creating dashboards, reports, and visualizations.

We are looking for a SQL Developer Intern to join our team remotely. As an intern, you will work with our database team to design, optimize, and maintain databases while gaining hands-on experience in SQL development. This is a great opportunity for someone eager to build a strong foundation in database management and data analysis.

Responsibilities
- Write, optimize, and maintain SQL queries, stored procedures, and functions.
- Assist in designing and managing relational databases.
- Perform data extraction, transformation, and loading (ETL) tasks.
- Ensure database integrity, security, and performance.
- Work with developers to integrate databases into applications.
- Support data analysis and reporting by writing complex queries.
- Document database structures, processes, and best practices.

Requirements
- Currently pursuing or recently completed a degree in Computer Science, Information Technology, or a related field.
- Strong understanding of SQL and relational database concepts.
- Experience with databases such as MySQL, PostgreSQL, SQL Server, or Oracle.
- Ability to write efficient and optimized SQL queries.
- Basic knowledge of indexing, stored procedures, and triggers.
- Understanding of database normalization and design principles.
- Good analytical and problem-solving skills.
- Ability to work independently and in a team in a remote setting.

Preferred Skills (Nice to Have)
- Experience with ETL processes and data warehousing.
- Knowledge of cloud-based databases (AWS RDS, Google BigQuery, Azure SQL).
- Familiarity with database performance tuning and indexing strategies.
- Exposure to Python or other scripting languages for database automation.
- Experience with business intelligence (BI) tools like Power BI or Tableau.

Company Description
Workassist is an online recruitment and employment solution platform based in Lucknow, India. We provide relevant profiles to employers and connect job seekers with the best opportunities across various industries. With a network of over 10,000+ recruiters, we help employers recruit talented individuals from sectors such as Banking & Finance, Consulting, Sales & Marketing, HR, IT, Operations, and Legal. We have adapted to the new normal and strive to provide a seamless job search experience for job seekers worldwide. Our goal is to enhance the job-seeking experience by leveraging technology and matching job seekers with the right employers. For a seamless job search experience, visit our website: https://bit.ly/3QBfBU2 (Note: There are many more opportunities apart from this on the portal. Depending on your skills, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
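The intern responsibilities in the posting above (optimized queries, indexing, ETL) can be tried end-to-end with Python's built-in sqlite3 module. This is a teaching sketch: the table, index, and data are invented, and the production databases the posting names (MySQL, PostgreSQL, SQL Server, Oracle) each have their own dialect details.

```python
import sqlite3

# In-memory database: load a tiny dataset, add an index, run an
# aggregate query with a bound parameter.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, amount REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer, amount) VALUES (?, ?)",
    [("asha", 120.0), ("ravi", 80.0), ("asha", 45.5)],
)
# An index on the filter column lets the query planner avoid a full scan
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer)")

total = conn.execute(
    "SELECT SUM(amount) FROM orders WHERE customer = ?", ("asha",)
).fetchone()[0]
```

Parameter binding (`?`) rather than string formatting is the habit worth building early: it prevents SQL injection and lets the database cache the query plan.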

Posted 5 days ago

Apply

0.0 - 7.0 years

15 - 20 Lacs

Pune, Maharashtra

On-site

Position Title: Senior Developer / Lead Developer
Primary Skills: C#, Entity Framework, ASP.NET Core API, AWS Cloud (SQS, SNS, Lambda, Step Functions, ECS, ECR, EventBridge), MySQL, Git
Secondary Skills: ReactJS, Elasticsearch, SqlKata, Octopus Deploy, Jenkins, CodeBuild, CodePipeline
Experience: 7 to 10 years

Required Knowledge:
1. Strong understanding of object-oriented programming.
2. Solid development experience in C#, MVC, Entity Framework, ASP.NET Core API, MySQL, database tuning/query optimization, and AWS (SQS, SNS, Lambda, Route 53, EventBridge, Step Functions, ECS, ECR, RDS, DMS).
3. Must have basic knowledge of the React framework.
4. Nice to have: Elasticsearch, Octopus Deploy, Jenkins, CodeBuild/CodePipeline workflow.
5. Strong troubleshooting skills.
6. Strong understanding of and experience with MySQL: creating stored procedures, complex queries, and troubleshooting.
7. Good experience using source control systems: Git.
8. Strong understanding of microservice-based architecture.
9. A natural communicator who can explain technical concepts in clear, plain English (both written and verbal).
10. Should be able to explain at least one of his/her development assignments with reasoning.

Job Type: Full-time
Pay: ₹1,500,000.00 - ₹2,000,000.00 per year
Benefits: Health insurance, Provident Fund
Experience: .NET developer: 7 years (Required)
Location: Pune, Maharashtra (Required)
Work Location: In person

Posted 5 days ago

Apply

0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

About The Company
TSC redefines connectivity with innovation and intelligence. Driving the next level of intelligence powered by Cloud, Mobility, Internet of Things, Collaboration, Security, Media services, and Network services, we at Tata Communications are envisaging a new world of communications.

Job Description
Responsible for managing customer queries related to all services and solutions delivered, including diagnosing and resolving complex technical issues in the Cloud & Security domain. The role acts as a conduit between customers and other teams, such as engineering and architecture, for issue resolution. This is an operational role responsible for delivering results that have a direct impact on day-to-day operations, capable of instructing professional or technical staff and reviewing the quality of their work.

Responsibilities
- Technical administration and troubleshooting to ensure the efficient functioning of the solution.
- Incident validation, incident analysis, and solution recommendation.
- Assist with the development, revision, and maintenance of standard operating procedures and working instructions.
- Act as a point of escalation for Level-1 customer service analysts.
- Coordinate with IT teams on escalations, tracking, performance issues, and outages.
- Prepare monthly executive summary reports for managed clients and continuously improve their content and presentation.
- Provide recommendations for tuning and optimization of systems, processes, procedures, and policies.
- Maintain an inventory of the procedures used by the operations team; regularly evaluate them and add, remove, and update procedures as appropriate.
- Publish weekly and monthly reports on customer service operations activity.

Desired Skill Sets
- Good knowledge of implementation, installation, integration, troubleshooting, and overall functionality.
- Experience troubleshooting platform-related issues, data backup, restoration, and retention.
- Maintains awareness of the latest technologies in the domain.

Posted 5 days ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Velotio Technologies is a product engineering company working with innovative startups and enterprises. We are certified and recognized as one of the best companies to work for in India. We have provided full-stack product development for 110+ startups across the globe, building products in the cloud-native, data engineering, B2B SaaS, IoT & Machine Learning space. Our team of 400+ elite software engineers solves hard technical problems while transforming customer ideas into successful products. About the Role: We are seeking an experienced Infrastructure Site Reliability Engineer (SRE) to join our team. This role is critical for ensuring the reliability, scalability, and performance of our infrastructure, particularly in managing and optimizing high-throughput data systems. You will work closely with engineering teams to design, implement, and maintain robust infrastructure solutions that meet our growing needs. As an Infrastructure SRE, you will be at the forefront of managing and optimizing our Kafka and OpenSearch clusters, AWS services, and multi-cloud environments. Your expertise will be key in ensuring the smooth operation of our infrastructure, enabling us to deliver high-performance and reliable services. This is an exciting opportunity to contribute to a dynamic team that is shaping the future of data observability and orchestration pipelines. Requirements Responsibilities- Kafka Management: Set up, manage, and scale Kafka clusters, including implementing and optimizing Kafka Streams and Connect for seamless data integration. Fine-tune Kafka brokers and optimize producer/consumer configurations to ensure peak performance OpenSearch Expertise: Configure and manage OpenSearch clusters, optimizing indexing strategies and query performance. Ensure high availability and fault tolerance through effective data replication and sharding.
Set up monitoring and alerting systems to track cluster health AWS Services Proficiency: Manage AWS RDS instances, including provisioning, configuration, and scaling. Optimize database performance and ensure robust backup and recovery strategies. Deploy, manage, and scale Kubernetes clusters on AWS EKS, configuring networking and security policies, and integrating EKS with CI/CD pipelines for automated deployment Multi-Cloud Environment Management: Design and manage infrastructure across multiple cloud providers, ensuring seamless cloud networking and security. Implement disaster recovery strategies and optimize costs in a multi-cloud setup Linux Administration: Optimize Linux server performance, manage system resources, and automate processes using shell scripting. Apply best practices for security hardening and troubleshoot Linux-related issues effectively CI/CD Automation: Design and manage CI/CD pipelines using tools like Jenkins, GitLab CI, or CircleCI, and ArgoCD. Automate deployment processes, integrate with version control systems, and implement advanced deployment strategies like blue-green deployments, canary releases, and rolling updates. 
Ensure security and compliance within CI/CD processes Qualification- Bachelor's, Master's, or Doctorate in Computer Science or a related field Deep knowledge of Kafka, with hands-on experience in cluster setup, management, and performance tuning Expertise in OpenSearch cluster management, indexing, query optimization, and monitoring Proficiency with AWS services, particularly RDS and EKS, including experience in database management, performance tuning, and Kubernetes deployment Experience in managing multi-cloud environments, with a strong understanding of cloud networking, security, and cost optimization strategies Strong background in Linux administration, including system performance tuning, shell scripting, and security hardening Proficiency with CI/CD automation tools and best practices, with a focus on secure and compliant pipeline management Strong analytical and problem-solving skills, essential for troubleshooting complex technical challenges Benefits Our Culture : We have an autonomous and empowered work culture encouraging individuals to take ownership and grow quickly Flat hierarchy with fast decision making and a startup-oriented "get things done" culture A strong, fun, and positive environment with regular celebrations of our success. We pride ourselves in creating an inclusive, diverse, and authentic environment At Velotio, we embrace diversity. Inclusion is a priority for us, and we are eager to foster an environment where everyone feels valued. We welcome applications regardless of ethnicity or cultural background, age, gender, nationality, religion, disability or sexual orientation.
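The SRE role above centers on tuning Kafka producer/consumer configurations. One idea worth having at your fingertips is keyed partition assignment, which is what preserves per-key message ordering. The sketch below is a pure-Python stand-in for illustration only: real Kafka clients hash keys with murmur2, and crc32 here is just a convenient stdlib substitute.

```python
import zlib

# Sketch of keyed partition assignment, the idea behind Kafka's default
# partitioner: messages with the same key always land on the same
# partition, preserving per-key ordering. Real Kafka clients use
# murmur2; crc32 (stdlib) is a stand-in for illustration only.
def assign_partition(key: bytes, num_partitions: int) -> int:
    return zlib.crc32(key) % num_partitions

NUM_PARTITIONS = 6
events = [b"user-17", b"user-42", b"user-17", b"user-99", b"user-42"]
placement = [assign_partition(k, NUM_PARTITIONS) for k in events]

# Same key -> same partition, so each user's events stay ordered.
assert placement[0] == placement[2]
assert placement[1] == placement[4]
print(placement)
```

This is also why repartitioning a topic is disruptive: changing `num_partitions` changes where existing keys map, which breaks the per-key ordering guarantee for in-flight data.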

Posted 5 days ago

Apply

3.0 years

0 Lacs

New Delhi, Delhi, India

Remote

Company Description: Cloudologic is a prominent cloud consulting and IT service provider based in Singapore with roots in India. The company specializes in cloud operations, cyber security, and managed services. With a decade of experience, Cloudologic is known for delivering high-quality services globally and is recognized as a trusted partner in the tech industry. Role Description: This is a full-time remote role for an Ansible Engineer located in New Delhi. The Ansible Engineer will be responsible for day-to-day tasks related to computer science, back-end web development, software development, programming, and object-oriented programming (OOP). We are looking for a highly skilled Ansible Engineer with strong Linux expertise to join our infrastructure and automation team. The ideal candidate will be responsible for automating server provisioning, configuration management, and deployment tasks using Ansible in complex Linux environments. You will help drive infrastructure automation, scalability, and operational efficiency across our platforms. Key Responsibilities: Develop, manage, and maintain Ansible playbooks and roles for automating Linux system configurations, deployments, and patching. Perform Linux system administration tasks including setup, tuning, troubleshooting, and performance monitoring. Automate repetitive tasks and enforce configuration consistency across environments. Collaborate with DevOps, Security, and Development teams to streamline infrastructure workflows. Design and implement scalable, secure, and fault-tolerant systems in Linux-based environments. Integrate Ansible automation with CI/CD tools such as Jenkins, GitLab CI, or Azure DevOps. Use Ansible Tower or AWX for orchestration, role-based access control, and reporting. Maintain detailed documentation for system configurations and automation standards. Participate in incident response and root cause analysis related to configuration and system issues.
Requirements 3+ years of hands-on experience with Linux system administration (Red Hat, CentOS, Ubuntu, etc.). 2+ years of experience working with Ansible (playbooks, roles, modules, templates). Proficient in writing shell scripts (Bash) and basic scripting in Python. Deep understanding of system services (systemd, networking, file systems, firewalls). Familiarity with Git and version control workflows. Experience with virtualization and cloud platforms (AWS, Azure, or GCP) is a plus. Knowledge of infrastructure security and hardening Linux environments. Strong troubleshooting, diagnostic, and problem-solving skills.
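The Ansible role above is built around idempotent configuration management. As a rough sketch of the contract behind modules like `lineinfile`, the hypothetical `ensure_line` below applies a change only when the current state differs from the desired state, and reports whether anything changed; it is not Ansible's API, just the underlying idea.

```python
from pathlib import Path
import tempfile

# Sketch of the idempotency contract behind Ansible modules such as
# lineinfile: ensure a desired line exists in a file, change nothing
# if it is already present, and report whether a change was made.
# The function name and behavior are illustrative, not Ansible's API.
def ensure_line(path: Path, line: str) -> bool:
    existing = path.read_text().splitlines() if path.exists() else []
    if line in existing:
        return False                      # already converged: no change
    path.write_text("\n".join(existing + [line]) + "\n")
    return True                           # state was changed

tmp = Path(tempfile.mkdtemp()) / "sshd_config"
first = ensure_line(tmp, "PermitRootLogin no")   # True: line added
second = ensure_line(tmp, "PermitRootLogin no")  # False: nothing to do
print(first, second)
```

Running the same "task" twice converges to the same state, which is why Ansible playbooks can be re-run safely across a fleet.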

Posted 5 days ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Oracle Retail Technical Consultant: Location: Noida / Hyderabad only. Looking for an expert in Oracle Retail solutions such as RMS, ReSA, RPM, RIB, and POS. The candidate should have hands-on experience building scalable technical solutions and integrating core Oracle Retail modules. Skills Required: Oracle Retail developer who can troubleshoot and resolve technical issues in RMS, ReIM, ReSA, RPM, and POS. Strong skills in SQL, PL/SQL, Oracle Forms/Reports, and shell scripting. Knowledge of Oracle DB performance tuning and debugging. Exposure to Oracle SOA Suite or MuleSoft. In-depth knowledge of the RMS job cycle and batch schedule. Familiarity with POS/inventory systems. Programming expertise in Java and Kafka. Ability to integrate Oracle Retail modules with external systems using APIs or middleware. Knowledge of Oracle Cloud Infrastructure (OCI). Experience implementing and configuring Oracle RMS. Proficient with GitHub & JIRA. Solid foundation in CI/CD, automated testing, and source control.

Posted 5 days ago

Apply

6.0 years

0 Lacs

Jaipur, Rajasthan, India

On-site

JOB DESCRIPTION: DATA ENGINEER (Databricks & AWS) Overview: As a Data Engineer, you will work with multiple teams to deliver solutions on the AWS Cloud using core cloud data engineering tools such as Databricks on AWS, AWS Glue, Amazon Redshift, Athena, and other Big Data-related technologies. This role focuses on building the next generation of application-level data platforms and improving recent implementations. Hands-on experience with Apache Spark (PySpark, SparkSQL), Delta Lake, Iceberg, and Databricks is essential. Locations: Jaipur, Pune, Hyderabad, Bangalore, Noida. Responsibilities: • Define, design, develop, and test software components/applications using AWS-native data services: Databricks on AWS, AWS Glue, Amazon S3, Amazon Redshift, Athena, AWS Lambda, Secrets Manager • Build and maintain ETL/ELT pipelines for both batch and streaming data. • Work with structured and unstructured datasets at scale. • Apply Data Modeling principles and advanced SQL techniques. • Implement and manage pipelines using Apache Spark (PySpark, SparkSQL) and Delta Lake/Iceberg formats. • Collaborate with product teams to understand requirements and deliver optimized data solutions. • Utilize CI/CD pipelines with DBX and AWS for continuous delivery and deployment of Databricks code. • Work independently with minimal supervision and strong ownership of deliverables. Must Have: • 6+ years of experience in Data Engineering on AWS Cloud. • Hands-on expertise in: o Apache Spark (PySpark, SparkSQL) o Delta Lake / Iceberg formats o Databricks on AWS o AWS Glue, Amazon Athena, Amazon Redshift • Strong SQL skills and performance tuning experience on large datasets. • Good understanding of CI/CD pipelines, especially using DBX and AWS tools. • Experience with environment setup, cluster management, user roles, and authentication in Databricks. • Certified as a Databricks Certified Data Engineer – Professional (mandatory). 
Good To Have: • Experience migrating ETL pipelines from on-premise or other clouds to AWS Databricks. • Experience with Databricks ML or Spark 3.x upgrades. • Familiarity with Airflow, Step Functions, or other orchestration tools. • Experience integrating Databricks with AWS services in a secured, production-ready environment. • Experience with monitoring and cost optimization in AWS. Key Skills: • Languages: Python, SQL, PySpark • Big Data Tools: Apache Spark, Delta Lake, Iceberg • Databricks on AWS • AWS Services: AWS Glue, Athena, Redshift, Lambda, S3, Secrets Manager • Version Control & CI/CD: Git, DBX, AWS CodePipeline/CodeBuild • Other: Data Modeling, ETL Methodology, Performance Optimization
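The data-engineering role above involves ETL/ELT pipelines on Delta Lake, where a recurring building block is the idempotent upsert (exposed as MERGE INTO in Delta/Databricks SQL). The sketch below illustrates the same pattern with stdlib sqlite3's ON CONFLICT clause; table and column names are invented for the example.

```python
import sqlite3

# ELT upsert pattern: merge a batch of incoming records into a target
# table, updating existing keys and inserting new ones. Delta Lake
# exposes this as MERGE INTO; sqlite3's ON CONFLICT clause (stdlib)
# illustrates the same idea. Table and column names are made up.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'old@example.com')")

incoming = [(1, "new@example.com"), (2, "fresh@example.com")]
conn.executemany(
    """INSERT INTO customers (id, email) VALUES (?, ?)
       ON CONFLICT(id) DO UPDATE SET email = excluded.email""",
    incoming,
)

rows = sorted(conn.execute("SELECT id, email FROM customers"))
print(rows)  # id 1 updated in place, id 2 inserted
```

Because the merge is keyed, replaying the same batch leaves the table unchanged, which is what makes retry-safe batch and streaming pipelines possible.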

Posted 5 days ago

Apply

5.0 years

0 Lacs

India

Remote

About the Client: Our client is one of the world's fastest-growing AI companies, accelerating the advancement and deployment of powerful AI systems. It helps customers in two ways: working with the world's leading AI labs to advance frontier model capabilities in thinking, reasoning, coding, agentic behavior, multimodality, multilinguality, STEM, and frontier knowledge; and leveraging that work to build real-world AI systems that solve mission-critical priorities for companies. Powering this growth is our client's talent cloud, an AI-vetted pool of 4M+ software engineers, data scientists, and STEM experts who can train models and build AI applications. All of this is orchestrated by ALAN, the client's AI-powered platform for matching and managing talent and generating high-quality human and synthetic data to improve model performance. ALAN also accelerates workflows for model and agent evals, supervised fine-tuning, reinforcement learning, reinforcement learning with human feedback, preference-pair generation, benchmarking, and data capture for pre-training, post-training, and building AI applications. Job Title: Pascal/Delphi Developer Location: Pan India Experience: 5+ years Employment Type: Contract to hire Work Mode: Remote Notice Period: Immediate joiners Requirements: 4+ years of professional experience in Pascal or Delphi development. Strong understanding of procedural programming paradigms, type systems, and BEGIN…END structured blocks. Proven debugging, profiling, and performance-tuning skills in Pascal applications. Solid grasp of Git, version-control workflows, CI/CD processes, and testing best practices. Excellent written and verbal communication skills in English.

Posted 5 days ago

Apply

12.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Position : Power BI Architect Location : Hyderabad, Telangana, India Experience : 8–12 years Role Overview You will architect and deliver end‑to‑end enterprise BI solutions. This includes data ingestion, transformation, modelling, and dashboard/report development with Power BI. You will collaborate closely with stakeholders and lead junior team members to ensure high‑quality insights at scale. Key Responsibilities Architecture & Design Design scalable BI architectures, including semantic layers, data models, ETL/ELT pipelines, dashboards, and embedded analytics platforms. Data Integration & ETL Ingest, transform, and cleanse data from multiple sources (SQL, Oracle, Azure Synapse/Data Lake/Fabric, AWS services). Modeling & Query Optimization Build robust data models; write optimized DAX expressions and Power Query M code; apply performance tuning best practices. Solution Delivery Develop reports and dashboards using Power BI Desktop and Service, implement row-level/object-level security (RLS/OLS), capacity planning, and self-service BI frameworks. Cross-Platform Competency Collaborate with teams using MicroStrategy and Tableau; advise on best‑fit tools where relevant. Governance, Documentation & Quality Maintain data dictionaries, metadata, source‑to‑target mappings; support data governance initiatives. Leadership & Stakeholder Management Manage small to mid-sized developer teams, mentor juniors, engage with business users, and support pre-sales or proposal efforts. Required Qualifications & Skills Bachelor’s/Master’s degree in CS, IT, or related field. 8–12 years overall, with 5+ years of hands‑on Power BI architecture and development experience. Deep proficiency with Power BI Desktop & Service, DAX, Power Query (M), SQL/SSIS, and OLAP/tabular modeling. Strong experience in Azure frameworks such as Synapse, Fabric, and cloud-based ETL/data pipelines; AWS exposure is a plus. Experience with Tableau/MicroStrategy or other BI tools. 
Familiarity with Python or R for data transformations or analytics. Certification like Microsoft Certified: Power BI/Data Analyst Associate preferred. Excellent verbal and written communication skills; stakeholder-facing experience mandatory.

Posted 5 days ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About the Client: Our client is a global IT services company headquartered in Southborough, Massachusetts, USA. Founded in 1996, with revenue of $1.8B and 35,000+ associates worldwide, it is a digital engineering and IT services company helping clients modernize their technology infrastructure, adopt cloud and AI solutions, and accelerate innovation. It partners with major firms in banking, healthcare, telecom, and media. Our client is known for combining deep industry expertise with agile development practices, enabling scalable and cost-effective digital transformation. The company operates in over 50 locations across more than 25 countries, has delivery centers in Asia, Europe, and North America, and is backed by Baring Private Equity Asia. Job Title: Power BI and Microsoft Fabric Key Skills: Power BI development, Microsoft Fabric, Python, DBA, Data Factory, MS SQL development with T-SQL Job Location: Hyderabad Experience: 7 - 9 years Education Qualification: Any graduation Work Mode: Hybrid Employment Type: Contract Notice Period: Immediate - 10 days Payroll: People Prime Worldwide Job Description: Senior-level person who is familiar with Microsoft Fabric and Power BI development. Would like to see Azure Function Apps using Python and Data Factory experience. Production-level DBA experience is not expected, but the candidate must be a strong MS SQL developer with T-SQL, data modelling, and performance-tuning skills. Good communication; experience working directly with client teams. Experience working in a Data Factory environment. Hands-on experience in the required technical skill areas. Should be self-organized and able to work independently with minimum supervision. Should be able to work collaboratively with client and vendor teams.

Posted 5 days ago

Apply

0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Role Description Role Proficiency: Independently develops error-free code with high-quality validation of applications, guides other developers, and assists Lead 1 – Software Engineering. Skill Set: Frontend: Angular (v8 or higher) Backend: .NET (C#, ASP.NET Core) Database: Oracle or MySQL (SQL proficiency) API: RESTful services development and integration Version Control: Git or Bitbucket Unit Test Framework (xUnit/NUnit): experience in writing unit tests for backend code. Responsibilities: Design, develop, and maintain web applications using Angular and .NET. Build RESTful APIs to support frontend and external system integration. Work with Oracle or MySQL databases for data storage and queries. Collaborate with cross-functional teams to understand requirements and deliver quality solutions. Perform unit testing, integration testing, and support UAT. Deploy and monitor applications on cloud environments (AWS/Azure) as needed. Participate in code reviews and follow best practices in software development. Contribute to performance tuning and application security enhancements. Skills: .NET, full stack, Angular 8

Posted 5 days ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

At Airtel, we're not just scaling connectivity; we're redefining how India experiences digital services. With 400M+ customers across telecom, financial services, and entertainment, our impact is massive. But behind every experience is an opportunity to make it smarter. We're looking for a Product Manager – AI to drive next-gen intelligence for our customers and business. AI is a transformational technology, and we are looking for skilled product managers who will work on leveraging AI to power everything from our digital platforms to customer experience. You'll work at the intersection of machine learning, product design, and systems thinking to deliver AI-driven products that create tangible business impact, fast. What You'll Do: Lead and contribute to AI-powered product strategy: define product vision and strategy for AI-led initiatives that enhance productivity, automate decisions, and personalise user interactions across Airtel platforms. Translate business problems into AI opportunities: partner with operations, engineering, and data science to surface high-leverage AI use cases across workforce management, customer experience, and process automation. Build and scale ML-driven products: define data product requirements, work closely with ML engineers to develop models, and integrate intelligent workflows that continuously learn and adapt. Own product execution end-to-end: drive roadmaps, lead cross-functional teams, launch MVPs, iterate based on real-world feedback, and scale solutions with measurable ROI. What You Need to Be Successful: Influential communication: craft clarity from complexity. You can tailor messages for execs, engineers, and field teams alike, translating AI into business value. Strategic prioritisation: balance business urgency with technical feasibility.
You can decide what not to build, and defend those decisions with data and a narrative. Systems thinking: you see the big picture and how decisions in one area ripple across the business, tech stack, and user experience. High ownership and accountability: operate with a founder mindset. You don't wait for direction; you rally teams, remove blockers, deal with tough stakeholders, and drive outcomes. Adaptability: you thrive in ambiguity and pivot quickly without losing sight of the long-term vision, key in fast-moving digital organizations. Skills You'll Need: AI/ML fundamentals: understanding of ML model types (supervised, unsupervised, reinforcement learning); common algorithms (linear/logistic regression, decision trees, clustering, neural networks); the model lifecycle (training, validation, testing, tuning, deployment, monitoring); understanding of LLMs, transformers, diffusion models, vector search, etc.; familiarity with GenAI product architecture (retrieval-augmented generation (RAG), prompt tuning, fine-tuning); awareness of real-time personalization, recommendation systems, ranking algorithms, etc. Data fluency: understanding of data pipelines; working knowledge of SQL and Python for analysis; understanding of data annotation, labeling, and versioning; ability to define data requirements and assess data readiness. AI product development: defining ML problem scope (classification vs. regression vs. ranking vs. generation); model evaluation metrics (precision, recall, etc.); A/B testing and online experimentation for ML-driven experiences. ML infrastructure awareness: know what it takes to make things work, including model deployment techniques (batch vs. real-time inference, APIs, model serving); monitoring and drift detection (how to ensure models continue performing over time); familiarity with ML platforms/tools such as TensorFlow, PyTorch, Hugging Face, Vertex AI, SageMaker, etc.
(at a product level) Understanding of the latency, cost, and resource implications of ML choices. AI ethics and safety: we care deeply about our customers, their privacy, and compliance with regulation. Understand bias and fairness in models and how to detect and mitigate them; explainability and transparency and their importance for user trust and regulation; privacy and security, including the implications of sensitive or PII data in AI; alignment and guardrails in generative AI systems. Preferred Qualifications: Experienced machine learning/artificial intelligence PMs. Experience building 0-1 products, scaled platform/ecosystem products, or e-commerce. Bachelor's degree in Computer Science, Engineering, Information Systems, Analytics, or Mathematics; Master's degree in Business. Why Airtel Digital? Massive scale: your products will impact 400M+ users across sectors. Real-world relevance: solve meaningful problems for our customers, including protecting customers, spam and fraud prevention, personalised experiences, and connecting homes. Agility meets ambition: work like a startup with the resources of a telecom giant. AI that ships: we don't just run experiments; we deploy models and measure real-world outcomes. Leadership access: collaborate closely with CXOs and gain mentorship from India's top product and tech leaders.
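The skills list above names model evaluation metrics such as precision and recall. A minimal worked example (with made-up labels) shows how they fall out of the confusion matrix:

```python
# Precision and recall from binary predictions, the evaluation metrics
# the listing expects PMs to reason about. Pure stdlib; the labels
# below are illustrative.
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives

precision = tp / (tp + fp)  # of everything flagged positive, how much was right
recall = tp / (tp + fn)     # of all actual positives, how much was found

print(precision, recall)  # 0.75 0.75
```

The product trade-off lives in which error matters more: a spam filter may tolerate lower recall to keep precision high, while fraud detection often accepts more false positives to avoid missing real fraud.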

Posted 5 days ago

Apply

3.0 years

0 Lacs

Greater Delhi Area

On-site

Job Title: AI/LLM Developer Location: Subharti University, Meerut Department: IT Department Experience: 1–3 years (Freshers with strong AI/ML project work may also apply) Role Overview: We are seeking a motivated and technically skilled AI Developer with experience in working on Large Language Models (LLMs) and AI-driven applications. The ideal candidate will play a key role in designing, developing, fine-tuning, and deploying LLM-based solutions for internal projects such as smart assistants, document Q&A systems, and ERP automation using AI. Key Responsibilities: Work with LLMs (e.g., GPT, Claude, Gemini, LLaMA) to develop AI-driven features and tools. Build document-grounded Q&A systems , chatbots , and retrieval-augmented generation (RAG) pipelines. Fine-tune or use pre-trained LLMs via APIs or open-source frameworks . Integrate LLM features into existing systems such as ERP, knowledge bases, or student services. Collaborate with software and research teams to translate functional needs into AI capabilities. Maintain documentation and ensure data security, privacy, and ethical usage of AI. Stay updated with the latest trends in LLMs, NLP, and generative AI tools. Required Skills: Strong understanding of Natural Language Processing (NLP) and LLMs . Hands-on experience with OpenAI (GPT), Hugging Face, LangChain, LlamaIndex, or similar tools . Experience using APIs for LLMs (OpenAI, Anthropic, Google Gemini, etc.) Knowledge of Python and relevant libraries (Transformers, LangChain, etc.) Familiarity with vector databases (like FAISS, Pinecone, Chroma) is a plus. Basic understanding of REST APIs, backend integration, and software development practices. Qualification: Bachelor’s or Master’s degree in Computer Science, AI, Data Science, or related field . Certification in AI/ML, NLP, or LLMs (preferred but not mandatory). Demonstrated AI/NLP project work (internship, GitHub, or academic research). 
Desirable Traits: Creative problem solver with curiosity in emerging AI technologies. Strong communication and documentation skills. Self-driven and able to work independently and in a team.
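The role above centers on retrieval-augmented generation (RAG) pipelines backed by vector databases. At the core of retrieval is ranking stored document embeddings by cosine similarity to the query embedding. The toy sketch below uses made-up 3-dimensional vectors in place of real learned embeddings, which FAISS, Pinecone, or Chroma would index at scale.

```python
import math

# Core retrieval step of a RAG pipeline: rank stored document vectors
# by cosine similarity to a query vector, then pass the top hits to
# the LLM as grounding context. Toy 3-d vectors stand in for real
# learned embeddings; the document names are invented.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

docs = {
    "fee_schedule.pdf": [0.9, 0.1, 0.0],
    "admissions_faq.md": [0.1, 0.8, 0.2],
    "campus_map.png":    [0.0, 0.1, 0.9],
}
query = [0.85, 0.2, 0.05]  # e.g. an embedding of "what are the tuition fees?"

ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
top_k = ranked[:2]  # context chunks passed into the LLM prompt
print(top_k)
```

In a real pipeline the same idea repeats at scale: embed the query with the same model used for the documents, retrieve the top-k nearest chunks from the vector store, and ground the LLM's answer in those chunks.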

Posted 5 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies