Jobs
Interviews

21457 NoSQL Jobs - Page 47

Set up a job alert
JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Title: S&C Global Network - AI - CDP - Marketing Analytics - Analyst. Management Level: 11 - Analyst. Location: Bengaluru, BDC7C. Must-have skills: Data Analytics. Good-to-have skills: Ability to leverage design thinking, business process optimization, and stakeholder management skills. Job Summary: This role involves driving strategic initiatives, managing business transformations, and leveraging industry expertise to create value-driven solutions. Roles & Responsibilities: Provide strategic advisory services, conduct market research, and develop data-driven recommendations to enhance business performance. WHAT'S IN IT FOR YOU? As part of our Analytics practice, you will join a worldwide network of over 20,000 smart and driven colleagues experienced in leading AI/ML/statistical tools, methods and applications. From data to analytics and insights to actions, our forward-thinking consultants provide analytically informed, issue-based insights at scale to help our clients improve outcomes and achieve high performance. What You Would Do In This Role: A Consultant/Manager for Customer Data Platforms serves as the day-to-day marketing technology point of contact and helps our clients get value out of their investment in a Customer Data Platform (CDP) by developing a strategic roadmap focused on personalized activation. You will be working with a multidisciplinary team of Solution Architects, Data Engineers, Data Scientists, and Digital Marketers. Key Duties and Responsibilities:
- Be a platform expert in one or more leading CDP solutions, with developer-level expertise in Lytics, Segment, Adobe Experience Platform, Amperity, Tealium, Treasure Data, etc., including custom-built CDPs.
- Deep developer-level expertise in real-time event tracking for web analytics, e.g., Google Tag Manager, Adobe Launch, etc.
- Provide deep domain expertise in our client's business and broad knowledge of digital marketing, together with a Marketing Strategist.
- Deep expert-level knowledge of GA360/GA4, Adobe Analytics, Google Ads, DV360, Campaign Manager, Facebook Ads Manager, The Trade Desk, etc.
- Assess and audit the current state of a client's marketing technology stack (MarTech), including data infrastructure, ad platforms and data security policies, together with a Solutions Architect.
- Conduct stakeholder interviews and gather business requirements.
- Translate business requirements into BRDs and CDP customer analytics use cases, and structure the technical solution.
- Prioritize CDP use cases together with the client.
- Create a strategic CDP roadmap focused on data-driven marketing activation.
- Work with the Solution Architect to strategize, architect, and document a scalable CDP implementation tailored to the client's needs.
- Provide hands-on support and platform training for our clients.
- Data processing, data engineering, and data schema/model expertise for CDPs, working on data models, unification logic, etc.
- Work with Business Analysts, Data Architects, Technical Architects and DBAs to achieve project objectives: delivery dates, quality objectives, etc.
- Business intelligence expertise for insights and actionable recommendations.
- Project management expertise for sprint planning.
Professional & Technical Skills:
- Relevant experience in the required domain.
- Strong analytical, problem-solving, and communication skills.
- Ability to work in a fast-paced, dynamic environment.
- Strong understanding of data governance and compliance (e.g., PII, PHI, GDPR, CCPA).
- Experience with analytics tools like Google Analytics or Adobe Analytics is a plus.
- Experience with A/B testing tools is a plus.
- Must have programming experience in PySpark, Python, and shell scripts.
- RDBMS, T-SQL, and NoSQL experience is a must.
- Manage large volumes of structured and unstructured data; extract and clean data to make it amenable for analysis.
- Experience in deploying and operationalizing code is an added advantage.
- Experience with source control systems such as Git and Bitbucket, and with Jenkins builds and continuous integration tools.
- Proficient in Excel, MS Word, PowerPoint, etc.
Technical Skills:
- Experience with any CDP platform, e.g., Lytics CDP developer, and/or Segment CDP developer, and/or Adobe Experience Platform (Real-Time CDP) developer, and/or custom CDP developer on any cloud.
- GA4/GA360 and/or Adobe Analytics.
- Google Tag Manager, and/or Adobe Launch, and/or any tag management tool.
- Google Ads, DV360, Campaign Manager, Facebook Ads Manager, The Trade Desk, etc.
- Deep cloud experience (GCP, AWS, Azure).
- Advanced-level Python, SQL, and shell scripting experience.
- Data migration, DevOps, MLOps, and Terraform scripting.
Soft Skills: Strong problem-solving skills. Good team player. Attention to detail. Good communication skills. Additional Information: Opportunity to work on innovative projects. Career growth and leadership exposure. About Our Company | Accenture
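For context, the PySpark data-extraction and cleaning work described above might look something like the following minimal sketch; the S3 paths and column names are hypothetical and not taken from the posting.

```python
# Minimal PySpark sketch: load raw event data, clean it, and write it back out.
# File paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("cdp-event-cleaning").getOrCreate()

raw = spark.read.option("header", True).csv("s3://example-bucket/raw/web_events.csv")

cleaned = (
    raw.dropDuplicates(["event_id"])                       # remove duplicate events
       .filter(F.col("user_id").isNotNull())               # drop rows missing a user key
       .withColumn("event_ts", F.to_timestamp("event_ts")) # normalize timestamps
       .withColumn("channel", F.lower(F.trim("channel")))  # standardize categorical text
)

cleaned.write.mode("overwrite").parquet("s3://example-bucket/clean/web_events/")
```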

Posted 4 days ago

Apply

3.0 - 7.0 years

0 Lacs

Kerala

On-site

Job Description: We are looking for a skilled Full Stack Software Engineer who is passionate about innovation and solving real-world problems using technology. As a Full Stack Software Engineer at our fast-growing technology company, you will be responsible for developing cutting-edge products in the fields of IoT, AI, and embedded systems. You will work on building scalable web platforms, integrating AI models into production environments, and collaborating with cross-functional teams to deliver innovative products. Key Responsibilities: - Develop and maintain full stack web applications (frontend + backend) - Build scalable RESTful APIs and integrate AI/ML models into live systems - Optimize performance and scalability of systems - Maintain clear documentation and write clean, maintainable code Required Skills: - Proficiency in Python (FastAPI, Flask, or Django) - Experience with frontend frameworks like React, Vue.js, or Angular - Solid understanding of HTML, CSS, JavaScript, and REST APIs - Knowledge of AI/ML libraries (e.g., TensorFlow, PyTorch, Scikit-learn) - Experience with SQL/NoSQL databases (e.g., PostgreSQL, MongoDB) - Familiarity with version control (Git) and Agile development practices Job Type: Full-time Schedule: Day shift Work Location: In person,
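As a rough illustration of the Python REST work described above (FastAPI is one of the frameworks listed), here is a minimal sketch; the Device model, routes, and in-memory store are illustrative only and not part of the posting.

```python
# Minimal FastAPI sketch of a RESTful endpoint; route and model are illustrative.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Example Device API")

class Device(BaseModel):
    id: int
    name: str
    status: str = "offline"

_devices: dict[int, Device] = {}  # in-memory store standing in for a real database

@app.post("/devices", response_model=Device)
def create_device(device: Device) -> Device:
    _devices[device.id] = device
    return device

@app.get("/devices/{device_id}", response_model=Device)
def read_device(device_id: int) -> Device:
    if device_id not in _devices:
        raise HTTPException(status_code=404, detail="Device not found")
    return _devices[device_id]

# If saved as main.py, run locally with: uvicorn main:app --reload
```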

Posted 4 days ago

Apply

0.0 - 1.0 years

0 - 0 Lacs

Sholinganallur, Chennai, Tamil Nadu

On-site

Role Overview: We are looking for a motivated Python/Django Developer Intern to join our backend engineering team. You will be responsible for developing and maintaining the server-side logic, APIs, and database structures that power our telemedicine platform. This is a hands-on role ideal for recent graduates looking to gain practical experience in backend development within a healthcare-focused environment. Key Responsibilities:
- Develop, maintain, and optimize backend services using Python and Django.
- Design and implement RESTful APIs for patient, doctor, and admin portals.
- Work with PostgreSQL/MongoDB for secure and efficient data storage.
- Integrate third-party APIs for video/audio consultations, AI modules, and healthcare data sources.
- Implement authentication, authorization, and data encryption for HIPAA compliance.
- Collaborate with frontend (React) and mobile app development teams.
- Debug and resolve technical issues to ensure platform stability.
- Participate in sprint planning, daily stand-ups, and code reviews.
Required Skills & Qualifications:
- Basic knowledge of Python and the Django framework.
- Understanding of REST API design and JSON data handling.
- Familiarity with relational databases (PostgreSQL/MySQL) and/or NoSQL (MongoDB).
- Basic knowledge of HTML, CSS, and JavaScript for integration purposes.
- Familiarity with Git/GitHub version control.
- Strong problem-solving, debugging, and communication skills.
Preferred (Good to Have):
- Knowledge of Django REST Framework (DRF).
- Understanding of healthcare-related compliance (HIPAA, GDPR).
- Experience with cloud platforms (AWS, Azure, or GCP).
- Exposure to Celery, Redis, or background task processing.
- Basic understanding of unit testing and CI/CD pipelines.
Benefits: Work on a live telemedicine platform impacting healthcare accessibility. Learn about AI-powered healthcare applications. Hands-on experience with secure healthcare systems and compliance standards. Opportunity for a full-time role after internship completion. Internship Completion Certificate and Letter of Recommendation. Job Type: Internship. Contract length: 6 months. Pay: ₹5,000.00 - ₹10,000.00 per month. Ability to commute/relocate: Sholinganallur, Chennai, Tamil Nadu: reliably commute or plan to relocate before starting work (Preferred). Application Questions: Have you completed a Django development course at an institution? How do you rate yourself in Django (0-10)? How do you rate yourself in PostgreSQL? Education: Higher Secondary (12th Pass) (Preferred). Experience: Python: 1 year (Preferred); Django: 1 year (Preferred); PostgreSQL: 1 year (Preferred). Work Location: In person
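For a sense of the Django REST Framework pattern mentioned under preferred skills, here is a minimal sketch; the Appointment model and its fields are hypothetical, and the code assumes an already configured Django project and app.

```python
# Minimal Django REST Framework sketch of a read/write API; the Appointment
# model and its fields are hypothetical placeholders, and this assumes a
# configured Django project (settings, installed app, migrations).
from django.db import models
from rest_framework import routers, serializers, viewsets

class Appointment(models.Model):
    patient_name = models.CharField(max_length=100)
    doctor_name = models.CharField(max_length=100)
    scheduled_at = models.DateTimeField()

class AppointmentSerializer(serializers.ModelSerializer):
    class Meta:
        model = Appointment
        fields = ["id", "patient_name", "doctor_name", "scheduled_at"]

class AppointmentViewSet(viewsets.ModelViewSet):
    queryset = Appointment.objects.all()
    serializer_class = AppointmentSerializer

# In urls.py, the viewset would be registered with a router:
router = routers.DefaultRouter()
router.register(r"appointments", AppointmentViewSet)
urlpatterns = router.urls
```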

Posted 4 days ago

Apply

5.0 - 12.0 years

0 Lacs

Coimbatore, Tamil Nadu

On-site

As a Data Software Engineer at KG Invicta Services Pvt Ltd, you will leverage your 5-12 years of experience in Big Data and data-related technologies to drive impactful solutions. Your expertise in distributed computing principles and Apache Spark, coupled with hands-on programming skills in Python, will be instrumental in designing and implementing efficient Big Data solutions. You will demonstrate proficiency in a variety of tools and technologies including Hadoop v2, MapReduce, HDFS, Sqoop, Apache Storm, Spark Streaming, Kafka, RabbitMQ, Hive, Impala, and NoSQL databases such as HBase, Cassandra, and MongoDB. Your ability to integrate data from diverse sources like RDBMS, ERP systems, and files, along with knowledge of ETL techniques and frameworks, will ensure seamless data processing and analysis. Performance tuning of Spark jobs, familiarity with cloud data services like AWS and Azure Databricks, and the capability to lead a team effectively will be key aspects of your role. Your expertise in SQL queries, joins, stored procedures, and relational schemas will contribute to the optimization of data querying processes. Your experience with the Agile methodology and a deep understanding of Big Data querying tools will enable you to contribute significantly to the development and enhancement of stream-processing systems. You will collaborate with cross-functional teams to deliver high-quality solutions that meet business requirements. If you are passionate about leveraging data to drive innovation and possess a strong foundation in Spark, Python, and cloud technologies, we invite you to join our team as a Data Software Engineer. This is a full-time position with a day shift schedule, and the work location is in person. Category: ML/AI Engineers, Data Scientist, Software Engineer, Data Engineer. Expertise: Python (5 years), AWS (3 years), Apache Spark (5 years), PySpark (3 years), GCP (3 years), Azure (3 years), Apache Kafka (3 years).
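The role lists Spark Streaming and Kafka among the required tools; as a rough illustration, a minimal Structured Streaming job might look like the sketch below. The topic name, broker address, and message schema are hypothetical, and the spark-sql-kafka connector package is assumed to be on the Spark classpath.

```python
# Sketch of a Spark Structured Streaming job reading from Kafka; topic, broker,
# and message schema are hypothetical. Requires the spark-sql-kafka connector.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

spark = SparkSession.builder.appName("orders-stream").getOrCreate()

schema = StructType([
    StructField("order_id", StringType()),
    StructField("region", StringType()),
    StructField("amount", DoubleType()),
])

orders = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "orders")
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("o"))
    .select("o.*")
)

# Running total of order value per region, printed to the console sink.
totals = orders.groupBy("region").agg(F.sum("amount").alias("total_amount"))

query = totals.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()
```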

Posted 4 days ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

You will be leading a team of 20 to 25 software developers, providing guidance and support to ensure high-quality web applications are delivered. Your responsibilities will include programming, designing, developing, and deploying new applications. It is crucial to take technical ownership of project deliveries, ensuring their quality and completeness. Collaboration with various teams on design and implementation strategies is essential, demonstrating your ability to work within an agile environment. Your role will involve adhering to Agile development and project management methodologies, overseeing project planning, timelines, and milestones to meet execution targets and deliverables. The timely delivery of assignments is a key aspect of your responsibility. Furthermore, you will be mentoring and leading team members to continuously enhance the quality and productivity of deliverables. In terms of required skillsets, you should have strong hands-on knowledge of server-side scripting with Node.js and its web frameworks, along with an end-to-end understanding from development to deployment. Responsibilities include developing and maintaining server-side network components, ensuring optimal database performance, collaborating with front-end developers, running diagnostic tests, and providing technical support. Proficiency in React.js, the JavaScript ecosystem, relational databases like MySQL, NoSQL concepts, version control tools like Git and Subversion, the HTTP protocol, REST APIs, JSON, HTML, and CSS is expected. Additionally, prior exposure to building CRM-based technology solutions would be advantageous. (Note: This job description is a summary and may not cover all details. Kindly refer to the original job posting for comprehensive information.)

Posted 4 days ago

Apply

7.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Description: Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions. We are currently seeking an experienced professional to join our team in the role of Consultant Specialist. The individual in this role as ML Engineer will be accountable for designing, developing, and deploying machine learning models and algorithms to solve business problems. Below are the key responsibilities and activities that need to be planned, attested and executed under the remit of this role by working effectively and collaboratively with the different delivery teams. In this role, you will:
- Design, develop, and deploy machine learning models and algorithms to solve business problems.
- Preprocess and analyze large datasets to extract meaningful insights.
- Build and optimize data pipelines (extracting data from Oracle, Elasticsearch, storage buckets, etc.) for training and deploying ML models.
- Collaborate with data scientists, software engineers, and stakeholders to integrate ML solutions into production systems.
- Monitor and maintain deployed models to ensure performance and accuracy over time.
- Research and implement state-of-the-art machine learning techniques and tools.
- Document processes, experiments, and results for reproducibility and knowledge sharing.
- Stay up to date with technology, prototype with and learn new technologies, and be proactive in technology communities.
- Develop and maintain ML models for the supervision domain, e.g., anomaly detection, a global search engine, a chatbot, and specialized/customized models.
- Develop innovative solutions in areas such as machine learning, Natural Language Processing (NLP), advanced and semantic information search, extraction, induction, classification and exploration.
- Create products that provide a great user experience along with high performance, security, quality, and stability.
Requirements: To be successful in this role, you should meet the following requirements:
- Minimum 7 years of software development experience, including 2+ years of relevant experience in the ML technologies mentioned below.
- Excellent problem-solving and communication skills.
- Strong experience in Python (3.x).
- Excellent working knowledge of scikit-learn, TensorFlow/PyTorch, and Docker/Kubernetes.
- Good experience with SQL, Oracle/PostgreSQL, any NoSQL database, and file buckets.
- Excellent knowledge of and demonstrable experience with open-source NLP packages such as NLTK, Word2Vec, and spaCy.
- Experience in setting up supervised and unsupervised learning models, including data cleaning, data analytics, feature creation, model selection and ensemble methods, performance metrics, and visualization.
- Solid understanding of ML algorithms, ML statistics, and data structures.
- Excellent interpersonal, presentation and analytical skills.
What additional skills will be good to have?
- Familiarity with MLOps practices for model deployment and monitoring.
- Experience working in the investment banking domain with exposure to the trade life cycle and front office controls supervision.
- Experience in automating the continuous integration/continuous delivery pipeline within a DevOps product/service team, driving a culture of continuous improvement.
You’ll achieve more when you join HSBC. www.hsbc.com/careers HSBC is committed to building a culture where all employees are valued and respected and where opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by – HSBC Software Development India
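For context on the supervised-learning workflow the requirements describe (data cleaning, feature creation, model selection, performance metrics), here is a minimal scikit-learn text-classification sketch; the toy sentences and labels are invented purely for illustration.

```python
# Minimal scikit-learn sketch of a supervised NLP pipeline (TF-IDF + logistic
# regression); the toy texts and labels below are made up for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

texts = [
    "trade settled on time",
    "unusual late amendment to the booking",
    "standard confirmation received",
    "suspicious counterparty change request",
    "routine end of day reconciliation",
    "unexpected manual override on the trade",
]
labels = [0, 1, 0, 1, 0, 1]  # toy labels: 1 = flag for review

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.33, stratify=labels, random_state=42
)

model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("clf", LogisticRegression(max_iter=1000)),
])

model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```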

Posted 4 days ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

We require a Full Stack Developer. Please find below the job description (JD) for your reference. Contract duration: 6 months (extendable). Exact on-site location in Pune: 3rd Floor, Pride Portal, Senapati Bapat Rd, behind JW Marriott, Bahiratwadi, Bhageerath, Gokhalenagar, Pune, Maharashtra 411016. Experience: 5 years overall, 4 years relevant. Scope Of Work:
- Good knowledge of Next.js, React, JavaScript/TypeScript, HTML5, CSS3, and Tailwind CSS for responsive and accessible frontend development.
- Strong expertise in NestJS, Node.js, Express.js, and backend microservices development with RESTful and GraphQL APIs.
- Hands-on experience with SQL and NoSQL databases such as PostgreSQL, MySQL, and MongoDB, including writing optimized queries, stored procedures, and schema modeling.
- Integration, enhancement, and support of full stack applications leveraging Next.js (SSR/SSG/ISR) and NestJS, including performance tuning and code optimization.
- Development and maintenance of RESTful APIs, GraphQL APIs, and service integration layers; handling API versioning and documentation (e.g., Swagger/OpenAPI).
- Implementation of authentication and authorization mechanisms using JWT, OAuth2, SSO, or session-based auth flows.
- Active participation in Scrum ceremonies (daily stand-ups, sprint planning, backlog grooming, sprint reviews, and retrospectives) in alignment with Agile methodologies.
- Work closely with cross-functional teams, including QA, DevOps, Designers, Product Owners, and Business Analysts, to deliver cohesive features and solutions.
- Contribute to project planning, estimation, and documentation by participating in requirements analysis and effort estimation, and updating task progress in tracking tools (e.g., JIRA, Trello, or Azure Boards).
- Follow CI/CD best practices, Docker-based deployments, and cloud-native principles for deployment in environments like Vercel, AWS, or GCP.
- Write and maintain unit, integration, and e2e tests using tools such as Jest, React Testing Library, SuperTest, etc., ensuring code quality and stability.
- Provide support during UAT, production deployments, and incident resolution, including root cause analysis and bug fixes.
- Contribute to internal knowledge sharing, team mentoring, and documentation of best practices to strengthen team capability and continuous improvement.
- Collaborate with internal stakeholders and external technical teams, ensuring alignment with business goals, timelines, and project deliverables.
- Support application migration efforts, including dependency upgrades, refactoring legacy modules, testing, and regression fixes.
- Adhere to PMP best practices in change control, scope management, risk identification, communication, and stakeholder alignment during the development lifecycle. (ref:hirist.tech)

Posted 4 days ago

Apply

4.0 - 8.0 years

0 Lacs

Haryana

On-site

We are looking for a skilled Databricks Engineer with Azure/AWS experience for our client, a global business and technology services firm. As a Data Engineer, you will play a crucial role in designing, developing, and maintaining our data infrastructure. The ideal candidate should have a minimum of 4+ years of hands-on experience in Azure/AWS data engineering with expertise in Databricks, Synapse/Redshift, and ADF/Glue. The role is at the Senior Consultant level and is based in Kolkata. Responsibilities:
- Demonstrate strong Databricks data engineering experience, including building a medallion architecture and using Unity Catalog (see the sketch below for a minimal illustration).
- Design and implement data architecture solutions that meet business requirements.
- Develop and maintain data models, data dictionaries, and data flow diagrams.
- Collaborate with business owners to understand and clarify requirements.
- Possess a solid understanding of data management technologies like SQL, NoSQL, and Hadoop.
- Experience with data integration and ETL tools.
- Hands-on experience with Azure or AWS data services.
Please note that due to the high volume of applicants, not everyone will receive a response. About Us: CuratAId is a tech hiring platform in India that connects recruiters with pre-vetted candidates. Our expert interviewers evaluate candidates based on their technical skills, communication, and behavioral traits, ensuring recruiters get high-quality candidates. Our goal is to simplify and streamline the recruitment process for both recruiters and job seekers.
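A minimal bronze-to-silver step of the medallion pattern mentioned above might look roughly like this, assuming a Delta-enabled Spark environment such as Databricks; the paths, database names, and column names are placeholders.

```python
# Sketch of a bronze -> silver step in a medallion architecture, assuming a
# Delta-enabled Spark environment such as Databricks; paths, database names,
# and column names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("medallion-demo").getOrCreate()
spark.sql("CREATE DATABASE IF NOT EXISTS bronze")
spark.sql("CREATE DATABASE IF NOT EXISTS silver")

# Bronze: land the raw files as-is in a Delta table.
raw = spark.read.format("json").load("/mnt/landing/orders/")
raw.write.format("delta").mode("append").saveAsTable("bronze.orders")

# Silver: deduplicate, enforce types, and keep only valid rows.
silver = (
    spark.read.table("bronze.orders")
    .dropDuplicates(["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .filter(F.col("amount") > 0)
)
silver.write.format("delta").mode("overwrite").saveAsTable("silver.orders")
```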

Posted 4 days ago

Apply

15.0 - 19.0 years

0 Lacs

Pune, Maharashtra

On-site

As a part of the Citi Analytics & Information Management (AIM) team in the Financial Crimes & Fraud Prevention Analytics unit within the Fraud Operation team, you will have the opportunity to lead a team of data scientists in Pune/Bangalore. Reporting to the Director/Managing Director, AIM, your primary focus will be to develop and implement Machine Learning (ML)/AI/Gen AI models for fraud prevention. You will analyze data, identify fraud patterns, and work towards achieving overall business goals. Additionally, you will collaborate with the model implementation team, ensure model documentation, and address questions from model risk management (MRM) while adapting to changing business needs. Your role as a subject matter expert (SME) in ML/AI/Gen AI will require a strong understanding of AI and ML concepts to guide your team effectively. You will lead a team of data scientists in developing and implementing ML/AI/Gen AI models on various platforms, providing technical leadership and ensuring 100% execution accuracy. Your expertise in customizing and fine-tuning RAG frameworks, designing new frameworks, and implementing state-of-the-art ML/AI/Gen AI algorithms will be crucial in meeting and exceeding project requirements. To excel in this role, you must possess a minimum of 15+ years of analytics experience in core model development using ML/AI/Gen AI techniques. A strong knowledge of model development stages, industry best practices, and the ability to recommend appropriate algorithms for business solutions are essential. Your proficiency in coding, Bigdata environments, and various ML/DL applications will be instrumental in delivering projects successfully. Additionally, you should have experience in model execution and governance in any domain. As a people manager overseeing a team of 15+ data scientists, some of whom may be managers themselves, your responsibilities will include managing their career progression, conflict resolution, performance management, coaching, mentorship, and technical guidance. You will be expected to set high performance standards, provide mentorship, and retain talent while effectively managing attrition and career mobility. Your ability to communicate complex analytical concepts to both technical and non-technical audiences, influence business outcomes, and drive innovative solutions will be critical in this role. With excellent project management skills, strategic thinking abilities, and a proactive approach to risk mitigation, you will play a key role in leading the fraud operation function within AIM Financial Crimes and Fraud Prevention Analytics. If you are passionate about leveraging AI and ML technologies to combat financial crimes and fraud, and possess the requisite experience and skills outlined above, we encourage you to apply for this challenging and rewarding opportunity at Citi.,

Posted 4 days ago

Apply

5.0 - 9.0 years

0 Lacs

Haryana

On-site

As a Backend Engineer - SDE III at Prozo, you will be part of our core technology team responsible for developing and maintaining backend services for our tech-enabled supply chain platform. Your role will involve working on high-impact projects in a collaborative environment focused on performance. You will be expected to develop and maintain RESTful APIs and web services to facilitate seamless communication between frontend and backend systems. Additionally, designing and implementing data models and database schemas to support application features will be a key responsibility. Your expertise in Java, Spring Framework, and SQL and/or NoSQL technologies will be essential in building Java-based backend applications for our supply chain software products. Qualifications & Experience: - Bachelor's degree in Computer Science, Software Engineering, or related field. - Minimum 5 years of experience in Java and NoSQL-based databases. - Proficiency in Java, Spring Framework, and SQL and/or NoSQL technologies. - Experience in developing significant modules from scratch and managing a team of developers. - Familiarity with RESTful APIs, microservices architecture, and containerization. - Knowledge of agile development methodologies, version control systems like Git, and writing clean, testable code. - Strong analytical, problem-solving, communication, and collaboration skills. At Prozo, you will have the opportunity to work with an innovative team dedicated to transforming the supply chain industry. We offer a supportive and collaborative work environment where your contributions are valued and recognized. Professional growth opportunities and direct interactions with senior leadership are also provided. To apply, please submit your resume and a cover letter highlighting your relevant experience and passion for working at Prozo. Share any past projects or accomplishments demonstrating your proficiency in warehousing, logistics, and technology-driven supply chain solutions. Please note that our company policies prohibit moonlighting, and we have alternate Saturdays off. We utilize HRMS tools such as Keka and TimeDoctor to manage employee productivity, attendance, and performance. Prozo is an equal opportunity employer that values diversity and is committed to fostering an inclusive environment for all employees.,

Posted 4 days ago

Apply

7.0 - 11.0 years

0 Lacs

Karnataka

On-site

As a Java Backend Developer at Morgan Stanley, you will be an integral part of the FRPPE-FinTech team based in Bangalore. Your role will involve contributing to the transformation of how Morgan Stanley operates, playing a crucial role in the development and maintenance of systems that support the global Operations of Morgan Stanley. Within the Technology division, you will collaborate with various stakeholders to design, develop, and maintain backend/frontend services. You will actively participate in design discussions, ensure code quality, and review code to support transformation initiatives and deliver new features for the platform. To excel in this position, you should have a minimum of 7 years of strong experience in Java 8+ and be proficient in working with Microservices architecture, RESTful API Design, Kafka, NoSQL, and RDBMS. Your expertise should extend to frameworks for development, Object-Oriented Design, Design patterns, Architecture, and Application Integration. Additionally, a background in problem-solving, system integration, infrastructure debugging, or system administration would be advantageous. Desirable skills for this role include experience with Web UI JS Framework AngularJS and Distributed Caching like Redis. Morgan Stanley offers a collaborative environment where you will have the opportunity to create, innovate, and have a significant impact on the world. At Morgan Stanley, you can expect a commitment to maintaining excellence and providing top-notch service. The core values of the company focus on putting clients first, doing the right thing, leading with exceptional ideas, committing to diversity and inclusion, and giving back. With a diverse team and a culture that values unique perspectives and cross-collaboration, Morgan Stanley is dedicated to supporting employees and their families throughout their work-life journey. The company offers attractive employee benefits and perks while fostering a culture of inclusion and empowerment.,

Posted 4 days ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

The role of Backend Developer (Node.js) at our company located in ITPL, Whitefield, Bengaluru requires a professional with 3-4 years of experience in backend development. If you are enthusiastic about constructing scalable, high-performance backend systems and enjoy working collaboratively, we are excited to consider you for this position. As a Backend Developer, your responsibilities will include designing, developing, and maintaining RESTful APIs and microservices using Node.js. You will be required to write clean, efficient, and well-documented code while implementing and managing NoSQL databases, specifically DynamoDB. It will be your duty to ensure the performance, scalability, and security of the backend systems, focusing on data-driven continuous improvement to boost system efficiency. Collaboration with front-end engineers for integrating user-facing elements with server-side logic will be an integral part of your role. Additionally, you will need to write unit and integration tests to uphold high code quality standards, participate in code reviews, and contribute to process enhancements. Staying updated with the latest technologies and trends in backend development is also expected. The ideal candidate must possess 3-4 years of experience in backend development, a strong comprehension of Node.js with hands-on experience in constructing RESTful APIs, and familiarity with NoSQL databases, particularly DynamoDB. A good understanding of microservices architecture and design patterns, along with knowledge of unit testing frameworks and best practices, is essential. Exceptional problem-solving and debugging skills are required, alongside the ability to work both independently and collaboratively within a team. Strong communication and collaboration skills are crucial for this role. Desirable skills for this position include experience with Java, Golang, or Python, knowledge of containerization technologies like Kubernetes, proficiency with cloud platforms such as AWS or Google Cloud, and familiarity with DevOps practices. If you meet the above requirements and are proficient in node.js, python, microservices, java, go (golang), restful APIs, unit testing, problem-solving, debugging, collaboration, dynamodb, and NoSQL, we encourage you to apply for this exciting opportunity.,
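The role centres on Node.js with DynamoDB, and Python is listed among the desirable languages; as a language-agnostic illustration of the basic DynamoDB access pattern, here is a minimal boto3 sketch. The table name, region, and key schema are hypothetical, and the table is assumed to already exist.

```python
# Minimal boto3 sketch of basic DynamoDB access; the table name, region, and
# key schema are hypothetical, and the table is assumed to already exist.
import boto3

dynamodb = boto3.resource("dynamodb", region_name="ap-south-1")
orders = dynamodb.Table("orders")  # hypothetical table with partition key "order_id"

# Write a single item, then read it back by its key.
orders.put_item(Item={"order_id": "ORD-1001", "status": "CREATED", "amount": 499})
response = orders.get_item(Key={"order_id": "ORD-1001"})
print(response.get("Item"))
```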

Posted 4 days ago

Apply

1.0 - 5.0 years

0 Lacs

Hyderabad, Telangana

On-site

At Apple, we value the diverse backgrounds and perspectives of our employees, which drive innovation and lead to extraordinary ideas that delight our customers. We welcome ideas from every individual, including you. The Apple E-Business Services team is currently looking for a talented Integration Software Engineer who is hands-on and passionate about developing scalable integration platforms. This role presents an exciting opportunity for a self-motivated and results-oriented individual to design and construct Java-based B2B infrastructure components using cutting-edge technologies like object storage and NoSQL databases. Working in our fast-growing business will empower you to "Think Different" and make a significant impact on Apple's success. As an Integration Software Engineer at Apple, your responsibilities will include: - Designing and implementing frameworks for processing high-volume transactions with Apple's partners. - Providing technical leadership to enhance and scale our B2B platforms effectively. - Creating solutions to optimize availability and consistency for applications deployed across various data centers and cloud providers. - Enhancing frameworks for managing persistence, event processing, uniqueness, transaction correlation, and notifications. - Collaborating closely with project developers, operations teams, and systems engineers in domain-specific projects. Minimum Qualifications: - Bachelor's degree in computer science or a related field, along with at least 3 years of experience in integration technologies. - More than 3 years of strong programming experience in Java for building middleware or backend applications. - Over 3 years of experience using Java frameworks like Spring. - Minimum of 1 year of experience in developing frameworks using middleware tools such as webMethods or Mulesoft. Preferred Qualifications: - Strong skills in object-oriented design and analysis. - Extensive experience (over 3 years) working with relational databases like Oracle and NoSQL databases such as MongoDB. - Proficiency in HTTP/S, TCP, DNS, and web application load balancing. - Deep understanding of security concepts and protocols, including authentication, authorization, encryption, SSL/TLS, SSH/SFTP, and more. - Knowledge of scripting languages like bash/Perl. - Hands-on experience in performance tuning of applications and databases, with a preference for cloud-based solutions integration. - Familiarity with Agile development methodology. - Understanding of AI/ML fundamentals is a plus. - Results-driven mindset with a strong sense of ownership and accountability. - Excellent problem-solving skills and ability to collaborate effectively in a fast-paced environment. - Strong communication skills to engage with stakeholders across different levels of the organization. - Ability to influence others and drive successful outcomes. - Proven track record of achieving outstanding results in a professional career. Apple is dedicated to fostering an inclusive and diverse work environment, and we provide equal opportunities to all applicants. We are committed to supporting candidates with physical and mental disabilities by offering reasonable accommodations during the recruitment process. If you are ready to contribute your expertise and creativity to a dynamic team, we encourage you to submit your CV to us.,

Posted 4 days ago

Apply

7.0 - 11.0 years

0 Lacs

Karnataka

On-site

At Goldman Sachs, our Engineers play a crucial role in making things not just happen, but possible. By connecting people and capital with innovative ideas, our engineering teams tackle the most complex challenges and deliver solutions that transform the world. We are dedicated to building highly scalable software, designing low latency infrastructure solutions, proactively safeguarding against cyber threats, and utilizing machine learning in partnership with financial engineering to translate data into actionable insights. Join us in creating new opportunities, revolutionizing finance, and embracing the fast-paced world of markets. Our Engineering division, inclusive of the Technology Division and global strategist groups, lies at the heart of our business. The dynamic environment we operate in demands innovative strategic thinking and real-time solutions. If you are eager to explore the boundaries of digital innovation, this is the place to start. We are seeking individuals who are not just Engineers at Goldman Sachs but innovators and problem-solvers. Our team members work on solutions related to risk management, big data, and much more. We value creative collaborators who can adapt to change, evolve, and thrive in a dynamic global setting. Transaction Banking, a unit within Platform Solutions, is committed to offering comprehensive cash management solutions to corporations. By combining the legacy and strength of a 155-year-old financial institution with the agility of a tech start-up, Transaction Banking aims to deliver an unparalleled client experience. Through the integration of modern technologies centered around data and analytics, we empower our customers with tools that prioritize value, transparency, and simplicity to enhance cash flow management efficiency. The Digital Engineering team at Transaction Banking is entrusted with providing a seamless digital experience to clients engaging with Transaction Banking products across various interfaces, including Banking as a Service API, client portal, Files, and the SWIFT network. Our mission is to develop a cutting-edge digital interface that aligns with the needs of our corporate clients. With a clean slate, our singular focus is on constructing a highly scalable, resilient, 24x7 available cloud-based platform that our corporate clients can depend on for their cash management requirements. In our flat structure, team members are encouraged to evolve through the software life-cycle and collaborate closely with product owners, business stakeholders, and operations users. As a Senior Software Engineer on our global team, you will work on diverse components and lead projects alongside passionate engineers, product owners, and clients. Your responsibilities include contributing to the vision, understanding the product roadmap, integrating business value with user experience, and fostering an engineering culture within the team. We are seeking individuals with high energy levels, excellent communication skills, a passion for engineering challenges, a commitment to delivering high-quality technology products, and the ability to thrive in a rapidly changing environment. If you embody these qualities, we are excited to hear from you. 
Basic Qualifications: - Minimum of 7 years of relevant professional experience utilizing a modern programming language, preferably Java - Proficiency in building external APIs using REST and Webhooks - Demonstrated ability to lead engineering teams and deliver complex products with multiple stakeholders - Experience in architecting and designing full stack applications in AWS - Previous involvement with high availability, mission-critical systems using active/active and blue-green deployment methodology - Bachelor's degree or higher in Computer Science or equivalent work experience Preferred Qualifications: - Familiarity with Microservice architectures and REST APIs - Proficiency in Spring Boot, Kafka, and React - Experience with SQL databases (PostgreSQL/Oracle) and NoSQL databases (DynamoDB/MongoDB) - Knowledge of AWS - Background in Financial Services or Fintech is advantageous - Practical experience with containers is a plus - Comfort with Agile operating models, including practical experience with Scrum and Kanban,

Posted 4 days ago

Apply

7.0 - 11.0 years

0 Lacs

Delhi

On-site

As a Full-Stack Developer / Acting CTO in the Fashion & Lifestyle Tech industry based in Chhattarpur, Delhi, you will be responsible for leading the development of a next-generation platform that aims to revolutionize the fashion and lifestyle space. With over 7 years of experience in full-stack development and leadership, you will have the opportunity to work on cutting-edge technologies to deliver a seamless virtual shopping experience. The platform you will be working on is designed to provide users with a holistic shopping experience by leveraging multi-surface integrations, AI-driven personalization, and immersive interaction. With Growify Digital's deep presence in the Indian luxury fashion ecosystem, you will have access to a high-value client base, ensuring faster adoption, sharper feedback, and increased chances of success from the beginning. Your primary focus will be on building multiple Shopify apps, browser extensions, a core web platform with Single Sign-On capabilities, a rule-based recommendation engine evolving into AI/ML personalization, a centralized and scalable database, data cleaning and pipeline systems, and an API layer for seamless integration across various platforms and services. In addition to architecting and developing the core platform for scalability, speed, and security, you will lead the technical roadmap from MVP to large-scale adoption. As the Acting CTO, you will also be responsible for recruiting, mentoring, and managing the tech team, ensuring best-in-class user experience and tech performance, transitioning to AI/ML for personalization, and collaborating closely with product and business teams to align with market needs. The required skills for this role include full-stack expertise in React/Vue + Node.js/Python, database architecture knowledge in SQL & NoSQL, API mastery in REST, GraphQL, OAuth/SSO, Shopify app development skills, browser extension development experience, AI/ML integration using frameworks like TensorFlow or PyTorch, and familiarity with cloud infrastructure (AWS, GCP, Azure) and DevOps best practices. Preferred qualifications include experience in fashion/lifestyle e-commerce or AR/VR tech, knowledge of computer vision for virtual try-on, and prior CTO or startup tech leadership experience. This role offers a unique opportunity to work on a cutting-edge project at the intersection of fashion, lifestyle, and AI, with immediate adoption through Growify's existing client base, strong market expertise from day one, and a high-impact position with ownership and creative control. If you are looking for a role that offers technical variety, strategic ownership, and the excitement of building a product that could redefine online fashion commerce, this is the perfect opportunity for you.,

Posted 5 days ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

The Staff Engineer, Backend Engineering in Digital Business at Sony Pictures Networks, located in Bangalore, is responsible for leading the architecture, design, and development of scalable backend systems that support SonyLIV's digital experiences. As a technical leader and mentor, you will ensure high performance, reliability, and innovation across services and platforms. Your role will involve collaborating with internal stakeholders such as Frontend Engineering, DevOps, QA, Product Management, Data Engineering, and Security, as well as external partners like tech vendors, third-party APIs/integrations, and CDN partners. To qualify for this position, you should have a Bachelor's or Master's degree in Computer Science, Information Technology, or a related field, along with hands-on experience with scalable backend systems, cloud platforms, and modern backend stacks such as Node.js, SQL/NoSQL, Redis, and Kafka. This role requires strong coding skills with a focus on architecture, design, and development of scalable backend services for SonyLIV's core OTT platform. You will also be responsible for architecting microservices, mentoring junior engineers, conducting design/code reviews, and promoting best practices within the team. Key responsibilities include owning end-to-end critical services, collaborating with cross-functional teams to define and release product features, ensuring service uptime and performance through effective monitoring, contributing to technical strategy and architectural decisions, and driving automation, CI/CD, and deployment excellence. As a successful candidate, you should have expertise in scalable backend architecture, RESTful API design, RDBMS/NoSQL databases, message queues, event-driven systems, cloud-native infrastructure, containerization, caching strategies, security principles, and system design. Additionally, familiarity with observability tools is essential for this role. Personal characteristics that are highly valued for this position include tech agility, an analytical mindset, a problem-solving attitude, a sense of ownership, a passion for clean code and performance optimization, effective communication skills, technical collaboration abilities, self-motivation, and adaptability to a dynamic, fast-paced environment. Joining Sony Pictures Networks offers the opportunity to work with leading entertainment channels and the promising streaming platform Sony LIV, contributing to a digitally-led content powerhouse. If you are passionate about innovation and ready to make a meaningful impact in the digital entertainment industry, this role could be the next step in your career journey.

Posted 5 days ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

As the DB expert for NoSQL databases, you will be responsible for staying updated with the latest developments in the NoSQL space and reviewing data models created by application teams. Your primary focus will be on managing Cassandra and Scylla clusters, ensuring capacity management, cost optimization, high availability, and performance. You will work with large database clusters handling 1M+ IOPS in aggregate, creating automations to reduce toil, monitoring, and alerting. Your responsibilities will include setting up backup and restore mechanisms, troubleshooting and resolving various cluster-related issues, and being the on-call support engineer on a rotational basis. You should hold a Bachelor's/Master's degree in engineering from reputed institutions and have at least 5 years of experience in SQL and NoSQL databases. To be successful in this role, you must have experience managing large-scale Cassandra and Scylla clusters with a deep understanding of architecture, storage, replication, schema design, system tables, logs, DB processes, tools, and CQL. You should also be proficient in installation, configuration, upgrades, OS patching, certificate management, scaling for Cassandra and Scylla clusters, and setting up backup and restore mechanisms with short RTO and RPO objectives. Furthermore, experience in infrastructure automation and scripting using Terraform, Python, or Bash is required. Familiarity with monitoring tools like Grafana, Prometheus, New Relic, Datadog, and managed Cassandra solutions such as InstaClustr and Datastax Astra is beneficial. Experience with Cloud-native distributed databases like TiDB and CockroachDB, as well as MySQL and/or Postgres on Linux, is a plus. AWS experience, preferably with AWS certification, and excellent communication skills are essential for this role.,
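Since the role centres on managing Cassandra and Scylla clusters and working with CQL, here is a minimal sketch using the DataStax Python driver; the contact point, keyspace, and table are placeholders invented for illustration.

```python
# Minimal sketch using the DataStax Python driver (works with both Cassandra
# and Scylla); contact points, keyspace, and table are placeholders.
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])  # contact points of the cluster
session = cluster.connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS metrics
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
session.set_keyspace("metrics")
session.execute("""
    CREATE TABLE IF NOT EXISTS node_load (
        node_id text, ts timestamp, iops int,
        PRIMARY KEY (node_id, ts)
    ) WITH CLUSTERING ORDER BY (ts DESC)
""")

# Prepared statement: the timestamp is generated server-side, the rest is bound.
insert = session.prepare(
    "INSERT INTO node_load (node_id, ts, iops) VALUES (?, toTimestamp(now()), ?)"
)
session.execute(insert, ("node-1", 125000))

for row in session.execute("SELECT ts, iops FROM node_load WHERE node_id = 'node-1' LIMIT 5"):
    print(row.ts, row.iops)

cluster.shutdown()
```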

Posted 5 days ago

Apply

1.0 - 5.0 years

0 Lacs

Udaipur, Rajasthan

On-site

You will be joining YoCharge, an Electric Vehicle Charging & Energy Management SaaS startup that supports Charge Point Operators in efficiently launching, operating, and expanding their EV charging business. YoCharge operates globally in over 20 countries and is seeking enthusiastic team members who are passionate about developing innovative products in the EV & Energy sector. As a Backend Engineer (SDE-I) at YoCharge, located in Udaipur, you will play a key role in scaling YoCharge's back-end platform and services to facilitate smart charging for electric vehicles. To qualify for this position, you should hold a Bachelor's degree in Computer Science or a related field and have proven experience as a Backend Developer or in a similar role. Proficiency in Python with Django and FastAPI frameworks is essential, along with previous experience in scaling a product. Additionally, knowledge of database systems (SQL and NoSQL), ORM frameworks, cloud platforms (e.g., AWS, Azure), containerization technologies (e.g., Docker, Kubernetes), deployment pipelines, CI/CD, DevOps practices, micro-services architecture, and asynchronous programming is required. You should also be adept at designing and developing RESTful APIs, understanding systems architecture for scalability, utilizing monitoring and logging tools (e.g., Prometheus, Grafana, ELK Stack), and possess excellent communication and teamwork skills. The ability to thrive in a fast-paced environment, meet deadlines, and have 1-2+ years of experience are also necessary. If you have a passion for Electric Vehicles & Energy, enjoy building products, and are excited about working in a startup environment, you might be an instant match for this role at YoCharge.,

Posted 5 days ago

Apply

5.0 - 10.0 years

0 Lacs

Hyderabad, Telangana

On-site

You should have at least 10 years of overall development experience, with at least 5 years in a Data Engineering role. Your responsibilities will include building and optimizing big data pipelines, architectures, and data sets. You should have a strong background in writing SQL statements and experience with the Spring/Spring Boot framework. Additionally, you should have experience with relational databases like Postgres and Oracle, as well as cloud databases such as Snowflake and BigQuery. Experience in implementing web services such as SOAP and RESTful web services is required. Knowledge of frontend frameworks like Angular, jQuery, and Bootstrap is also expected. You should be familiar with real-time and batch data processing, ETL frameworks like the Google Cloud data platform or Apache Beam, and analyzing data to derive insights. Leading small to midsize technical teams, customer-facing experience, and managing deliverables are also part of the role. Good verbal and written communication skills are essential, along with an advanced understanding of modern software development and testing methodologies, scripting, and tools. You should have completed at least three full SDLC cycles for web application projects. Experience in Agile development environments and with messaging platforms like ActiveMQ would be a plus.
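The posting mentions ETL frameworks such as Apache Beam for batch and real-time processing; a minimal Beam batch sketch follows, with hypothetical file paths, runnable locally on the default DirectRunner.

```python
# Minimal Apache Beam sketch of a batch ETL step; file paths and the CSV layout
# are hypothetical (rows of the form "user_id,amount", no header).
import apache_beam as beam

def parse_line(line: str):
    user_id, amount = line.split(",")
    return user_id, float(amount)

with beam.Pipeline() as pipeline:
    (
        pipeline
        | "ReadRaw" >> beam.io.ReadFromText("input/transactions.csv")
        | "Parse" >> beam.Map(parse_line)
        | "SumPerUser" >> beam.CombinePerKey(sum)
        | "Format" >> beam.MapTuple(lambda user, total: f"{user},{total}")
        | "Write" >> beam.io.WriteToText("output/totals")
    )
```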

Posted 5 days ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. About this Job: Supports and performs the development and programming of machine learning integrated software algorithms to structure, analyze, and leverage data in a production environment. Responsibilities: Leverages data pipeline designs and supports the development of data pipelines to support model development. Proficient with software tools that develop data pipelines in a distributed computing environment (PySpark, Glue ETL). Supports integration of model pipelines in a production environment. Develops understanding of the SDLC for model production. Reviews pipeline designs, makes data model design changes as needed. Documents and reviews design changes with data science teams. Supports data discovery and automated ingestion for model development. Performs detailed analysis of raw data sources for data quality, applies business context, and model development needs. Engages with internal stakeholders to understand and probe business processes in order to develop hypotheses. Brings structure to requests and translates requirements into an analytic approach. Participates in and influences ongoing business planning and departmental prioritization activities. Runs model monitoring scripts and follows the process for alerts to management as needed. Addresses issues found in data pipelines from model monitoring alerts. Participates in special projects and performs other duties as assigned. Qualifications: Undergraduate degree or equivalent combination of training and experience. Minimum of five years of related work experience. Python, PySpark, SQL, NoSQL DB, AWS Cloud Tech Stack (Glue, SM, etc.). EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
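The responsibilities above include running model-monitoring scripts and raising alerts; as a rough illustration, a minimal population stability index (PSI) drift check might look like the sketch below. The threshold and the randomly generated data are stand-ins, not part of the posting.

```python
# Illustrative model-monitoring sketch: a population stability index (PSI)
# check comparing a feature's training distribution with recent scoring data.
# The threshold and the random data are illustrative stand-ins only.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between two samples of one feature."""
    cuts = np.percentile(expected, np.linspace(0, 100, bins + 1))
    cuts[0], cuts[-1] = -np.inf, np.inf            # catch out-of-range values
    e_frac = np.histogram(expected, cuts)[0] / len(expected)
    a_frac = np.histogram(actual, cuts)[0] / len(actual)
    e_frac = np.clip(e_frac, 1e-6, None)           # avoid log/division by zero
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(0)
train_scores = rng.normal(0.0, 1.0, 10_000)        # stand-in for training data
recent_scores = rng.normal(0.3, 1.1, 2_000)        # stand-in for production data

value = psi(train_scores, recent_scores)
print(f"PSI = {value:.3f}", "-> ALERT" if value > 0.2 else "-> OK")  # 0.2 is a common rule of thumb
```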

Posted 5 days ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Job Description: Manager AI Engineer - Data Scientist Role Overview:- We are seeking an accomplished and visionary Manager Data Scientist with minimum 8 Years of experience in Data Science and Machine learning, preferable experience around NLP, Generative AI, LLMs, MLOps, Optimization techniques and AI solution Architecture to lead our AI team and drive the strategic direction of AI initiatives. In this role you will play a key role in the development and implementation of AI solutions, leveraging your technical expertise and leadership skills. The ideal candidate should have a proven track record in AI leadership, a deep understanding of AI technologies, and experience in designing and implementing cutting-edge AI models and systems. Additionally, expertise in data engineering, DevOps, and MLOps practices will be valuable in this role. Minimum 8 Years of experience in Data Science and Machine learning. Excellent leadership skills with at least 2-3 years of people management OR technical architecture experience. Responsibilities: Your technical responsibilities: Provide strategic direction and technical leadership for AI initiatives, guiding the team in designing and implementing state-of-the-art AI solutions. Lead the design and architecture of complex AI systems, ensuring scalability, reliability, and performance. Drive the development and implementation of AI models and systems, leveraging techniques such as Language Models (LLMs) and generative AI. Collaborate with stakeholders to identify business opportunities, define AI project goals, and prioritize initiatives based on strategic objectives. Stay updated with the latest advancements in generative AI techniques, such as LLMs, and evaluate their potential applications in solving enterprise challenges. Utilize generative AI techniques, such as LLMs, Agentic Framework to develop innovative solutions for enterprise industry use cases. Integrate with relevant APIs and libraries, such as Azure Open AI GPT models and Hugging Face Transformers, to leverage pre-trained models and enhance generative AI capabilities. Implement and optimize end-to-end pipelines for generative AI projects, ensuring seamless data processing and model deployment. Utilize vector databases, such as Redis, and NoSQL databases to efficiently handle large-scale generative AI datasets and outputs. Implement similarity search algorithms and techniques to enable efficient and accurate retrieval of relevant information from generative AI outputs. Collaborate with domain experts, stakeholders, and clients to understand specific business requirements and tailor generative AI solutions accordingly. Conduct research and evaluation of advanced AI techniques, including transfer learning, domain adaptation, and model compression, to enhance performance and efficiency. Establish evaluation metrics and methodologies to assess the quality, coherence, and relevance of generative AI outputs for enterprise industry use cases. Ensure compliance with data privacy, security, and ethical considerations in AI applications. Leverage data engineering skills to curate, clean, and preprocess large-scale datasets for generative AI applications. 
Good to Have Skills : Apply trusted AI practices to ensure fairness, transparency, and accountability in AI models and systems. Experience on Optimization tools and techniques (MIP etc). Drive DevOps and MLOps practices, including continuous integration, deployment, and monitoring of AI models. Implement CI/CD pipelines and automate model deployment and scaling processes. Utilize tools such as Docker, Kubernetes, and Git for building and managing AI pipelines. Apply infrastructure as code (IaC) principles using tools like Terraform or CloudFormation. Implement monitoring and logging tools to ensure the performance and reliability of deployed AI models. Collaborate with software engineering and operations teams to ensure seamless integration and deployment of AI models. Your client responsibilities: Work for managing the successful design, execution, and measurement of data initiatives across customer-facing engagements Communicate with internal stakeholders to make recommendations based on data Sort out business problems to translate into analytical questions to simplify and accelerate the solution development. Balancing excellent business communication skills with a deep analytical understanding is needed Run Scrum calls for team. Manage client delivery. Applying data Science, ML algorithms, using standard statistical tools and techniques for solving client business problems. Communicate and manage relationships with the onsite Program Manager. Regular status reporting to Management and onsite coordinators. Advocate for GDS work, work on innovative work/PoC’s and showcase to Onsite stakeholders to convince them to get more business. Interface with the customer representatives as and when needed Willing to travel to the customer’s locations on need basis within India and outside India. Willing to be flexible to work on various tools and technologies based on demand Your people responsibilities: Building a quality culture Lead by example Participating in the organization-wide people initiatives Requirements: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. A Ph.D. is a plus. Proven experience in leading and managing AI projects and teams, with a focus on generative AI and LLMs. In-depth knowledge of machine learning, deep learning, and generative AI techniques. Proficiency in programming languages such as Python, R and frameworks like TensorFlow or PyTorch. Strong understanding of NLP techniques and frameworks such as BERT, GPT, or Transformer models. Familiarity with computer vision techniques for image recognition, object detection, or image generation. Experience with cloud platforms such as Azure, AWS, or GCP and deploying AI solutions in a cloud environment. Expertise in data engineering, including data curation, cleaning, and preprocessing. Knowledge of trusted AI practices, ensuring fairness, transparency, and accountability in AI models and systems. Experience in DevOps and MLOps practices, including continuous integration, deployment, and monitoring of AI models. Familiarity with tools such as Docker, Kubernetes, and Git for building and managing AI pipelines. Proficiency in implementing CI/CD pipelines and automating model deployment and scaling processes. Understanding of infrastructure as code (IaC) principles and experience with tools like Terraform or CloudFormation. Knowledge of monitoring and logging tools to ensure the performance and reliability of deployed AI models. 
Strong collaboration with software engineering and operations teams to ensure seamless integration and deployment of AI models.
Excellent problem-solving and analytical skills, with the ability to translate business requirements into technical solutions.
Strong communication and interpersonal skills, with the ability to collaborate effectively with stakeholders at various levels.
Understanding of data privacy, security, and ethical considerations in AI applications.
Track record of driving innovation and staying updated with the latest AI research and advancements.
Ability to think strategically, identify business opportunities, and align AI initiatives with organizational objectives.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
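For context on the similarity-search responsibility named in this posting, below is a minimal illustrative Python sketch of cosine-similarity retrieval over embedding vectors. The function name and toy data are hypothetical; a production system would typically delegate this to a vector store (for example, Redis with a vector index) rather than brute-force NumPy.

```python
# Illustrative sketch only: brute-force cosine-similarity retrieval over embeddings.
import numpy as np

def top_k_similar(query_vec: np.ndarray, doc_vecs: np.ndarray, k: int = 3):
    """Return (index, score) pairs for the k document vectors most similar to the query."""
    # Normalise so that the dot product equals cosine similarity.
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q
    idx = np.argsort(scores)[::-1][:k]          # indices sorted by descending similarity
    return [(int(i), float(scores[i])) for i in idx]

# Toy 4-dimensional "embeddings"; real ones would come from an embedding model.
docs = np.array([[0.1, 0.9, 0.0, 0.2],
                 [0.8, 0.1, 0.3, 0.0],
                 [0.2, 0.8, 0.1, 0.1]])
print(top_k_similar(np.array([0.15, 0.85, 0.05, 0.15]), docs, k=2))
```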

Posted 5 days ago

Apply

1.0 - 5.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Software Engineer II - AI at Celigo, you will play a crucial role in driving Celigo's internal AI/ML initiatives and enhancing the integration platform with advanced AI capabilities. With 1-4 years of experience, you will collaborate with a skilled team to implement cutting-edge AI solutions, streamline business processes, and shape the future of cloud integrations.

Your responsibilities will include evaluating, implementing, and deploying leading AI/ML frameworks such as OpenAI, LangChain, Pinecone, Spacy, and Hugging Face. You will develop and refine natural language processing (NLP) models on the OpenAI platform tailored to specific business use cases. Additionally, you will apply machine learning techniques to analyze and interpret data effectively. In terms of backend engineering, you will architect, implement, and deploy Python microservices on AWS using containers/Kubernetes, delivered via a fully automated CI/CD pipeline.

Collaboration is key in this role: you will partner with software engineers to integrate AI/ML capabilities into products, ensuring seamless functionality and an exceptional user experience, and work closely with product managers and business stakeholders to translate requirements into innovative AI solutions. Ensuring security and best practices is a critical aspect of the role; you will implement robust safeguards to protect user data security and privacy while keeping up with industry best practices.

To be successful in this role, you should have 1-4 years of experience in software product development with exposure to AI/ML, NLP, data science, or deep learning initiatives. Proficiency in Python and comfort with Node.js are essential. Experience in building and supporting multi-tenant SaaS applications at scale is preferred, and a strong foundation in computer science fundamentals, including data structures, algorithms, and software design, will be beneficial. A postgraduate degree or equivalent experience with a proven track record in research or practical AI/ML projects is a plus. Experience in developing AI/ML models in production environments and integrating them into enterprise applications using cloud-native or hybrid technologies is desirable, and solid knowledge of both SQL and NoSQL databases will be an added advantage.

At Celigo, you will have the opportunity to tackle complex integration challenges, work with the latest AI technologies, and thrive in a culture of teamwork, creativity, and continuous learning. As an equal opportunity employer, Celigo is committed to creating a diverse and inclusive environment where all backgrounds are welcome. Enjoy a healthy work-life balance, comprehensive benefits, and a supportive community that values innovation and excellence.
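As an illustration of the kind of Python microservice work this posting describes, here is a minimal sketch of an NLP endpoint. FastAPI and a Hugging Face pipeline are assumptions for the example, not requirements stated in the posting; endpoint and class names are hypothetical.

```python
# Illustrative sketch: a small Python microservice exposing an NLP capability over REST.
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()

# Load a pre-trained sentiment model once at startup; in production the model
# version would usually be pinned and baked into the container image.
sentiment = pipeline("sentiment-analysis")

class TextIn(BaseModel):
    text: str

@app.post("/v1/sentiment")
def classify(payload: TextIn):
    # Run inference and return the top label and its confidence score.
    result = sentiment(payload.text)[0]
    return {"label": result["label"], "score": float(result["score"])}

# Run locally (assuming this file is saved as app.py): uvicorn app:app --reload
# A Dockerfile and Kubernetes manifest would wrap this for the CI/CD flow described above.
```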

Posted 5 days ago

Apply

0.0 - 1.0 years

0 - 1 Lacs

Mohali

On-site

We are looking for a passionate and enthusiastic Java Spring Boot Backend Developer (Fresher) to join our development team. The ideal candidate should have a strong foundation in Core Java and be familiar with building RESTful APIs using Spring Boot. This is a great opportunity to work on backend systems and microservices in a collaborative, learning-focused environment.

Key Responsibilities:
Assist in the design, development, and maintenance of backend services using Java and Spring Boot
Build secure, scalable, and performant RESTful APIs
Work with databases (SQL/NoSQL) for data storage and retrieval
Participate in code reviews, debugging, and documentation
Learn and follow best practices for clean code, security, and testing

Required Skills:
Strong understanding of Core Java (OOP, Collections, Multithreading, etc.)
Basic knowledge of the Spring Boot framework and REST APIs
Familiarity with SQL and relational databases like MySQL
Understanding of HTTP, JSON, and API integration
Version control using Git
Good problem-solving and analytical skills

Posted 5 days ago

Apply

0.0 - 4.0 years

0 Lacs

Bhavnagar, Gujarat

On-site

We are seeking a skilled PHP developer to oversee our back-end services and ensure smooth data exchange between the server and our users. As a PHP developer, your primary responsibility will involve crafting and implementing all server-side logic. Additionally, you will be tasked with managing the central database and addressing requests from front-end developers.

Your core responsibilities will include analyzing website and application requirements, writing efficient back-end code, constructing back-end portals with an optimized database, troubleshooting application and code issues, and responding to integration requests from front-end developers. Knowledge of Laravel, Node.js, or React will be advantageous, along with familiarity with REST and GraphQL APIs. You will finalize back-end features, conduct testing on web applications, and make necessary updates to enhance performance.

The ideal candidate should possess a Bachelor's degree in computer science or a related field, expertise in PHP web frameworks like Laravel and CodeIgniter, proficiency in front-end technologies such as CSS3, JavaScript, and HTML5, a solid grasp of object-oriented PHP programming, and prior experience in developing scalable applications. Proficiency in code versioning tools like Git, Mercurial, CVS, and SVN, familiarity with SQL/NoSQL databases, project management abilities, and strong problem-solving skills are also essential attributes.

This is a full-time position suited for both experienced professionals and freshers. Benefits include leave encashment and paid sick time. The work location is on-site.

Posted 5 days ago

Apply

7.0 - 11.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Senior DevOps Engineer at ClearRoute, you will play a key role in designing and implementing cloud infrastructure, automation, and CI/CD processes. With over 7 years of experience in cloud and DevOps engineering roles, you will leverage your expertise in AWS services such as EC2, Lambda, S3, CloudTrail, CloudWatch, EventBridge, SNS, and SQS to build secure and scalable infrastructure across diverse environments. Your proficiency in tools like Terraform, CloudFormation, Ansible, Python scripting, and GitOps workflows will be essential in enhancing automation and optimizing system performance.

In this role, you will have hands-on experience with Kubernetes (EKS), Helm, and containerization best practices, enabling you to design robust cloud-native architectures. You will also be responsible for building CI/CD pipelines, implementing source control using Git, and ensuring the smooth operation of SQL/NoSQL databases in both Linux and Windows environments. Your familiarity with network security practices will be crucial in maintaining the integrity of our infrastructure.

As a member of our collaborative and dedicated team, you will have the opportunity to drive change, transform organizations, and tackle complex problem domains. Your strong troubleshooting, analytical, and communication skills will be invaluable in addressing challenges and fostering innovation within our organization. Join us at ClearRoute and be a part of building a better future through technology.
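As an illustration of the Python/AWS automation this role mentions, here is a minimal sketch that publishes a custom CloudWatch metric with boto3. The namespace and metric name are hypothetical, and AWS credentials and region are assumed to be configured in the environment; real infrastructure work in this role would more often be expressed in Terraform or CloudFormation.

```python
# Illustrative sketch only: push a custom CloudWatch metric from a Python script.
import boto3

cloudwatch = boto3.client("cloudwatch")

def report_queue_depth(depth: int) -> None:
    """Publish a custom metric so alarms and dashboards can track it alongside built-in AWS metrics."""
    cloudwatch.put_metric_data(
        Namespace="DemoApp/Operations",   # hypothetical namespace
        MetricData=[{
            "MetricName": "QueueDepth",   # hypothetical metric name
            "Value": float(depth),
            "Unit": "Count",
        }],
    )

if __name__ == "__main__":
    report_queue_depth(42)
```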

Posted 5 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies