3.0 - 7.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY GDS – Data and Analytics (D&A) – Senior – Senior Data Scientist

Role Overview:
We are seeking a highly skilled and experienced Senior Data Scientist with 3-7 years of experience in Data Science and Machine Learning, preferably with experience in NLP, Generative AI, LLMs, MLOps, optimization techniques, and AI solution architecture. In this role, you will play a key part in the development and implementation of AI solutions, leveraging your technical expertise. The ideal candidate has a deep understanding of AI technologies and experience designing and implementing cutting-edge AI models and systems. Expertise in data engineering, DevOps, and MLOps practices will also be valuable in this role.

Responsibilities:
Contribute to the design and implementation of state-of-the-art AI solutions.
Assist in the development and implementation of AI models and systems, leveraging techniques such as Large Language Models (LLMs) and generative AI.
Collaborate with stakeholders to identify business opportunities and define AI project goals.
Stay updated with the latest advancements in generative AI techniques, such as LLMs, and evaluate their potential applications to enterprise challenges.
Utilize generative AI techniques, such as LLMs, to develop innovative solutions for enterprise industry use cases.
Integrate with relevant APIs and libraries, such as Azure OpenAI GPT models and Hugging Face Transformers, to leverage pre-trained models and enhance generative AI capabilities.
Implement and optimize end-to-end pipelines for generative AI projects, ensuring seamless data processing and model deployment.
Utilize vector databases, such as Redis, and NoSQL databases to efficiently handle large-scale generative AI datasets and outputs.
Implement similarity search algorithms and techniques to enable efficient and accurate retrieval of relevant information from generative AI outputs (a short sketch follows this posting).
Collaborate with domain experts, stakeholders, and clients to understand specific business requirements and tailor generative AI solutions accordingly.
Research and evaluate advanced AI techniques, including transfer learning, domain adaptation, and model compression, to enhance performance and efficiency.
Establish evaluation metrics and methodologies to assess the quality, coherence, and relevance of generative AI outputs for enterprise industry use cases.
Ensure compliance with data privacy, security, and ethical considerations in AI applications.
Leverage data engineering skills to curate, clean, and preprocess large-scale datasets for generative AI applications.

Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field; a Ph.D. is a plus.
Minimum 3-7 years of experience in Data Science and Machine Learning.
In-depth knowledge of machine learning, deep learning, and generative AI techniques.
Proficiency in programming languages such as Python or R, and frameworks like TensorFlow or PyTorch.
Strong understanding of NLP techniques and models such as BERT, GPT, or other Transformer architectures.
Familiarity with computer vision techniques for image recognition, object detection, or image generation.
Experience with cloud platforms such as Azure, AWS, or GCP and deploying AI solutions in a cloud environment.
Expertise in data engineering, including data curation, cleaning, and preprocessing.
Knowledge of trusted AI practices, ensuring fairness, transparency, and accountability in AI models and systems.
Strong collaboration with software engineering and operations teams to ensure seamless integration and deployment of AI models.
Excellent problem-solving and analytical skills, with the ability to translate business requirements into technical solutions.
Strong communication and interpersonal skills, with the ability to collaborate effectively with stakeholders at various levels.
Understanding of data privacy, security, and ethical considerations in AI applications.
Track record of driving innovation and staying updated with the latest AI research and advancements.

Good to Have Skills:
Apply trusted AI practices to ensure fairness, transparency, and accountability in AI models and systems.
Utilize optimization tools and techniques, including MIP (Mixed Integer Programming).
Drive DevOps and MLOps practices, covering continuous integration, deployment, and monitoring of AI models.
Implement CI/CD pipelines for streamlined model deployment and scaling.
Utilize tools such as Docker, Kubernetes, and Git to build and manage AI pipelines.
Apply infrastructure as code (IaC) principles, employing tools like Terraform or CloudFormation.
Implement monitoring and logging tools to ensure AI model performance and reliability.
Collaborate with software engineering and operations teams for efficient AI model integration and deployment.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
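For illustration alongside this posting (not part of EY's description): a minimal sketch of the kind of similarity search the responsibilities mention, using plain NumPy cosine similarity over precomputed embeddings. The embed() call and example corpus are hypothetical placeholders; a real system would typically use an embedding model plus a vector store such as Redis rather than a brute-force scan.

```python
# Minimal cosine-similarity retrieval sketch (illustrative only).
# Assumes `embed` is some sentence-embedding function returning fixed-size vectors,
# e.g. a Hugging Face sentence transformer in a real project.
import numpy as np

def cosine_similarity(query: np.ndarray, docs: np.ndarray) -> np.ndarray:
    """Cosine similarity between a query vector and each row of a document matrix."""
    query = query / np.linalg.norm(query)
    docs = docs / np.linalg.norm(docs, axis=1, keepdims=True)
    return docs @ query

def top_k(query_vec: np.ndarray, doc_vecs: np.ndarray, docs: list[str], k: int = 3):
    """Return the k documents whose embeddings are closest to the query."""
    scores = cosine_similarity(query_vec, doc_vecs)
    best = np.argsort(scores)[::-1][:k]
    return [(docs[i], float(scores[i])) for i in best]

# Hypothetical usage:
# doc_vecs = np.vstack([embed(d) for d in docs])
# results = top_k(embed("customer churn drivers"), doc_vecs, docs)
```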
Posted 1 week ago
8.0 years
0 Lacs
Hyderābād
On-site
Req ID: 331368

NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now.

We are currently seeking a Systems Integration Advisor to join our team in Hyderabad, Telangana (IN-TG), India (IN). NTT DATA Services is seeking a Java Full Stack Developer to join our team in Hyderabad.

Required Qualifications:
8+ years of software engineering experience, including hands-on application development using Java and distributed technologies, both on-premises and in the cloud.
Strong in Java/JEE, the Spring framework, JavaScript, and RESTful web services.
Sound knowledge of UI frameworks.
Strong understanding of microservices and associated design patterns.
Experience with the latest unit testing tools, including JUnit.
Experience with best-in-class version control tools such as GitHub.
Experience with build tools like Maven or Gradle.
Working knowledge of both SQL and NoSQL databases.
Knowledge of messaging systems such as MQ, Solace, and Kafka.
Experience identifying and remediating security vulnerabilities.
Well versed in test-driven development and knowledgeable about associated tools and practices.
Experience working with globally distributed teams in Agile scrums.
Strong verbal and written communication skills.

Desired Qualifications:
Domain knowledge in the home lending or consumer lending space.
Well versed in DevOps concepts.
Aware of cloud-native application development best practices and design patterns.

#LI-PAS

About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com

NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here. If you'd like more information on your EEO rights under the law, please click here. For Pay Transparency information, please click here.
Posted 1 week ago
4.5 years
5 - 9 Lacs
Gurgaon
On-site
Know your role in Deloitte

About Deloitte
“Deloitte” is the brand under which tens of thousands of dedicated professionals in independent firms throughout the world collaborate to provide audit, consulting, financial advisory, risk management, and tax services to selected clients. These firms are members of Deloitte Touche Tohmatsu Limited (DTTL), a UK private company limited by guarantee. Each member firm provides services in a particular geographic area and is subject to the laws and professional regulations of the particular country or countries in which it operates. DTTL and each DTTL member firm are separate and distinct legal entities. Each DTTL member firm is structured differently in accordance with national laws, regulations, customary practice, and other factors and may secure the provision of professional services in their territories through subsidiaries, affiliates, and/or other entities. In the United States, Deloitte LLP is the member firm of DTTL. Services are primarily provided by the subsidiaries of Deloitte LLP, including Deloitte & Touche LLP, Deloitte Consulting LLP, Deloitte Financial Advisory Services LLP, and Deloitte Tax LLP. In India, Deloitte LLP has the following indirect subsidiaries: Deloitte & Touche Assurance & Enterprise Risk Services India Private Limited, Deloitte Consulting India Private Limited, Deloitte Financial Advisory Services India Private Limited, Deloitte Tax Services India Private Limited, and Deloitte Support Services India Private Limited. These entities primarily render services to their respective U.S.-based parents.

U.S. India: Deloitte Tax Services India Private Limited
Deloitte Tax Services India Private Limited (“Deloitte Tax Services India”) commenced operations in January 2002. Since then, nearly all of the Deloitte Tax LLP (“Deloitte Tax”) U.S. service lines and regions have developed their affiliations in India. Deloitte Tax offers you immense opportunities to learn and practice U.S. taxation, a much sought-after career option.

Our Vision: To be the dominant global provider of tax services, delivering unmatched value to our clients and our people through sustained relationships.

U.S. India Tax — TTO
Deloitte Tax LLP’s Tax Transformation (TTO) team is responsible for the design, development and deployment of tax tools for the US Tax Practice. The professionals on the TTO team are focused on assisting Deloitte Tax in its efforts to deliver quality, comprehensive, value-added, and efficient client services using tax tools. The team consults and executes on a wide range of initiatives involving tax technology management, tool development and implementation.

Job description
Function: Deloitte Tax Services India Private Limited
Service line: TAX TTO
Job level: Consultant/Senior Consultant
.NET Angular Full Stack Developer
Specific skill set required: C#, .NET & .NET Core, ASP.NET Core, SQL Server, OOP concepts, ASP.NET Web API, Entity Framework 6 or above, Azure, microservices architecture, MongoDB, database performance tuning, applying design patterns, VSTS/Azure DevOps, Agile with Angular
Graduation: BE/B.Tech., M.C.A., M.Sc. Comp. Sc., M.Tech.
Professional qualification:
Work experience: 4.5-9 years

The key job responsibilities include the following:
Participate in requirements analysis.
Collaborate with US and vendor teams to produce software design and architecture.
Write clean, scalable code using .NET programming languages.
Test and deploy applications and systems.
Revise, update, refactor and debug code.
Develop, support and maintain applications and technology solutions.
Ensure that all development efforts meet or exceed client expectations; applications should meet requirements of scope, functionality, and time and adhere to all defined and agreed-upon standards.
Become familiar with all development tools, testing tools, methodologies and processes.
Become familiar with the project management methodology and processes.
Encourage collaborative efforts and camaraderie with onshore and offshore team members.
Demonstrate a strong working understanding of industry best standards in software development and version control.
Ensure the quality and low bug rates of code released into production.
Work on agile projects, participate in daily scrum calls and provide task updates.
During design and key development phases, you may need to work a staggered shift from 2 pm to 11 pm to ensure appropriate overlap of the India and US teams and project deliveries.

Key skills required:
Strong hands-on experience with C#, SQL Server, OOP concepts, and microservices architecture.
At least one year of hands-on experience with .NET Core, ASP.NET Core Web API, SQL, NoSQL, Entity Framework 6 or above, Azure, database performance tuning, applying design patterns, and Agile.
At least two years of hands-on experience with Angular 10+.
Hands-on experience consuming Web APIs from Angular (front-end/back-end integration).
Ability to write reusable libraries.
Excellent communication skills, both oral and written.
Excellent troubleshooting skills and the ability to communicate clearly with US counterparts.

Additional Information/Nice to Have:
MongoDB, NPM and Azure DevOps build/release configuration.
Self-starter with solid analytical and problem-solving skills.
Willingness to work extra hours to meet deliverables.

#CA-TG #CA-HPN

Our purpose
Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Professional development
At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.

Benefits to help you thrive
At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially—and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you.

Recruiting tips
From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Requisition code: 308159
Posted 1 week ago
5.0 years
8 Lacs
Gurgaon
On-site
Sr. Software Engineer (SDE-2) - Node
Pluang Technologies India Private Limited, Gurgaon, Haryana, India
Department: Back End - Local Products
Job posted on: Apr 18, 2025
Employment type: Full Time

About us
Pluang is Indonesia’s leading multi-asset investment platform, offering products such as Crypto, US stocks, mutual funds, and gold. At Pluang, we're on a mission to redefine the way people invest. As one of the fastest-growing fintech platforms in Indonesia, we empower users to achieve financial independence through a seamless, innovative, and secure investment experience. Explore more about us here!

We’re looking for an SDE-2 Node - IDSS to lead the development of scalable microservices for our platform. You’ll solve complex engineering challenges, optimize system architecture, and collaborate with cross-functional teams to build innovative solutions that drive business growth.

What You Will Do
Design and build scalable microservices using Node.js for the IDSS platform.
Lead code reviews and shape team processes to ensure high-quality, efficient development.
Own the product lifecycle, from requirements to production, making a direct impact on business outcomes.
Mentor and guide junior developers, driving innovation and technical excellence.

What We're Looking For
5+ years of Node.js experience, with strong skills in microservices, RESTful APIs, and cloud platforms (AWS/GCP).
Expertise in Express, SQL/NoSQL/Redis databases, and message brokers (Kafka, RabbitMQ).
Strong understanding of data structures, algorithms, and system design.
Proven leadership skills in mentoring teams and driving successful project delivery.

We are an equal-opportunity employer and value diversity at our company. We do not discriminate based on race, religion, colour, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.

Want the inside scoop on our culture, the interview process, and the amazing team at Pluang? Click here to find out! Curious about what it's like to work with us? Get a glimpse of life at Pluang through the eyes of our team! Check out our Instagram!
Posted 1 week ago
0 years
8 Lacs
Gurgaon
On-site
Sr. Software Engineer (SDE-2) - DevOps
Pluang Technologies India Private Limited, Gurgaon, Haryana, India
Department: DevOps & Security
Job posted on: Aug 05, 2025
Employment type: Full Time

About us
Pluang is Indonesia’s leading multi-asset investment platform, offering products such as Crypto, US stocks, mutual funds, and gold. At Pluang, we're on a mission to redefine the way people invest. As one of the fastest-growing fintech platforms in Indonesia, we empower users to achieve financial independence through a seamless, innovative, and secure investment experience. Explore more about us here!

We’re looking for a passionate Sr. Software Engineer (SDE-2) - DevOps to optimize our cloud infrastructure and service mesh solutions. You’ll solve operational challenges, drive automation at scale, and collaborate to build secure, scalable systems. Expertise in cloud platforms, containerization, IaC, monitoring tools, and incident management will be essential for ensuring system reliability and supporting rapid growth.

What You Will Do
Automate and manage scalable, reliable infrastructure using AWS, GCP, service mesh (Istio, Linkerd, Kong), Ansible, Bash, Python, and Linux.
Optimize system performance, troubleshoot issues, and collaborate with development teams to enhance infrastructure utilization, reduce costs, and ensure system reliability.
Design and implement robust CI/CD pipelines using Jenkins to enable efficient and reliable deployments.
Lead incident management efforts, develop response plans, and continuously improve operational processes to support Pluang’s growth.
Develop and maintain infrastructure-as-code (IaC) practices with Terraform for efficient cloud resource provisioning and management.

What We're Looking For
Expertise in cloud platforms (AWS, GCP), service mesh (Istio, Linkerd, Kong), Docker, Kubernetes, and CI/CD tools (Jenkins).
Proficiency in monitoring and logging tools like ELK, Prometheus, and Grafana for end-to-end observability of critical APIs and workloads (a short sketch follows this posting).
Solid experience with Terraform (IaC), Python for automation, and troubleshooting to optimize infrastructure and applications.
Knowledge of RDBMS (PostgreSQL) and NoSQL (MongoDB) databases for performance and reliability.
Experience designing scalable and cost-optimized systems with secure, seamless service integrations.
Familiarity with microservices architecture and API management concepts.
Fintech experience is a plus.

We are an equal-opportunity employer and value diversity at our company. We do not discriminate based on race, religion, colour, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.

Want the inside scoop on our culture, the interview process, and the amazing team at Pluang? Click here to find out! Curious about what it's like to work with us? Get a glimpse of life at Pluang through the eyes of our team! Check out our Instagram!
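Editorial illustration (not part of the posting): one way a DevOps engineer might script the end-to-end observability mentioned above is by polling Prometheus' standard HTTP query API from Python. The Prometheus address, PromQL expression, and CPU threshold below are assumptions, not Pluang's configuration.

```python
# Sketch: poll Prometheus' HTTP API for high per-pod CPU and flag offenders.
# The Prometheus URL, PromQL expression, and threshold are illustrative assumptions.
import requests

PROM_URL = "http://prometheus.monitoring.svc:9090"  # hypothetical in-cluster address
QUERY = 'sum by (pod) (rate(container_cpu_usage_seconds_total[5m]))'
CPU_THRESHOLD = 0.8  # cores

def high_cpu_pods() -> dict[str, float]:
    resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": QUERY}, timeout=10)
    resp.raise_for_status()
    results = resp.json()["data"]["result"]
    # Each result carries a label set and a [timestamp, value] sample.
    return {
        r["metric"].get("pod", "unknown"): float(r["value"][1])
        for r in results
        if float(r["value"][1]) > CPU_THRESHOLD
    }

if __name__ == "__main__":
    for pod, cpu in high_cpu_pods().items():
        print(f"ALERT: {pod} using {cpu:.2f} cores over the last 5m")
```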
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Role Description

Role Proficiency: Ensuring adherence to test practices and processes to improve test coverage.

Outcomes:
Create test estimates and schedules.
Identify business processes, conduct risk analysis and ensure test coverage.
Ensure adherence to processes and standards.
Produce test results, defect reports, test logs and reports as evidence of testing.
Publish RCA reports and preventive measures.
Report progress of testing.
Contribute to revenue savings for the client by suggesting alternate methods.
Quality of deliverables.

Measures of Outcomes:
Test script creation and execution productivity.
Defect leakage metrics (% of defects leaked, % of UAT defects and % of production defects).
% of test case reuse.
Test execution coverage.
Defect acceptance ratio.
Test review efficiency.

Outputs Expected:
Test design, development and execution: Participate in reviews, walkthroughs and demos, obtain sign-off from stakeholders for test design, and prepare test summary reports for modules/features.
Requirements management: Analyse and prioritize requirements/user stories, identify gaps and create workflow diagrams based on them.
Manage project: Participate in test management.
Domain relevance: Identify business processes, conduct risk analysis and ensure test coverage.
Estimate: Prepare estimates and schedules; identify dependencies.
Knowledge management: Consume, contribute and review (best practices, lessons learned, retrospectives).
Test design and execution: Test plan preparation, test case/script creation, test execution.
Test and defect management: Conduct root cause and trend analysis of defects.
Test planning: Identify test scenarios with an understanding of systems, interfaces and the application; identify end-to-end business-critical scenarios with minimal support; create/review test scenarios and prepare the RTM.

Skill Examples:
Ability to create and manage a test plan.
Ability to prepare schedules based on estimates.
Ability to track and report progress.
Ability to identify test scenarios and prepare the RTM.
Ability to analyse requirements/user stories and prioritize testing.
Ability to carry out RCA.
Ability to capture and report metrics.

Knowledge Examples:
Knowledge of estimation techniques.
Knowledge of testing standards.
Knowledge of identifying the scope of testing.
Knowledge of RCA techniques.
Knowledge of test design techniques.
Knowledge of test methodologies.

Additional Comments:
The following skill sets are mandatory for this SDET role:
1. Manual Testing
2. UI Automation & BDD – e.g. Selenium/Playwright, Cucumber/Behave
3. Python (Intermediate)
4. REST API Testing – AI/ML integrations (a short sketch follows this posting)
5. Performance & Load Testing
6. Cloud Experience – Azure / AWS / GCP
7. CI/CD – GitHub Actions
8. Databases (both SQL and NoSQL): SQL – PostgreSQL/MySQL; NoSQL – MongoDB/Cosmos DB

Skills: Manual Testing, UI Automation, REST API Integration, Python
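For illustration (not part of the role description): a minimal REST API test sketch in the pytest style, matching the Python and REST API testing skills listed above. The base URL, endpoints, and expected response shape are hypothetical.

```python
# Sketch of REST API tests using pytest conventions and the requests library.
# The service URL, payload, and response schema are illustrative assumptions.
import requests

BASE_URL = "https://api.example.com"  # placeholder service under test

def test_create_order_returns_201_and_echoes_payload():
    payload = {"sku": "ABC-123", "quantity": 2}
    resp = requests.post(f"{BASE_URL}/orders", json=payload, timeout=5)
    assert resp.status_code == 201
    body = resp.json()
    assert body["sku"] == payload["sku"]
    assert body["quantity"] == payload["quantity"]
    assert "order_id" in body

def test_unknown_order_returns_404():
    resp = requests.get(f"{BASE_URL}/orders/does-not-exist", timeout=5)
    assert resp.status_code == 404
```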
Posted 1 week ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
What you will do here:
•Work with multiple product development teams of engineers to design, develop, and test products and components using an agile, scrum methodology.
•Be a highly motivated self-starter who loves ownership and responsibility while working in a collaborative and interdependent team environment.
•Create and provide innovative solutions that meet not only functional requirements, but also performance, scalability, and reliability requirements.
•Continue to build an effective development team: set goals, mentor, and conduct performance reviews of team members.
•Deliver quality applications on time and on budget.
•Manage and execute against project plans and delivery commitments.
•Follow software engineering best practices, audit the process, and improve the standards of those practices.
•Build, guide and coach the Scrum team on how to use Agile practices and principles to deliver high-quality products; facilitate and support all scrum events.
•Ensure security, availability, resilience, and scalability of solutions developed by the teams.
•Drive and manage the bug triage process, represent the development team in project meetings to ensure an efficient testing and bug-fixing process, and be an effective advocate for the development group.

What we’re looking for:
•Experience leading a team of 5 or more engineers.
•5+ years’ experience with technologies like .NET, PHP, Python, Java, Kotlin, or Scala.
•5+ years’ experience with relational DBs like SQL Server or MySQL.
•Hands-on experience with Docker or Kubernetes.
•Experience with frontend frameworks like React JS / Angular JS.
•Experience in RESTful services, microservice architecture and serverless architecture (a minimal sketch follows this posting).
•Experience working with cloud environments like AWS / Azure.
•Experience working within an Agile/Scrum and CI/CD environment.
•Experience working with version control using GitLab / GitHub.
•Experience in the design of new systems or the redesign of existing systems to meet business requirements, changing needs, or newer technology.
•Experience or knowledge of one or more front-end frameworks will be a strong plus.
•Experience or knowledge of a NoSQL database like MongoDB will be a plus.
•Experience or knowledge of AI/machine learning is a plus.
•Master’s degree in Computer Science, Computer Engineering, or a related technical discipline.
•Ability to handle multiple competing priorities in a fast-paced environment.
•Excellence in technical communication with peers and non-technical cohorts.
•Knowledge of software engineering best practices including coding standards, code reviews, source control management, build processes, testing, and operations.

Inclusion
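For illustration only, since the posting lists Python and RESTful microservices among its technologies: a minimal Flask-based microservice sketch with a health endpoint suitable for container probes. The routes, payloads, and in-memory store are assumptions, not this employer's stack.

```python
# Minimal REST microservice sketch in Flask (illustrative only; the posting does
# not prescribe a framework, and the routes/payloads here are assumptions).
from flask import Flask, jsonify, request

app = Flask(__name__)
_tasks: dict[int, dict] = {}   # in-memory store; a real service would use SQL/NoSQL

@app.get("/health")
def health():
    """Liveness endpoint for container orchestrators such as Kubernetes."""
    return jsonify(status="ok")

@app.post("/tasks")
def create_task():
    """Create a task from a JSON body and return it with a generated id."""
    body = request.get_json(force=True)
    task_id = len(_tasks) + 1
    _tasks[task_id] = {"id": task_id, "title": body.get("title", "untitled")}
    return jsonify(_tasks[task_id]), 201

@app.get("/tasks/<int:task_id>")
def get_task(task_id: int):
    """Fetch a task by id, returning 404 if it does not exist."""
    task = _tasks.get(task_id)
    return (jsonify(task), 200) if task else (jsonify(error="not found"), 404)

if __name__ == "__main__":
    app.run(port=8080)
```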
Posted 1 week ago
10.0 years
2 - 4 Lacs
Gurgaon
On-site
Cloud Architecture Design:
Lead the design and development of end-to-end cloud architectures for various applications and services, aligning with business objectives and technical requirements.
Evaluate and select appropriate cloud services and technologies (e.g., compute, storage, networking, databases, serverless, containers) to meet specific project needs.
Develop detailed architecture diagrams, technical specifications, and documentation for cloud solutions.
Ensure designs adhere to best practices for security, reliability, performance, cost optimization, and operational efficiency (Well-Architected Framework principles).

Technical Leadership & Guidance:
Provide technical leadership and guidance to development and operations teams on cloud native development, deployment, and operational best practices.
Act as a subject matter expert for cloud technologies, staying up-to-date with industry trends, new services, and emerging technologies.
Mentor junior architects and engineers, fostering a culture of continuous learning and improvement.

Stakeholder Collaboration:
Collaborate closely with business analysts, product owners, and other stakeholders to understand business requirements and translate them into technical solutions.
Present complex technical concepts to non-technical audiences clearly and concisely.
Work with security teams to ensure cloud solutions meet security compliance and regulatory requirements.

Implementation & Deployment Support:
Oversee the implementation of cloud architectures, providing technical oversight and troubleshooting support during development and deployment phases.
Define and implement CI/CD pipelines for automated deployment of cloud infrastructure and applications using Infrastructure as Code (IaC) tools (e.g., Terraform, CloudFormation, ARM templates).
Participate in code reviews and ensure adherence to architectural standards.

Cost Optimization & Governance:
Monitor and optimize cloud resource utilization and costs, identifying opportunities for efficiency improvements (a short sketch follows this posting).
Develop and enforce cloud governance policies, standards, and best practices.
Participate in capacity planning and forecasting for cloud resources.

Troubleshooting & Problem Solving:
Provide expert-level support for complex cloud-related issues, performing root cause analysis and implementing effective solutions.

Qualifications:
Education: Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field.
Experience: 10+ years of progressive experience in IT with at least 5-7 years of hands-on experience in designing and implementing cloud solutions on one or more major cloud platforms (AWS, Azure, GCP).
Proven experience as a Solution Architect or similar role in an enterprise environment.
Strong experience with Infrastructure as Code (IaC) tools (Terraform, CloudFormation, ARM templates).
Experience with containerization technologies (Docker, Kubernetes) and serverless computing.
Solid understanding of networking concepts, security principles, and database technologies in a cloud context.
Experience with CI/CD pipelines and DevOps practices.
Familiarity with agile development methodologies.

Technical Skills (proficient in at least one major cloud platform and experienced in others):
AWS: EC2, S3, RDS, Lambda, VPC, IAM, SQS, SNS, ECS, EKS, CloudFormation, API Gateway.
Azure: Virtual Machines, Storage Accounts, SQL Database, Azure Functions, Virtual Networks, Azure AD, AKS, Azure Resource Manager.
GCP: Compute Engine, Cloud Storage, Cloud SQL, Cloud Functions, VPC, IAM, GKE, Cloud Deployment Manager.
Programming Languages: Proficiency in at least one scripting language (e.g., Python, PowerShell, Bash) for automation.
Databases: Relational (e.g., PostgreSQL, MySQL, SQL Server) and NoSQL (e.g., MongoDB, Cassandra, DynamoDB).
Operating Systems: Linux and Windows.

Certifications (Preferred):
AWS Certified Solutions Architect – Professional
Azure Solutions Architect Expert
Google Cloud Professional Cloud Architect

Soft Skills:
Excellent analytical and problem-solving skills.
Strong written and verbal communication skills, with the ability to articulate complex technical concepts to diverse audiences.
Exceptional interpersonal and collaboration skills.
Ability to work independently and as part of a team in a fast-paced, dynamic environment.
Strong leadership and mentoring capabilities.
Proactive and self-motivated with a strong desire to learn and grow.
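Editorial illustration of the cost-optimization responsibility above (not part of the posting): a boto3 sketch that flags running EC2 instances with low average CPU as candidates for rightsizing. The region, threshold, and single-metric heuristic are simplifying assumptions.

```python
# Sketch: flag EC2 instances whose average CPU over the last week is low,
# as a starting point for a rightsizing/cost review. Region, threshold, and the
# use of average CPUUtilization alone are simplifying assumptions.
from datetime import datetime, timedelta, timezone
import boto3

REGION = "ap-south-1"            # assumption
CPU_THRESHOLD_PERCENT = 10.0     # assumption

ec2 = boto3.client("ec2", region_name=REGION)
cloudwatch = boto3.client("cloudwatch", region_name=REGION)

def underutilized_instances() -> list[tuple[str, float]]:
    now = datetime.now(timezone.utc)
    flagged = []
    pages = ec2.get_paginator("describe_instances").paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )
    for page in pages:
        for reservation in page["Reservations"]:
            for inst in reservation["Instances"]:
                stats = cloudwatch.get_metric_statistics(
                    Namespace="AWS/EC2",
                    MetricName="CPUUtilization",
                    Dimensions=[{"Name": "InstanceId", "Value": inst["InstanceId"]}],
                    StartTime=now - timedelta(days=7),
                    EndTime=now,
                    Period=3600,
                    Statistics=["Average"],
                )
                points = stats["Datapoints"]
                if points:
                    avg = sum(p["Average"] for p in points) / len(points)
                    if avg < CPU_THRESHOLD_PERCENT:
                        flagged.append((inst["InstanceId"], round(avg, 2)))
    return flagged

if __name__ == "__main__":
    for instance_id, avg_cpu in underutilized_instances():
        print(f"{instance_id}: avg CPU {avg_cpu}% over 7 days, review instance size")
```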
Posted 1 week ago
0 years
3 - 8 Lacs
Gurgaon
On-site
Engineering Lead - Java
Pluang Technologies India Private Limited, Gurgaon, Haryana, India
Department: Back End - Crypto
Job posted on: Aug 05, 2025
Employment type: Full Time

About us
Pluang is Indonesia’s leading multi-asset investment platform, offering products such as Crypto, US stocks, mutual funds, and gold. At Pluang, we're on a mission to redefine the way people invest. As one of the fastest-growing fintech platforms in Indonesia, we empower users to achieve financial independence through a seamless, innovative, and secure investment experience. Explore more about us here!

We’re looking for an innovative Engineering Lead - Java to lead the charge in building scalable, high-performance systems. You’ll tackle complex challenges, shape the platform’s architecture, and inspire a team to deliver cutting-edge solutions that drive business success and innovation.

What You Will Do
Lead the development of scalable, high-performance systems using Java, Spring Boot, and microservices.
Shape the platform’s architecture to ensure optimal performance and reliability.
Mentor and inspire a talented team, driving innovation and best practices.
Solve complex challenges, delivering solutions that fuel growth and product success.
Make a direct impact, lead impactful projects, and grow your career in a dynamic environment.

What We're Looking For
Strong foundation in data structures, algorithms, and microservices, with experience building scalable, high-performance systems.
Expert in Java (Spring Boot, Hibernate, JUnit), RESTful APIs, and continuous integration practices.
Hands-on experience with Redis, Kafka, and RDBMS/NoSQL databases (PostgreSQL, MongoDB) for real-time data processing.
Solid understanding of unit/integration testing with frameworks like JUnit, and of CI/CD pipelines.
Fintech experience (preferred) and familiarity with BDD frameworks like Gauge.

We are an equal-opportunity employer and value diversity at our company. We do not discriminate based on race, religion, colour, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.

Want the inside scoop on our culture, the interview process, and the amazing team at Pluang? Click here to find out! Curious about what it's like to work with us? Get a glimpse of life at Pluang through the eyes of our team! Check out our Instagram!
Posted 1 week ago
2.0 years
4 - 6 Lacs
Gurgaon
On-site
Job Summary:
We are looking for a talented Software Developer with 2-5 years of experience to join our development team. The ideal candidate should have a strong understanding of React.js, JavaScript (ES6+), TypeScript, and modern frontend development practices. You will be responsible for developing user-friendly web applications, integrating APIs, and ensuring high performance and responsiveness of applications.

Roles & Responsibilities:
Develop responsive and interactive web applications using React.js, Redux, and TypeScript.
Implement reusable UI components while maintaining modularity and scalability.
Optimize web applications for performance, accessibility, and cross-browser compatibility.
Integrate RESTful APIs, GraphQL, and third-party services into web applications.
Collaborate with UX/UI designers to create seamless user experiences.
Work closely with backend developers to define API contracts and ensure efficient data flow.
Write clean, maintainable, and well-documented code following best coding practices.
Troubleshoot bugs, optimize performance, and enhance UI functionality.
Implement unit and integration tests using testing libraries.
Stay updated with the latest trends, tools, and best practices in React.js and frontend development.
Participate in code reviews and provide constructive feedback to other team members.

Required Skills & Qualifications:
Technical Skills:
Strong proficiency in React.js, Redux, and React Hooks.
Experience with JavaScript (ES6+), TypeScript, and Next.js (preferred but not mandatory).
Good understanding of HTML5, CSS3, SCSS, and CSS-in-JS libraries (Styled Components, Tailwind CSS, etc.).
Hands-on experience with RESTful APIs, GraphQL, and WebSockets.
Good knowledge of relational databases, SQL, and NoSQL (MongoDB).
Knowledge of state management solutions such as Redux and Context API.
Familiarity with component libraries like Material-UI, Ant Design, or Chakra UI.
Experience with Webpack, Babel, Vite, and other frontend build tools.
Strong debugging and troubleshooting skills using Chrome DevTools and React DevTools.
Experience in unit testing and end-to-end testing using testing libraries.
Knowledge of authentication and authorization using JWT, OAuth, Firebase, or Auth0.
Familiarity with CI/CD pipelines, Docker, and deployment processes is a plus.

Soft Skills:
Strong problem-solving and analytical skills.
Excellent communication and collaboration skills.
Ability to work in an agile development environment (Scrum/Kanban).
Self-motivated with a passion for learning new technologies.
Ability to handle multiple tasks and prioritize efficiently.

Candidate Profile:
Experience: 2-5 years of professional experience in frontend development using React.js.
Education: Bachelor's or Master’s degree in Computer Science, Engineering, or a related field.
Portfolio/GitHub: Candidates with an active GitHub profile or a portfolio showcasing React.js projects will be preferred.

Preferred Qualifications (Nice to Have):
Experience with server-side rendering (SSR) and static site generation (SSG) using Next.js.
Knowledge of micro-frontend architecture and modular frontend development.
Experience with WebSockets and real-time applications.
Understanding of Progressive Web Apps (PWA) and Service Workers.

Work Environment & Benefits:
Opportunity to work on exciting and cutting-edge technologies.
Competitive salary and performance-based incentives.
Health insurance and other benefits.
Learning and development programs, including certifications and hackathons.
Posted 1 week ago
0 years
8 Lacs
Gurgaon
On-site
Software Engineer (SDE-1) - DevOps
Pluang Technologies India Private Limited, Gurgaon, Haryana, India
Department: Engineering
Job posted on: Sep 05, 2024
Employment type: Full Time

Position Description:
At Pluang we are looking for a smart DevOps Engineer who is passionate about technology and loves coding and designing systems. You’ll ensure the stability and efficiency of our technical infrastructure by identifying recurring issues and creating automation scripts. To succeed, you’ll need hands-on experience with cloud platforms (AWS/GCP), containerization, and Linux systems, plus strong scripting skills. Self-motivation and fintech experience are advantageous.

Founded in 2018, Pluang pioneered the concept of making saving and investing more inclusive – more affordable and accessible to more people through our micro-investment products. Our mission today remains the same – we measure our success by the value we deliver to our customers, whether by structuring and/or providing access to more financial products through which our customers can get closer to their financial goals. Visit the career section on our website for more details.

What you’ll be doing broadly:
Provide technical expertise and guidance to developers during issue resolution, helping them troubleshoot problems, suggest solutions, and ensure quick and effective resolution of technical challenges.
Create and build automation using Bash, Python, etc.
Create and maintain Ansible playbooks for automation.
Work hands-on with Kubernetes.
Automate CI/CD with Jenkins and Argo CD.
Administer and troubleshoot Linux-based systems.
Troubleshoot problems across a wide array of services and functional areas.
Work closely with development teams to optimize application performance.
Identify means to optimize infrastructure utilization and reduce costs.
Demonstrate expertise in managing and optimizing infrastructure on AWS.
Collaborate with cross-functional teams to ensure seamless integration with cloud services.

What you need to be successful in the role:
Sit with teams and design end-to-end monitoring of the APIs and relevant workloads that are critical (a short health-check sketch follows this posting).
Hands-on experience with cloud platforms such as AWS or other private cloud environments.
Understanding of API management, rate limiting, authentication, and monitoring.
Strong understanding of microservices principles and best practices.
Strong experience with container technologies (Docker/Kubernetes) and in containerizing applications.
Knowledge of monitoring and logging stacks like ELK, EFK, Prometheus, and Grafana.
Strong knowledge of Linux distributions (Ubuntu, CentOS, and RHEL).
System troubleshooting and problem-solving skills across platform and application domains.
Proficiency in a programming or scripting language such as shell script or Python.
Experience with infrastructure-as-code (e.g. Terraform).
Experience with continuous integration and with unit and integration testing.
Experience with RDBMS and NoSQL databases - PostgreSQL, MongoDB.
Ability to work independently with minimal direction; self-starter/self-motivated.
Fintech experience is advantageous.

Work Environment Details:
Attractive compensation package - competitive salary and a flexible bonus scheme.
We are always looking for ways to promote and inspire innovation. So, come build your dream with us.
Individual career path - management and technical career growth, enhanced by a learning and development program, regular performance assessment, and teams of multi-national IT professionals.
Healthy work environment - company-sponsored medical program, food and beverage program, open communication.
Friendly policies to support work-life balance, team building, and celebrations.

We are an equal-opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
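For illustration (not from Pluang): a small Python health-check loop of the sort the "end-to-end monitoring of the APIs" point refers to. The endpoint list, retry policy, and print-based alerting are placeholders for real paging or Slack integration.

```python
# Sketch: a basic health-check loop a DevOps engineer might script before wiring
# proper alerting. The endpoint list and retry/backoff policy are assumptions.
import time
import requests

ENDPOINTS = [
    "https://api.example.com/healthz",       # placeholder services
    "https://payments.example.com/healthz",
]

def check(url: str, retries: int = 3, backoff_seconds: float = 2.0) -> bool:
    """Return True if the endpoint answers HTTP 200 within the retry budget."""
    for attempt in range(1, retries + 1):
        try:
            if requests.get(url, timeout=5).status_code == 200:
                return True
        except requests.RequestException:
            pass  # network error: fall through to backoff and retry
        time.sleep(backoff_seconds * attempt)
    return False

if __name__ == "__main__":
    for url in ENDPOINTS:
        status = "UP" if check(url) else "DOWN"
        print(f"{status}: {url}")  # a real setup would page or post to Slack instead
```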
Posted 1 week ago
6.0 years
4 - 7 Lacs
Gurgaon
On-site
DESCRIPTION AWS Sales, Marketing, and Global Services (SMGS) is responsible for driving revenue, adoption, and growth from the largest and fastest growing small- and mid-market accounts to enterprise-level customers including public sector Are you a Cloud Consultant who has hands-on experience with building cloud-native applications? Would you like to work with our customers to help them architect, develop and re-engineer applications to fully leverage the AWS Cloud? Do you like to work on a variety of latest technology stack, business-critical projects at the forefront of application development and cloud technology adoption? AWS ProServe India LLP is looking for an experienced cloud consultant, you will work with our internal customers in architecting, developing and re-engineering applications that can fully leverage the AWS Cloud in India. You will work on a variety of game changing projects, at the forefront of application development and cloud technology adoption. Achieving success will require coordination across many internal AWS teams and external AWS Partners, with impact and visibility at the highest levels of the company. In order for applications to be cloud optimized they need to be architected correctly enabling them to reap the benefits of elasticity, horizontal scalability, automation and high availability. On the AWS platform services such as Amazon EC2, Auto Scaling, Elastic Load Balancing, AWS Elastic Beanstalk, Serverless Architectures, Amazon Elastic Container Services to name just a few, provide opportunities to design and build cloud ready applications. Key job responsibilities We are looking for hands on application developers with: Full stack app developer with hand-on experience in design and development front-end and back-end for web applications, APIs, microservices, and data integrations Proficiency in at least one programming language such as Java, Python, Go (Golang), or JavaScript/TypeScript, along with practical experience in modern frameworks and libraries like Angular, ReactJS, Vue.js, or Node.js. Working knowledge of AWS services, experience with both SQL and NoSQL databases, and familiarity with modern communication protocols such as gRPC, WebSockets, and GraphQL. Knowledge of cloud-native design patterns, including microservices architecture and event-driven systems. Demonstrated experience building scalable and highly available applications on AWS, leveraging services such as Lambda, ECS, API Gateway, DynamoDB, S3, etc. Preferred experience in optimizing cloud-based architectures for scalability, security, and high performance. Experience working in Agile development environments, with a strong focus on iterative delivery and continuous improvement. Ability to advise on and implement AWS best practices across application development, deployment, and monitoring About the team Diverse Experiences Amazon values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying. Why AWS Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses. 
Work/Life Balance
We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve in the cloud.

Inclusive Team Culture
Here at AWS, it’s in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empowers us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (diversity) conferences, inspire us to never stop embracing our uniqueness.

Mentorship and Career Growth
We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional.

BASIC QUALIFICATIONS
6+ years of experience in application technologies, with 4+ years on any cloud platform.
Programming language experience (e.g. JavaScript frameworks, Java, Python, Golang, etc.) with a good understanding of OOAD principles.
Experience developing microservices architectures and API frameworks supporting application development.
Experience designing architectures for highly available systems that utilize load balancing, horizontal scalability and high availability.
Hands-on experience using AI-powered developer tools.

PREFERRED QUALIFICATIONS
Experience leading the design, development and deployment of business software at scale, or recent hands-on experience with technology infrastructure, network, compute, storage, and virtualization.
Experience and technical expertise (design and implementation) in cloud computing technologies.
A passion for exploring and adopting emerging technologies, with a growth mindset and curiosity to experiment and innovate.
Ability to think strategically across business needs, product strategy, and technical implementation, contributing to high-impact decisions.
Code generation platforms (e.g. GitHub, Amazon Q Developer).
Automated test case generation and AI-assisted code reviews.
Integrating machine learning models into applications, e.g., recommendation engines, NLP-based search, predictive analytics.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
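As a purely illustrative sketch of the serverless pattern this role works with (an AWS Lambda function behind API Gateway persisting records to DynamoDB), the Python handler below shows the general shape; the table name, request fields, and handler logic are assumptions made for the example and are not part of the job description.

```python
import json
import boto3

# Hypothetical table name used only for illustration.
TABLE_NAME = "orders"

# Create clients at module scope so they are reused across warm invocations.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(TABLE_NAME)


def handler(event, context):
    """Minimal AWS Lambda handler invoked via API Gateway (proxy integration).

    Expects a JSON body with an "id" and an optional "item" payload,
    writes it to DynamoDB, and returns an HTTP-style response dict.
    """
    body = json.loads(event.get("body") or "{}")

    if "id" not in body:
        return {"statusCode": 400, "body": json.dumps({"error": "missing id"})}

    # Persist the record; DynamoDB handles horizontal scaling, so the
    # function itself stays stateless and elastic.
    table.put_item(Item={"id": body["id"], "payload": body.get("item", {})})

    return {
        "statusCode": 201,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"id": body["id"], "status": "stored"}),
    }
```

In practice a function like this would usually be declared in infrastructure-as-code (CloudFormation or CDK) together with its API Gateway route and IAM permissions, so the whole stack stays reproducible and highly available.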
Posted 1 week ago
6.0 years
0 Lacs
Delhi, India
On-site
Senior AI Engineer – Full Stack | AWS Architect | Pinecone Agentic AI Specialist Job Summary: We are seeking a highly experienced Senior Full Stack Engineer with a multidisciplinary background to join our team. The ideal candidate will serve in multiple capacities—as an AWS Solutions Architect, Backend Developer (Node.js), Database Engineer, and Data Analyst/Engineer—and will bring deep expertise in agentic AI workflows and Pinecone vector databases. You’ll be responsible for designing and building secure, scalable systems, driving data-driven decisions, and integrating cutting-edge AI agentic workflows using Pinecone for personalized and contextual automation. A hands-on approach to architecture, performance tuning, data modeling, and team collaboration in a Scrum environment is essential. Key Responsibilities: Architecture & Engineering • Architect and develop scalable solutions using AWS cloud services (Lambda, API Gateway, Cognito, RDS, S3, etc.) • Lead the end-to-end backend development using Node.js (REST APIs, microservices) • Build and optimize relational and NoSQL databases (PostgreSQL, DynamoDB, etc.) • Integrate Pinecone vector search with agentic workflows for semantic search, AI decisioning, and personalization Agentic AI & Pinecone • Implement and optimize agent-based workflows using orchestration tools (e.g., LangGraph, LangChain, custom DAGs) • Leverage Pinecone for retrieval-augmented generation (RAG), embedding search, and LLM context enrichment • Collaborate with AI engineers to support GenAI-driven product features Data Engineering & Analytics • Build and maintain data pipelines for structured/unstructured data (ETL/ELT) • Perform data analysis, cleansing, and transformation to support business insights • Optimize data storage, indexing, and access patterns for performance Scrum & Team Collaboration • Participate in daily stand-ups, sprint planning, retrospectives • Work collaboratively with frontend, AI, and product teams to deliver features on time • Contribute to technical documentation, testing, and code reviews Programming Languages & Technologies: • Backend: Node.js (JavaScript/TypeScript), Python (for scripting and AI integration) • Frontend (optional but preferred): React, Next.js • Cloud & DevOps: AWS (Lambda, Cognito, API Gateway, RDS, DynamoDB, S3, IAM, VPC), Docker, Git, CI/CD tools • Databases: PostgreSQL, DynamoDB, Redis • AI & Agentic Tools: Pinecone, LangChain, LangGraph, OpenAI APIs • Data Engineering: SQL, Python (Pandas, NumPy), ETL tools • Other Tools: REST APIs, GraphQL (optional), Terraform or CloudFormation (nice to have) Required Qualifications: • 6+ years of experience as a full stack/backend engineer with a focus on Node.js • 3+ years of experience as an AWS architect, deploying production systems • Proven experience with RDBMS/NoSQL databases and query optimization • Deep understanding of Pinecone, vector databases, and AI agent orchestration • Prior experience as a Data Engineer, including pipeline design, analytics, and modeling • Solid grasp of agentic architectures, RAG, LangChain/LangGraph, or equivalent frameworks • Strong understanding of Scrum/Agile methodologies • Excellent communication skills and ability to lead cross-functional discussions Preferred Skills: • Experience with embedding models, HuggingFace, or LLM fine-tuning • Familiarity with frontend systems (React/Next.js) to support end-to-end development • Knowledge of data privacy and compliance (GDPR, HIPAA, SOC 2)
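To make the retrieval-augmented generation (RAG) workflow described above concrete, here is a minimal, hedged Python sketch of the retrieval step: cosine similarity over an in-memory embedding matrix stands in for a managed vector database such as Pinecone, and the documents, embedding function, and prompt template are all placeholder assumptions rather than anything taken from the posting.

```python
import numpy as np

# Hypothetical document store: in production these vectors would live in a
# managed vector database such as Pinecone, not an in-memory array.
documents = [
    "Refund requests are processed within 5 business days.",
    "Premium users can escalate tickets to a dedicated support queue.",
    "API keys are rotated automatically every 90 days.",
]
doc_embeddings = np.random.rand(len(documents), 384)  # stand-in for real embeddings


def embed(text: str) -> np.ndarray:
    """Placeholder embedding function; a real system would call an
    embedding model (an LLM provider API or sentence-transformers)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.random(384)


def retrieve(query: str, top_k: int = 2) -> list[str]:
    """Return the top_k documents ranked by cosine similarity to the query."""
    q = embed(query)
    sims = doc_embeddings @ q / (
        np.linalg.norm(doc_embeddings, axis=1) * np.linalg.norm(q) + 1e-9
    )
    best = np.argsort(sims)[::-1][:top_k]
    return [documents[i] for i in best]


# Assemble an LLM prompt enriched with retrieved context (the RAG step).
question = "How long do refunds take?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)
```

The same shape carries over to an agentic workflow: each agent step retrieves context, enriches its prompt, and hands the result to the next node in the orchestration graph.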
Posted 1 week ago
1.5 years
3 - 9 Lacs
Gurgaon
On-site
Know your role in Deloitte

About Deloitte
“Deloitte” is the brand under which tens of thousands of dedicated professionals in independent firms throughout the world collaborate to provide audit, consulting, financial advisory, risk management, and tax services to selected clients. These firms are members of Deloitte Touche Tohmatsu Limited (DTTL), a UK private company limited by guarantee. Each member firm provides services in a particular geographic area and is subject to the laws and professional regulations of the particular country or countries in which it operates. DTTL and each DTTL member firm are separate and distinct legal entities. Each DTTL member firm is structured differently in accordance with national laws, regulations, customary practice, and other factors and may secure the provision of professional services in their territories through subsidiaries, affiliates, and/or other entities. In the United States, Deloitte LLP is the member firm of DTTL. Services are primarily provided by the subsidiaries of Deloitte LLP, including: Deloitte & Touche LLP, Deloitte Consulting LLP, Deloitte Financial Advisory Services LLP, and Deloitte Tax LLP. In India, Deloitte LLP has the following indirect subsidiaries: Deloitte & Touche Assurance & Enterprise Risk Services India Private Limited, Deloitte Consulting India Private Limited, Deloitte Financial Advisory Services India Private Limited, Deloitte Tax Services India Private Limited, and Deloitte Support Services India Private Limited. These entities primarily render services to their respective U.S.-based parents.

U.S. India Deloitte Tax Services India Private Limited
Deloitte Tax Services India Private Limited (“Deloitte Tax Services India”) commenced operations in January 2002. Since then, nearly all of the Deloitte Tax LLP (“Deloitte Tax”) U.S. service lines and regions have developed their affiliations in India. Deloitte Tax offers you immense opportunities to learn and practice U.S. Taxation, a much sought-after career option. Our Vision: To be the dominant global provider of tax services, delivering unmatched value to our clients and our people through sustained relationships.

U.S. India Tax — TTO
Deloitte Tax LLP’s Tax Transformation (TTO) team is responsible for the design, development and deployment of tax tools for the US Tax Practice. The professionals on the TTO team are focused on assisting Deloitte Tax in its efforts to deliver quality, comprehensive, value-added, and efficient client services to our clients using tax tools. The team consults and executes on a wide range of initiatives involving tax technology management, tool development and implementation.

Job description
Function: Deloitte Tax Services India Private Limited
Service line: TAX TTO
Job level: BTA, .NET Angular Fullstack Developer
Specific skill set required: C#, .NET & .NET Core, ASP.NET Core, SQL Server, OOP concepts, ASP.NET Web API, Entity Framework 6 or above, Azure, microservices architecture, MongoDB, database performance tuning, applying design patterns, VSTS / Azure DevOps, Agile with Angular
Graduation: BE/B.Tech., M.C.A., M.Sc. Comp Sc., M.Tech.
Professional qualification:
Work experience: 1.5-2.5 years

The key job responsibilities include the following:
Participate in requirements analysis.
Collaborate with US and vendors’ teams to produce software design and architecture.
Write clean, scalable code using .NET programming languages.
Test and deploy applications and systems.
Revise, update, refactor and debug code.
Develop, support and maintain applications and technology solutions. Ensure that all development efforts meet or exceed client expectations. Applications should meet requirements of scope, functionality, and time, and adhere to all defined and agreed-upon standards. Become familiar with all development tools, testing tools, methodologies and processes. Become familiar with the project management methodology and processes. Encourage collaborative efforts and camaraderie with on-shore and off-shore team members. Demonstrate a strong working understanding of industry best standards in software development and version control. Ensure the quality and low bug rates of code released into production. Work on agile projects, participate in daily Scrum calls and provide task updates. During design and key development phases, you might need to work a staggered shift from 2 pm to 11 pm to ensure appropriate overlap of the India and US teams and project deliveries.

Key skills required:
Strong hands-on experience with C#, SQL Server, OOP concepts, and microservices architecture.
At least one year of hands-on experience with .NET Core, ASP.NET Core Web API, SQL, NoSQL, Entity Framework 6 or above, Azure, database performance tuning, applying design patterns, and Agile.
At least 2 years of hands-on experience with Angular 10+.
Hands-on experience consuming web APIs from Angular (front-end/back-end integration).
Skill in writing reusable libraries.
Excellent communication skills, both oral and written.
Excellent troubleshooting skills and the ability to communicate clearly with US counterparts.

Additional Information/Nice to Have:
MongoDB, NPM, and Azure DevOps build/release configuration.
Self-starter with solid analytical and problem-solving skills.
Willingness to work extra hours to meet deliverables.

#CA-TG #CA-HPN

Our purpose
Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Professional development
At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.

Benefits to help you thrive
At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially—and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits.
Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you.

Recruiting tips
From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Requisition code: 308160
Posted 1 week ago
0 years
0 Lacs
Gurgaon
On-site
A Backend Developer with Node.js experience is a software engineer who specializes in building and maintaining the server-side logic and infrastructure of web applications using the Node.js runtime environment. This role is crucial for creating scalable, high-performance applications that handle data, user requests, and integrations with other services.

Job Title: Node.js Backend Developer

Job Summary
We are seeking a talented and experienced Node.js Backend Developer to join our team. The ideal candidate will be responsible for designing, developing, and maintaining the server-side components of our web applications. You will work in a fast-paced environment, collaborating with cross-functional teams to deliver secure, scalable, and high-performance solutions.

Key Responsibilities
Develop and maintain robust, efficient, and scalable backend services and APIs using Node.js.
Design and implement RESTful APIs and/or GraphQL to facilitate seamless communication between the server and client-side applications.
Integrate user-facing elements developed by front-end developers with server-side logic.
Design, implement, and optimize database schemas and queries for both SQL (e.g., MySQL, PostgreSQL) and NoSQL (e.g., MongoDB, Redis) databases.
Implement security best practices, including authentication, authorization (e.g., JWT, OAuth), and data encryption, to protect sensitive data.
Write clean, well-documented, and testable code.
Participate in code reviews to maintain high quality and adherence to best practices.
Troubleshoot and debug applications to identify and resolve performance issues and bugs.
Collaborate with product managers, DevOps, and front-end developers to define project requirements and deliver on business objectives.
Stay up-to-date with the latest trends and technologies in Node.js and the broader web development ecosystem.
Utilize and maintain CI/CD pipelines and work with containerization tools like Docker.

Required Skills and Qualifications
Proven experience as a Backend Developer, with significant experience in Node.js.
Strong proficiency in JavaScript (including ES6+) and a deep understanding of asynchronous programming (callbacks, Promises, async/await).
Expertise with popular Node.js frameworks such as Express.js, NestJS, or Koa.js.
Experience in designing and developing RESTful APIs.
Hands-on experience with database management systems, including schema design, query optimization, and data modeling.
Familiarity with code versioning tools, such as Git.
Knowledge of software development methodologies (e.g., Agile, Scrum).
Excellent problem-solving and analytical skills, with a keen attention to detail.
Strong communication and teamwork abilities.

Preferred Qualifications
Bachelor's degree in Computer Science, Engineering, or a related field.
Experience with microservices architecture.
Familiarity with cloud platforms such as AWS, Azure, or Google Cloud.
Experience with server-side rendering (SSR) frameworks.
A basic understanding of front-end technologies (HTML, CSS, and modern JavaScript frameworks like React, Angular, or Vue.js) to facilitate better collaboration.

Job Type: Full-time
Location Type: In-person
Work Location: In person
Speak with the employer: +91 9773776826
Expected Start Date: 11/08/2025
Posted 1 week ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
This role is for one of Weekday's clients.
Salary range: Rs 1700000 - Rs 2500000 (i.e., INR 17-25 LPA)
Min Experience: 5 years
Location: Chennai (anywhere in Tamil Nadu)
Job Type: Full-time
Notice Period: Immediate joiners or within 30 days preferred
Qualification: Bachelor's degree in Computer Science, Information Technology, or a related field

Requirements

Key Responsibilities & Skills

Core Competencies
Automation: Expertise in automating infrastructure using tools like CDK, CloudFormation, and Terraform
CI/CD Pipelines: Hands-on experience with GitHub Actions, Jenkins, or similar tools for continuous integration and deployment
Monitoring & Observability: Familiarity with tools like OpenTelemetry, Prometheus, and Grafana
API & Load Balancing: Understanding of REST, gRPC, Protocol Buffers, API Gateway, and load balancing techniques

Technical Requirements
Strong foundation in Linux OS and system concepts
Experience handling production issues and ensuring system reliability
Proficiency in at least one programming or scripting language (Python, Go, or Shell)
Familiarity with Docker, microservices architecture, and cloud-native tools like Kubernetes
Understanding of RDBMS/NoSQL databases such as PostgreSQL and MongoDB

Additional Skills
Awareness of security practices, including OWASP and static code analysis
Familiarity with fintech security standards (e.g., PCI-DSS, SOC 2) is a plus
AWS certifications are an added advantage
Knowledge of AWS data services like DMS, Glue, Athena, and Redshift is a bonus
Experience working in start-up environments and with distributed teams is desirable

Key Skills
DevOps | AWS | Terraform | CI/CD | Linux | Docker | Kubernetes | Python | Infrastructure Automation | Monitoring & Observability
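The monitoring and observability stack named above (Prometheus metrics visualised in Grafana) is usually wired in at the application level. Below is a minimal, hedged Python sketch using the official prometheus_client package; the metric names, port, and simulated workload are illustrative assumptions, not requirements from the posting.

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Metric names here are illustrative placeholders.
REQUESTS = Counter("app_requests_total", "Total requests handled", ["endpoint"])
LATENCY = Histogram("app_request_latency_seconds", "Request latency", ["endpoint"])


def handle_request(endpoint: str) -> None:
    """Simulate handling a request while recording Prometheus metrics."""
    with LATENCY.labels(endpoint=endpoint).time():
        time.sleep(random.uniform(0.01, 0.05))  # stand-in for real work
    REQUESTS.labels(endpoint=endpoint).inc()


if __name__ == "__main__":
    # Prometheus scrapes http://localhost:8000/metrics for these values;
    # Grafana can then chart them from the Prometheus data source.
    start_http_server(8000)
    while True:
        handle_request("/health")
```

In a real service the same counters and histograms would be emitted from request middleware, and alerting rules in Prometheus would page on error-rate or latency regressions.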
Posted 1 week ago
3.0 years
1 - 8 Lacs
Mohali
On-site
Company: Chicmic Studios
Job Role: Python Machine Learning & AI Developer
Experience Required: 3+ Years

We are looking for a highly skilled and experienced Python Developer to join our dynamic team. The ideal candidate will have a robust background in developing web applications using Django and Flask, with expertise in deploying and managing applications on AWS. Proficiency in Django Rest Framework (DRF), a solid understanding of machine learning concepts, and hands-on experience with tools like PyTorch, TensorFlow, and transformer architectures are essential.

Key Responsibilities
Develop and maintain web applications using Django and Flask frameworks.
Design and implement RESTful APIs using Django Rest Framework (DRF).
Deploy, manage, and optimize applications on AWS services, including EC2, S3, RDS, Lambda, and CloudFormation.
Build and integrate APIs for AI/ML models into existing systems.
Create scalable machine learning models using frameworks like PyTorch, TensorFlow, and scikit-learn.
Implement transformer architectures (e.g., BERT, GPT) for NLP and other advanced AI use cases.
Optimize machine learning models through advanced techniques such as hyperparameter tuning, pruning, and quantization.
Deploy and manage machine learning models in production environments using tools like TensorFlow Serving, TorchServe, and AWS SageMaker.
Ensure the scalability, performance, and reliability of applications and deployed models.
Collaborate with cross-functional teams to analyze requirements and deliver effective technical solutions.
Write clean, maintainable, and efficient code following best practices.
Conduct code reviews and provide constructive feedback to peers.
Stay up-to-date with the latest industry trends and technologies, particularly in AI/ML.

Required Skills and Qualifications
Bachelor’s degree in Computer Science, Engineering, or a related field.
3+ years of professional experience as a Python Developer.
Proficient in Python with a strong understanding of its ecosystem.
Extensive experience with Django and Flask frameworks.
Hands-on experience with AWS services for application deployment and management.
Strong knowledge of Django Rest Framework (DRF) for building APIs.
Expertise in machine learning frameworks such as PyTorch, TensorFlow, and scikit-learn.
Experience with transformer architectures for NLP and advanced AI solutions.
Solid understanding of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB).
Familiarity with MLOps practices for managing the machine learning lifecycle.
Basic knowledge of front-end technologies (e.g., JavaScript, HTML, CSS) is a plus.
Excellent problem-solving skills and the ability to work independently and as part of a team.
Strong communication skills and the ability to articulate complex technical concepts to non-technical stakeholders.

Contact: 9875952836
Office Location: F273, Phase 8B Industrial Area, Mohali, Punjab
Job Type: Full-time
Pay: ₹16,472.07 - ₹68,456.84 per month
Application Question(s): What is your Native & Current Location? What is your Highest Qualification?
Experience: AI & ML: 2 years (Required)
Work Location: In person
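One of the responsibilities above is building APIs that serve AI/ML models through Django Rest Framework. The sketch below shows one common shape for that, assuming a scikit-learn model already saved to a hypothetical model.joblib file and requests carrying a "features" list; the names, path, and route are illustrative and are not taken from the posting.

```python
# views.py -- minimal sketch of a DRF view serving predictions from a
# pre-trained scikit-learn model loaded at import time.
import joblib
from rest_framework import status
from rest_framework.response import Response
from rest_framework.views import APIView

model = joblib.load("model.joblib")  # hypothetical trained-model artifact


class PredictView(APIView):
    """POST {"features": [1.2, 3.4, ...]} -> {"prediction": ...}"""

    def post(self, request):
        features = request.data.get("features")
        if not isinstance(features, list):
            return Response(
                {"error": "features must be a list of numbers"},
                status=status.HTTP_400_BAD_REQUEST,
            )
        prediction = model.predict([features])[0]
        # Convert NumPy scalars to plain Python types so DRF can serialize them.
        if hasattr(prediction, "item"):
            prediction = prediction.item()
        return Response({"prediction": prediction})
```

The view would be routed in urls.py with path("predict/", PredictView.as_view()); in production the model load would typically sit behind a model registry or a dedicated serving layer such as TorchServe or SageMaker rather than a local file.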
Posted 1 week ago
0 years
1 - 7 Lacs
India
On-site
Apply only if you are open to WFO (Mohali) and a face-to-face interview.

Roles and Responsibilities:
We are currently recruiting an experienced PHP Laravel Developer (Experience: 1-4 years), based in Mohali, to assist in the provision of an efficient and effective recruitment and selection service and to be responsible for the provision of the full range of HR services that meet the needs of our company.

Job Profile and Responsibilities:
Proven software development experience in PHP
Experience in Core PHP using MVC architecture
Ability to write APIs, including RESTful APIs
Understanding of open-source projects like taxi dispatch systems, social, e-commerce, etc.
Demonstrable knowledge of web technologies, including the Laravel and CodeIgniter frameworks
Knowledge of front-end technologies (HTML, CSS, JavaScript, jQuery, etc.)
Good knowledge of relational databases (MySQL/NoSQL, MongoDB), version control tools and developing web services
Experience with common third-party APIs (Google, Facebook, Shopify, Google Maps, etc.)
Knowledge of payment gateway integrations like Razorpay, Stripe, PayPal, etc.
Passion for best design and coding practices and a desire to develop bold new ideas
BS/MS degree in Computer Science, Engineering or a related subject
Write “clean”, well-designed code
Produce detailed specifications
Troubleshoot, test and maintain the core product software and databases to ensure strong optimization and functionality
Contribute to all phases of the development lifecycle
Follow industry best practices
Develop and deploy new features to facilitate related procedures and tools if necessary

Company Address for Interview: Plot no. ITC 10, 3rd Floor, near Municipal building / opposite SBI building / near Sec-67 Market, Building Name: World Tech 67, Sector 67, Sahibzada Ajit Singh Nagar, Punjab 160062 https://g.co/kgs/JuoBR5z
Kindly come for the interview round between Mon - Fri. Timings: 11 AM - 6 PM
Whom to Meet: Gurpreet / Prakash

Job Types: Full-time, Permanent
Pay: ₹15,000.00 - ₹60,000.00 per month
Benefits: Food provided, Paid sick time, Paid time off
Location Type: In-person
Schedule: Day shift, Fixed shift, Monday to Friday, Morning shift, Weekend availability
Work Location: In person
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. EY GDS – Data and Analytics (D&A) – Senior – Senior Data Scientist Role Overview: We are seeking a highly skilled and experienced Senior Data Scientist with a minimum of 3 - 7 years of experience in Data Science and Machine Learning, preferably with experience in NLP, Generative AI, LLMs, MLOps, Optimization techniques, and AI solution Architecture. In this role, you will play a key role in the development and implementation of AI solutions, leveraging your technical expertise. The ideal candidate should have a deep understanding of AI technologies and experience in designing and implementing cutting-edge AI models and systems. Additionally, expertise in data engineering, DevOps, and MLOps practices will be valuable in this role. Responsibilities: Your technical responsibilities: Contribute to the design and implementation of state-of-the-art AI solutions. Assist in the development and implementation of AI models and systems, leveraging techniques such as Language Models (LLMs) and generative AI. Collaborate with stakeholders to identify business opportunities and define AI project goals. Stay updated with the latest advancements in generative AI techniques, such as LLMs, and evaluate their potential applications in solving enterprise challenges. Utilize generative AI techniques, such as LLMs, to develop innovative solutions for enterprise industry use cases. Integrate with relevant APIs and libraries, such as Azure Open AI GPT models and Hugging Face Transformers, to leverage pre-trained models and enhance generative AI capabilities. Implement and optimize end-to-end pipelines for generative AI projects, ensuring seamless data processing and model deployment. Utilize vector databases, such as Redis, and NoSQL databases to efficiently handle large-scale generative AI datasets and outputs. Implement similarity search algorithms and techniques to enable efficient and accurate retrieval of relevant information from generative AI outputs. Collaborate with domain experts, stakeholders, and clients to understand specific business requirements and tailor generative AI solutions accordingly. Conduct research and evaluation of advanced AI techniques, including transfer learning, domain adaptation, and model compression, to enhance performance and efficiency. Establish evaluation metrics and methodologies to assess the quality, coherence, and relevance of generative AI outputs for enterprise industry use cases. Ensure compliance with data privacy, security, and ethical considerations in AI applications. Leverage data engineering skills to curate, clean, and preprocess large-scale datasets for generative AI applications. Requirements: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. A Ph.D. is a plus. Minimum 3-7 years of experience in Data Science and Machine Learning. In-depth knowledge of machine learning, deep learning, and generative AI techniques. Proficiency in programming languages such as Python, R, and frameworks like TensorFlow or PyTorch. Strong understanding of NLP techniques and frameworks such as BERT, GPT, or Transformer models. 
Familiarity with computer vision techniques for image recognition, object detection, or image generation. Experience with cloud platforms such as Azure, AWS, or GCP and deploying AI solutions in a cloud environment. Expertise in data engineering, including data curation, cleaning, and preprocessing. Knowledge of trusted AI practices, ensuring fairness, transparency, and accountability in AI models and systems. Strong collaboration with software engineering and operations teams to ensure seamless integration and deployment of AI models. Excellent problem-solving and analytical skills, with the ability to translate business requirements into technical solutions. Strong communication and interpersonal skills, with the ability to collaborate effectively with stakeholders at various levels. Understanding of data privacy, security, and ethical considerations in AI applications. Track record of driving innovation and staying updated with the latest AI research and advancements. Good to Have Skills: Apply trusted AI practices to ensure fairness, transparency, and accountability in AI models and systems. Utilize optimization tools and techniques, including MIP (Mixed Integer Programming). Drive DevOps and MLOps practices, covering continuous integration, deployment, and monitoring of AI models. Implement CI/CD pipelines for streamlined model deployment and scaling processes. Utilize tools such as Docker, Kubernetes, and Git to build and manage AI pipelines. Apply infrastructure as code (IaC) principles, employing tools like Terraform or CloudFormation. Implement monitoring and logging tools to ensure AI model performance and reliability. Collaborate seamlessly with software engineering and operations teams for efficient AI model integration and deployment. Familiarity with DevOps and MLOps practices, including continuous integration, deployment, and monitoring of AI models. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
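Since the role above leans on pre-trained models exposed through libraries such as Hugging Face Transformers, here is a minimal, hedged sketch of loading a small public model with the pipeline API and generating text; the model choice (distilgpt2), prompt, and generation settings are illustrative assumptions only, not part of the role description.

```python
# Minimal sketch, assuming the `transformers` and `torch` packages are
# installed; "distilgpt2" is chosen only because it is small and public.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")

prompt = "Summarise the key risks of deploying LLMs in the enterprise:"
outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)

# Each output is a dict whose "generated_text" field contains the prompt
# followed by the model's continuation.
print(outputs[0]["generated_text"])
```

In an enterprise pipeline the same call would usually sit behind a managed endpoint (for example an Azure OpenAI deployment) with evaluation metrics applied to the generated text before it reaches downstream systems.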
Posted 1 week ago
3.0 - 4.0 years
0 - 3 Lacs
Mohali
On-site
Job description
Experience: The ideal candidate should have a minimum of 3-4 years of professional experience working with Python, developing web applications and APIs.
Proficiency in Python Frameworks: Strong knowledge and hands-on experience in Python frameworks such as Django, Flask, and FastAPI are essential. The candidate should be adept at building robust and scalable web applications.
Basic AI Knowledge: Familiarity with the fundamentals of Artificial Intelligence and its application in Python would be highly beneficial. Experience with machine learning libraries like TensorFlow or scikit-learn is a plus.
Database Skills: Basic understanding and experience working with databases (SQL and/or NoSQL) are required. Knowledge of ORMs (Object-Relational Mapping) like SQLAlchemy would be advantageous.

Responsibilities for the Python Developer role:
- Collaborate with the development team to design, develop, and deploy high-quality Python-based applications.
- Build and maintain efficient and reusable Python code.
- Implement best practices for software development, including code reviews, automated testing, and documentation.
- Work with databases and integrate them into applications, ensuring optimal performance and data integrity.
- Research and apply AI concepts and techniques to enhance our products and services.

Job Type: Full-time
Benefits: Health insurance
Education: Bachelor's (Preferred)
Experience: Python: 1 year (Preferred); total work: 1 year (Preferred)
Pay: ₹8,086.00 - ₹25,000.00 per month
Work Location: In person
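Because the role above centres on Python web frameworks such as FastAPI, a minimal sketch of a typed FastAPI endpoint is shown below; the route, fields, and tax rate are invented purely for illustration and are not part of the posting.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class Item(BaseModel):
    """Request body validated automatically by Pydantic."""
    name: str
    price: float


@app.post("/items")
def create_item(item: Item) -> dict:
    # Echo the validated payload back with a derived field; real logic
    # would persist the item (for example via SQLAlchemy) instead.
    return {"name": item.name, "price_with_tax": round(item.price * 1.18, 2)}
```

Assuming the file is named main.py, it can be run locally with uvicorn main:app --reload, and FastAPI then serves interactive API docs at /docs.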
Posted 1 week ago