0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description Role & Responsibilities: Database Development and Optimization: Design, develop, and optimize SQL databases, tables, views, and stored procedures to meet business requirements and performance goals. Data Retrieval and Analysis: Write efficient and high-performing SQL queries to retrieve, manipulate, and analyze data. Data Integrity and Security: Ensure data integrity, accuracy, and security through regular monitoring, backups, and data cleansing activities. Performance Tuning: Identify and resolve database performance bottlenecks, optimizing queries and database configurations. Error Resolution: Investigate and resolve database-related issues, including errors, connectivity problems, and data inconsistencies. Cross-Functional Collaboration: Collaborate with cross-functional teams, including Data Analysts, Software Developers, and Business Analysts, to support data-driven decision-making. Maintain comprehensive documentation of database schemas, processes, and procedures. Implement and maintain security measures to protect sensitive data and ensure compliance with data protection regulations. Assist in planning and executing database upgrades and migrations. To be considered for this role, you should have: Relevant work experience as a SQL Developer or in a similar role. Education: Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience). Technical Skills: Proficiency in SQL, including T-SQL for Microsoft SQL Server or PL/SQL for Oracle. Strong knowledge of database design principles, normalization, and indexing. Experience with database performance tuning and optimization techniques. Excellent problem-solving skills and attention to detail. Strong communication and teamwork abilities. Ability to work independently and manage multiple tasks simultaneously. Desirable Skills: Database Management Certifications: Certifications in database management (e.g., Microsoft Certified: Azure Database Administrator Associate) are a plus. Data Warehousing Knowledge: Understanding of data warehousing concepts is a plus. (ref:hirist.tech)
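A minimal illustration of the query-efficiency and indexing ideas described above, using Python's built-in SQLite driver so it runs anywhere; the role itself targets T-SQL/PL-SQL, and the table, columns, and data below are hypothetical.

```python
# Illustrative only: indexing plus a parameterized query, sketched with SQLite.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("""
    CREATE TABLE orders (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL,
        order_date  TEXT NOT NULL,
        amount      REAL NOT NULL
    )
""")
cur.executemany(
    "INSERT INTO orders (customer_id, order_date, amount) VALUES (?, ?, ?)",
    [(1, "2024-01-05", 120.0), (2, "2024-01-06", 80.5), (1, "2024-02-01", 42.0)],
)

# An index on the columns used for filtering and sorting avoids a full table scan
# as the data volume grows.
cur.execute("CREATE INDEX idx_orders_customer_date ON orders (customer_id, order_date)")

# Parameterized query: safe against SQL injection and reusable by the query planner.
cur.execute(
    "SELECT order_id, order_date, amount FROM orders "
    "WHERE customer_id = ? AND order_date >= ? ORDER BY order_date",
    (1, "2024-01-01"),
)
print(cur.fetchall())

# Inspect the plan to confirm the index is actually used.
cur.execute(
    "EXPLAIN QUERY PLAN SELECT order_id FROM orders WHERE customer_id = ? AND order_date >= ?",
    (1, "2024-01-01"),
)
print(cur.fetchall())
conn.close()
```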
Posted 5 days ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Key Responsibilities Enter alphabetic, numeric, or symbolic data from various sources into computer databases, spreadsheets, or other software Type accurately and efficiently, focusing on speed and precision Use keyboards, data recorders, scanners, or other data entry devices for data input Review and verify data for accuracy and completeness by comparing it to source documents Identify and correct errors or inconsistencies in data to maintain integrity Cross-reference information to ensure consistency and correctness across data sources Maintain and update existing data in databases and systems as needed Organize and file electronic and paper documents appropriately for easy retrieval Create and manage spreadsheets with large volumes of data, ensuring proper structure Ensure data is stored logically for easy access and retrieval when required Perform regular data backups to safeguard data and prevent loss Sort, categorize, and code data according to specific guidelines for accurate record-keeping Compile data and prepare basic reports or summaries for business or auditing purposes Assist in retrieving data for reports, audits, and other business needs as requested Adhere to company data entry procedures and comply with data protection regulations Maintain the confidentiality of sensitive information at all times Communicate with team members to clarify data requirements or resolve discrepancies Respond to requests for data retrieval from various stakeholders Collaborate with other departments to ensure data consistency and accuracy across systems Operate standard office equipment such as computers, scanners, printers, and fax machines efficiently Ensure the proper use and maintenance of data entry equipment to avoid downtime Identify opportunities to improve data entry processes and increase overall efficiency About Company: Velozity Global Solutions is not only a globally recognized IT company but also a family, representing togetherness through more than two years of a successful journey. For Velozity, success means transforming people's innovative ideas into reality with the help of our tech expertise - this is what we as a team want to be remembered for. Our vision has led Velozity to become an emerging IT company in India & the USA delivering industry-led mobility solutions. The goal is to empower clients and businesses by creating new possibilities, leveraging the technologies of today and tomorrow with the utmost quality, satisfaction, and transparency. Our enthusiasm has made us a top IT company in India & the USA for delivering industry-led mobility solutions across web and mobile application development, leveraging futuristic technologies such as the Internet of Things (IoT), AI/ML, AR/VR, voice assistants and voice skills, DevOps, and cloud computing.
Posted 5 days ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Join us as a Data and Governance Analyst at Barclays, responsible for supporting the successful delivery of Location Strategy projects to plan, budget, agreed quality and governance standards. You'll spearhead the evolution of our digital landscape, driving innovation and excellence. You will harness cutting-edge technology to revolutionise our digital offerings, ensuring unparalleled customer experiences. To be successful as a Data and Governance Analyst you should have experience with: Records and Data Governance: Knowledge of records management and data governance practices to ensure data integrity, privacy, and compliance with relevant regulations. Understanding of Data Warehousing Concepts: Familiarity with data warehousing principles and best practices to ensure efficient data retrieval and reporting. Data Quality Management: Ensuring high data quality is essential for reliable analysis and decision-making. This includes data cleansing, validation, and transformation. Data Analysis: Strong analytical skills to interpret complex datasets and provide actionable insights. Data Blending and Integration: Experience in blending and integrating data from multiple sources. Some Other Highly Valued Skills May Include Communication Skills: Excellent verbal and written communication skills to present data findings clearly and effectively to stakeholders. Project Management: Knowledge of agile practices, including Scrum, Kanban, and waterfall as examples. Analytical Skills: Strong ability to interpret complex datasets and provide actionable insights. You may be assessed on the key critical skills relevant for success in role, such as risk and controls, change and transformation, business acumen, strategic thinking and digital and technology, as well as job-specific technical skills. This role is based in Pune. Purpose of the role To optimise staffing levels, forecasts, scheduling and workforce allocation through data analysis to enhance the customer experience within the bank's operations. Accountabilities Management of staff optimisation levels, forecasting and scheduling resources by analysing data, business volume and trends to support the workforce allocation process. Collaboration with teams across the bank to align and integrate workforce management processes and governance. Development and implementation of workforce management strategies, processes and controls to mitigate risks and maintain efficient banking operations. Identification of areas for improvement, providing recommendations for change in workforce management processes, and providing feedback and coaching for colleagues on these highlighted areas. Identification of industry trends and developments to implement best practice in workforce management services. Participation in projects and initiatives to improve workforce management efficiency and effectiveness. Development and management of staffing schedules to optimise staffing levels to meet business needs. Management of the operational readiness plans, supporting the business with meeting desired customer outcomes. Assistant Vice President Expectations To advise and influence decision making, contribute to policy development and take responsibility for operational effectiveness. Collaborate closely with other functions/business divisions. Lead a team performing complex tasks, using well-developed professional knowledge and skills to deliver on work that impacts the whole business function.
Set objectives and coach employees in pursuit of those objectives, appraise performance relative to objectives and determine reward outcomes. If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L – Listen and be authentic, E – Energise and inspire, A – Align across the enterprise, D – Develop others. OR for an individual contributor, they will lead collaborative assignments and guide team members through structured assignments, identifying the need for the inclusion of other areas of specialisation to complete assignments. They will identify new directions for assignments and/or projects, identifying a combination of cross-functional methodologies or practices to meet required outcomes. Consult on complex issues, providing advice to People Leaders to support the resolution of escalated issues. Identify ways to mitigate risk and develop new policies/procedures in support of the control and governance agenda. Take ownership for managing risk and strengthening controls in relation to the work done. Perform work that is closely related to that of other areas, which requires understanding of how areas coordinate and contribute to the achievement of the objectives of the organisation sub-function. Collaborate with other areas of work, for business-aligned support areas, to keep up to speed with business activity and the business strategy. Engage in complex analysis of data from multiple sources of information, internal and external sources such as procedures and practices (in other areas, teams, companies, etc.) to solve problems creatively and effectively. Communicate complex information. 'Complex' information could include sensitive information or information that is difficult to communicate because of its content or its audience. Influence or convince stakeholders to achieve outcomes. All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave.
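As a rough illustration of the data cleansing and validation work this role describes, here is a small pandas sketch on a made-up staffing-volume extract; the column names, values, and quality rules are hypothetical and do not reflect Barclays processes.

```python
# Illustrative only: basic data-quality checks (missing values, duplicates, range rules)
# on a synthetic workforce-volume table.
import pandas as pd

raw = pd.DataFrame(
    {
        "site": ["Pune", "Pune", "Chennai", None, "Pune"],
        "date": ["2025-01-06", "2025-01-06", "2025-01-07", "2025-01-07", "2025-01-06"],
        "calls_handled": [1200, 1200, 950, 870, -5],
    }
)

df = raw.copy()
df["date"] = pd.to_datetime(df["date"], errors="coerce")

# Quality rules: required fields present, no duplicate rows, volumes non-negative.
issues = {
    "missing_site": int(df["site"].isna().sum()),
    "duplicate_rows": int(df.duplicated().sum()),
    "negative_volumes": int((df["calls_handled"] < 0).sum()),
}
print("data quality issues:", issues)

# Cleansed view that downstream forecasting/scheduling analysis would consume.
clean = (
    df.dropna(subset=["site", "date"])
    .drop_duplicates()
    .query("calls_handled >= 0")
)
print(clean)
```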
Posted 5 days ago
0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Skills: Python, Spark, Data Engineer, Cloudera, On-premise, Azure, Snowflake, Kafka. Overview Of The Company Jio Platforms Ltd. is a revolutionary Indian multinational tech company, often referred to as India's biggest startup, headquartered in Mumbai. Launched in 2019, it's the powerhouse behind Jio, India's largest mobile network with over 400 million users. But Jio Platforms is more than just telecom. It's a comprehensive digital ecosystem, developing cutting-edge solutions across media, entertainment, and enterprise services through popular brands like JioMart, JioFiber, and JioSaavn. Join us at Jio Platforms and be part of a fast-paced, dynamic environment at the forefront of India's digital transformation. Collaborate with brilliant minds to develop next-gen solutions that empower millions and revolutionize industries. Team Overview The Data Platforms Team is the launchpad for a data-driven future, empowering the Reliance Group of Companies. We're a passionate group of experts architecting an enterprise-scale data mesh to unlock the power of big data, generative AI, and ML modelling across various domains. We don't just manage data; we transform it into intelligent actions that fuel strategic decision-making. Imagine crafting a platform that automates data flow, fuels intelligent insights, and empowers the organization: that's what we do. Join our collaborative and innovative team, and be a part of shaping the future of data for India's biggest digital revolution! About the role. Title: Lead Data Engineer Location: Mumbai Responsibilities End-to-End Data Pipeline Development: Design, build, optimize, and maintain robust data pipelines across cloud, on-premises, or hybrid environments, ensuring performance, scalability, and seamless data flow. Reusable Components & Frameworks: Develop reusable data pipeline components and contribute to the team's data pipeline framework evolution. Data Architecture & Solutions: Contribute to data architecture design, applying data modelling, storage, and retrieval expertise. Data Governance & Automation: Champion data integrity, security, and efficiency through metadata management, automation, and data governance best practices. Collaborative Problem Solving: Partner with stakeholders, data teams, and engineers to define requirements, troubleshoot, optimize, and deliver data-driven insights. Mentorship & Knowledge Transfer: Guide and mentor junior data engineers, fostering knowledge sharing and professional growth. Qualification Details Education: Bachelor's degree or higher in Computer Science, Data Science, Engineering, or a related technical field. Core Programming: Excellent command of a primary data engineering language (Scala, Python, or Java) with a strong foundation in OOP and functional programming concepts. Big Data Technologies: Hands-on experience with data processing frameworks (e.g., Hadoop, Spark, Apache Hive, NiFi, Ozone, Kudu), ideally including streaming technologies (Kafka, Spark Streaming, Flink, etc.). Database Expertise: Excellent querying skills (SQL) and strong understanding of relational databases (e.g., MySQL, PostgreSQL). Experience with NoSQL databases (e.g., MongoDB, Cassandra) is a plus. End-to-End Pipelines: Demonstrated experience in implementing, optimizing, and maintaining complete data pipelines, integrating varied sources and sinks including streaming real-time data. Cloud Expertise: Knowledge of cloud technologies like Azure HDInsight, Synapse, Event Hubs and GCP Dataproc, Dataflow, BigQuery.
CI/CD Expertise: Experience with CI/CD methodologies and tools, including strong Linux and shell scripting skills for automation. Desired Skills & Attributes Problem-Solving & Troubleshooting: Proven ability to analyze and solve complex data problems, troubleshoot data pipeline issues effectively. Communication & Collaboration: Excellent communication skills, both written and verbal, with the ability to collaborate across teams (data scientists, engineers, stakeholders). Continuous Learning & Adaptability: A demonstrated passion for staying up-to-date with emerging data technologies and a willingness to adapt to new tools.
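A minimal sketch of the kind of streaming pipeline these responsibilities describe, using PySpark Structured Streaming with a Kafka source; the topic, schema, paths, and windowing choices are assumptions, and a real job would also need the Spark-Kafka connector package, checkpoint tuning, and monitoring.

```python
# Illustrative only: Kafka -> parse JSON -> windowed aggregation -> Parquet sink.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("telemetry-stream").getOrCreate()

event_schema = StructType([
    StructField("device_id", StringType()),
    StructField("metric", StringType()),
    StructField("value", DoubleType()),
    StructField("event_time", TimestampType()),
])

# Read raw events from Kafka and parse the JSON payload.
raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "device-telemetry")
    .load()
)
events = (
    raw.select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

# Windowed aggregation with a watermark to bound state for late-arriving data.
agg = (
    events.withWatermark("event_time", "10 minutes")
    .groupBy(F.window(F.col("event_time"), "5 minutes"), "device_id", "metric")
    .agg(F.avg("value").alias("avg_value"), F.count("*").alias("events"))
)

query = (
    agg.writeStream
    .outputMode("append")
    .format("parquet")
    .option("path", "/data/telemetry/aggregates")
    .option("checkpointLocation", "/data/checkpoints/telemetry")
    .start()
)
query.awaitTermination()
```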
Posted 5 days ago
0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Skills: Data Engineer, Spark, Scala, Python, On-premise, Cloudera, Snowflake, Kafka. Overview Of The Company Jio Platforms Ltd. is a revolutionary Indian multinational tech company, often referred to as India's biggest startup, headquartered in Mumbai. Launched in 2019, it's the powerhouse behind Jio, India's largest mobile network with over 400 million users. But Jio Platforms is more than just telecom. It's a comprehensive digital ecosystem, developing cutting-edge solutions across media, entertainment, and enterprise services through popular brands like JioMart, JioFiber, and JioSaavn. Join us at Jio Platforms and be part of a fast-paced, dynamic environment at the forefront of India's digital transformation. Collaborate with brilliant minds to develop next-gen solutions that empower millions and revolutionize industries. Team Overview The Data Platforms Team is the launchpad for a data-driven future, empowering the Reliance Group of Companies. We're a passionate group of experts architecting an enterprise-scale data mesh to unlock the power of big data, generative AI, and ML modelling across various domains. We don't just manage data; we transform it into intelligent actions that fuel strategic decision-making. Imagine crafting a platform that automates data flow, fuels intelligent insights, and empowers the organization: that's what we do. Join our collaborative and innovative team, and be a part of shaping the future of data for India's biggest digital revolution! About the role. Title: Senior Data Engineer Location: Mumbai Responsibilities End-to-End Data Pipeline Development: Design, build, optimize, and maintain robust data pipelines across cloud, on-premises, or hybrid environments, ensuring performance, scalability, and seamless data flow. Reusable Components & Frameworks: Develop reusable data pipeline components and contribute to the team's data pipeline framework evolution. Data Architecture & Solutions: Contribute to data architecture design, applying data modelling, storage, and retrieval expertise. Data Governance & Automation: Champion data integrity, security, and efficiency through metadata management, automation, and data governance best practices. Collaborative Problem Solving: Partner with stakeholders, data teams, and engineers to define requirements, troubleshoot, optimize, and deliver data-driven insights. Mentorship & Knowledge Transfer: Guide and mentor junior data engineers, fostering knowledge sharing and professional growth. Qualification Details Education: Bachelor's degree or higher in Computer Science, Data Science, Engineering, or a related technical field. Core Programming: Excellent command of a primary data engineering language (Scala, Python, or Java) with a strong foundation in OOP and functional programming concepts. Big Data Technologies: Hands-on experience with data processing frameworks (e.g., Hadoop, Spark, Apache Hive, NiFi, Ozone, Kudu), ideally including streaming technologies (Kafka, Spark Streaming, Flink, etc.). Database Expertise: Excellent querying skills (SQL) and strong understanding of relational databases (e.g., MySQL, PostgreSQL). Experience with NoSQL databases (e.g., MongoDB, Cassandra) is a plus. End-to-End Pipelines: Demonstrated experience in implementing, optimizing, and maintaining complete data pipelines, integrating varied sources and sinks including streaming real-time data. Cloud Expertise: Knowledge of cloud technologies like Azure HDInsight, Synapse, Event Hubs and GCP Dataproc, Dataflow, BigQuery.
CI/CD Expertise: Experience with CI/CD methodologies and tools, including strong Linux and shell scripting skills for automation. Desired Skills & Attributes Problem-Solving & Troubleshooting: Proven ability to analyze and solve complex data problems, troubleshoot data pipeline issues effectively. Communication & Collaboration: Excellent communication skills, both written and verbal, with the ability to collaborate across teams (data scientists, engineers, stakeholders). Continuous Learning & Adaptability: A demonstrated passion for staying up-to-date with emerging data technologies and a willingness to adapt to new tools.
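Complementing the streaming sketch under the previous listing, here is a small sketch of the "reusable data pipeline components" idea from this posting: a config-driven batch step in PySpark. The paths, columns, and transformation are hypothetical.

```python
# Illustrative only: one reusable source -> transform -> sink unit.
from dataclasses import dataclass
from typing import Callable
from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F


@dataclass
class PipelineStep:
    """A small, reusable batch pipeline component."""
    source_path: str
    sink_path: str
    transform: Callable[[DataFrame], DataFrame]

    def run(self, spark: SparkSession) -> None:
        df = spark.read.option("header", "true").csv(self.source_path)
        out = self.transform(df)
        out.write.mode("overwrite").parquet(self.sink_path)


def clean_orders(df: DataFrame) -> DataFrame:
    # Example transformation: drop malformed rows and normalise types.
    return (
        df.dropna(subset=["order_id", "amount"])
        .withColumn("amount", F.col("amount").cast("double"))
        .withColumn("order_ts", F.to_timestamp("order_ts"))
    )


if __name__ == "__main__":
    spark = SparkSession.builder.appName("orders-batch").getOrCreate()
    step = PipelineStep(
        source_path="/landing/orders/*.csv",
        sink_path="/curated/orders",
        transform=clean_orders,
    )
    step.run(spark)
    spark.stop()
```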
Posted 5 days ago
0 years
1 Lacs
Mumbai Metropolitan Region
On-site
Skills: Recruitment Support, Onboarding, Employee Records, Attendance Management, Compliance Documentation, Employee Engagement, Company Overview Sir H. N. Reliance Foundation Hospital and Research Centre, headquartered in Mumbai, is a leading multi-specialty tertiary care hospital offering world-class healthcare services. With a focus on cutting-edge technology and international standards, the hospital specializes in areas such as Cardiac Sciences, Oncology, and Neuro Sciences among others. It is renowned for its state-of-the-art medical mall and a rich legacy of over a century in healthcare excellence and innovation. Job Overview We are seeking a Human Resource Intern for a 3 to 6-month internship at our Mumbai location. This position is ideal for freshers with a keen interest in HR practices within the healthcare industry. The successful candidate will gain hands-on experience in various HR functions and support the HR team in process execution and improvement. This role requires a proactive approach, attention to detail, and a passion for people management. Qualifications And Skills MBA in HR. - 2025 or 2026 pass out Strong interest in human resources and a willingness to learn about HR practices in a hospital setting. Basic understanding of recruitment processes and ability to provide support during hiring campaigns. Proficiency in maintaining and updating employee records, ensuring accuracy and privacy. Ability to assist with onboarding processes to ensure a smooth transition for new hires. Experience with attendance management systems and the capacity to address related queries. Proficiency in preparing compliance documents and ensuring adherence to HR guidelines. Strong communication skills to engage with employees and assist in fostering a positive workplace environment. Detail-oriented with strong organizational skills to manage diverse HR tasks effectively. Roles And Responsibilities Assist in the recruitment process by coordinating interviews, managing applications, and communicating with candidates. Support HR staff with onboarding processes, including preparation of orientation materials and conducting introductions. Maintain and update employee records accurately in the database for easy retrieval and reporting. Assist in attendance management by tracking, verifying, and addressing any discrepancies. Ensure compliance documentation is accurate, complete, and filed in accordance with legal requirements. Engage with employees through various engagement activities and feedback mechanisms. Support the development and implementation of HR policies and procedures. Provide general administrative support to the HR team as necessary, ensuring efficient daily operations.
Posted 5 days ago
0 years
0 Lacs
Dimapur, Nagaland, India
On-site
The University of Hong Kong Ref.: 532152 Work type: Full-time Department: School of Public Health (22400) Categories: Senior Research Staff & Post-doctoral Fellow Hong Kong Applications are invited for appointment as Research Assistant Professor (RAP)/Post-doctoral Fellow (PDF) in the Division of Epidemiology and Biostatistics, School of Public Health (Ref.: 532152), to commence on 1 November 2025, on a two- to three-year fixed-term basis for RAP, or a one- to three-year temporary basis for PDF, with the possibility of renewal subject to satisfactory performance and funding availability. Applicants should possess a PhD in epidemiology, biostatistics, public health or related disciplines. They must demonstrate an outstanding academic background with publication records in high-impact peer-reviewed journals. Essential qualifications include advanced expertise in developing AI applications using offline/online LLMs (e.g., GPT, Qwen, DeepSeek, Mistral, Llama, Gemma) within Linux environments. They must also demonstrate proficiency in clinical data annotation (brat), Python scripting, and integrating optimization techniques such as fine-tuning, Chain-of-Thought, and Retrieval Augmented Generation to enhance LLM outputs. Experience in creating oncology-specific NLP models with peer-reviewed publications is preferred. Strong quantitative skills are mandatory, including analysing large-scale databases (e.g., Hospital Authority EHR) using R/STATA/SAS and conducting cost-effectiveness analyses. Applicants must have a track record in securing competitive grants as principal investigators, managing IRB processes, patient recruitment, and media engagement through press releases. Exceptional bilingual communication skills (English/Chinese) and the ability to lead multidisciplinary collaborations are required. Those with significant post-doctoral experience and outstanding publications may be appointed as RAP. The appointee will spearhead the development of interactive AI clinical decision support systems that translate LLM outputs into clinical management tools. This role involves designing and executing large-scale epidemiological studies using EHR data, overseeing clinical data annotation/processing from public and Hospital Authority sources, and leading patient recruitment/follow-up initiatives. Academic responsibilities include disseminating findings via publications and conferences, preparing IRB applications, drafting press releases, and driving grant applications from ideation to submission. The appointee is also expected to supervise research staff, manage project alignment, and undertake administrative duties as assigned. Enquiries about the duties of the post should be sent to Ms Audrey Ho at audreyh@hku.hk. A highly competitive salary commensurate with qualifications and experience will be offered, in addition to annual leave and medical benefits. The appointment on fixed terms will attract a contract-end gratuity and University contribution to a retirement benefits scheme at 15% of basic salary for RAP. The University only accepts online applications for the above post. Applicants should apply online and upload an up-to-date CV. Review of applications will start on June 19, 2025, and continue until September 4, 2025, or until the post is filled, whichever is earlier. Advertised: Jun 5, 2025 (HK Time) Applications close: Sep 4, 2025 (HK Time)
Posted 5 days ago
8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Minimum qualifications: Bachelor's degree in Computer Engineering, Computer Science, a related field, or equivalent practical experience. 8 years of experience with one or more of the following: Linux kernel, device drivers, git/gerrit, system integration. 3 years of experience in a technical leadership role; overseeing projects, with 2 years of experience in a people management, supervision/team leadership role. Experience developing with C/C++ in areas such as low-level systems development, synchronization, memory allocation, performance, and multi-threading. Preferred qualifications: Master's degree in Computer Engineering, Computer Science, or a related field. Experience with system software in any of the following areas - ARM/ARM64 architecture, compilers, firmware, Operating systems, Linux kernel, filesystems/storage, device drivers, performance tuning, networking, tools, tests, virtualization, platform libraries, etc. Experience working with operating systems, computer architecture, embedded systems and Linux/Unix kernel, etc. Experience developing and designing large software systems. Experience in coding C or C++. Knowledge of the Android platform. About The Job Google's software engineers develop the next-generation technologies that change how billions of users connect, explore, and interact with information and one another. Our products need to handle information at massive scale, and extend well beyond web search. We're looking for engineers who bring fresh ideas from all areas, including information retrieval, distributed computing, large-scale system design, networking and data storage, security, artificial intelligence, natural language processing, UI design and mobile; the list goes on and is growing every day. As a software engineer, you will work on a specific project critical to Google’s needs with opportunities to switch teams and projects as you and our fast-paced business grow and evolve. We need our engineers to be versatile, display leadership qualities and be enthusiastic to take on new problems across the full-stack as we continue to push technology forward. As a member of the Android Systems team, you will pioneer, develop and build out our footprint in consumer hardware/software. You will contribute to the core of Android. You will work on a variety of open source projects including the Linux kernel, Android operating system, and build the future of Android together with our large partner ecosystem. You will work on areas such as storage, filesystems, low-level performance, and systems software. You will be contributing to Android's updatability, security and quality while working alongside leading domain experts from various areas. Areas of development may include the Linux kernel, device drivers, operating systems, virtualization, inter-process communication, performance optimizations, over-the-air update technology, and the Android core framework. Android is Google’s open-source mobile operating system powering more than 3 billion devices worldwide. Android is about bringing computing to everyone in the world. We believe computing is a super power for good, enabling access to information, economic opportunity, productivity, connectivity between friends and family and more. We think everyone in the world should have access to the best computing has to offer. 
We provide the platform for original equipment manufacturers (OEMs) and developers to build compelling computing devices (smartphones, tablets, TVs, wearables, etc) that run the best apps/services for everyone in the world. Responsibilities Design, develop and roll out features for billions of users. Work on core system components including storage, filesystems, updatability, and virtualization. Create and ship Generic Kernel Image (GKI) for next generation devices with billions of users already. Scale development across a growing number of verticals (e.g., Wear, Auto, TV, large screen, etc.). Work with our Android partners that ship hundreds of millions of Android devices each year. Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form .
Posted 5 days ago
2.0 - 4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Software Engineering Roles – Multiple Levels Level 1: Job Title: Software Engineer | 2-4 Years Of Experience | 5 Positions Level 2: Job Title: Associate Software Engineer | 5-9 Years Of Experience | 3 Positions Level 3: Job Title: Staff Software Engineer | 12-15 Years Of Experience | 2 Positions Location: Chennai, Tamil Nadu, India Duration: FTE / Permanent Type: On-Site The Challenge: We are looking for skilled Software Engineers at multiple levels to join our team, where you'll play a key role in developing and maintaining high-performance, event-driven systems for real-time applications. You'll work closely with the team to implement microservices, optimize code performance, and contribute to the overall success of our technical projects. Tech Stack to Focus: • JAVA • Spring Boot • Microservices • Kafka • Hadoop • SQL and NoSQL Roles & Responsibilities: Component Development: Collaborate in developing and maintaining components of high-performance, real-time systems, following the guidance of senior team members. Microservices Implementation: Build microservices using Java, Python, or Go, adhering to established architectural patterns for scalability and resilience. Performance Optimization: Enhance code performance by focusing on efficient memory management, concurrency, and I/O operations to meet demanding performance standards. Database Management: Work with both SQL and NoSQL databases to create efficient data storage and retrieval solutions for high-volume environments. Real-Time Analytics: Assist in developing real-time analytics features, contributing to the creation of insightful visualizations for stakeholders. Monitoring & Alerting: Participate in developing monitoring and alerting solutions, with a focus on key performance indicators and system health metrics. Infrastructure as Code (IaC): Support the implementation of IaC practices, helping to create and maintain deployment scripts for consistent and reliable deployments. Container Orchestration: Contribute to container orchestration strategies, focusing on efficient resource utilization and auto-scaling. Caching & Data Access: Implement and optimize caching strategies and data access patterns to improve system responsiveness. Code Reviews: Engage in code reviews, offering constructive feedback and incorporating suggestions to enhance code quality. Production Support: Assist in troubleshooting and resolving production issues, including participating in on-call rotations as required. Technical Documentation: Contribute to technical documentation, ensuring that system designs and implementations are clearly documented. Proof-of-Concept Projects: Participate in proof-of-concept initiatives, researching and implementing new technologies under the guidance of senior engineers. Knowledge Sharing: Actively participate in team knowledge-sharing sessions, presenting on new technologies and best practices. Essential Skills & Requirements: Educational Background: Bachelor’s degree in Computer Science or a related field. Technical Proficiency: Strong skills in at least one major programming language (Java, Python, or Go), with a focus on writing clean, maintainable code. Microservices & Event-Driven Systems: Experience with microservices architecture and event-driven systems. Distributed Systems: Solid understanding of distributed systems concepts and associated challenges. Database Skills: Practical experience working with both SQL and NoSQL databases. 
Cloud & Containerization: Familiarity with cloud platforms (AWS, Azure, GCP) and containerization technologies like Docker. Big Data: Basic understanding of big data technologies such as Hadoop, Spark, or Kafka. Version Control & CI/CD: Experience with version control systems (preferably Git) and CI/CD pipelines. Problem-Solving: Strong problem-solving abilities and experience in debugging complex issues. Communication & Teamwork: Excellent communication skills and a proven ability to work effectively within a team. Continuous Learning: Eagerness to learn new technologies and adapt to changing methodologies. Agile Practices: Basic understanding of agile development practices
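A rough sketch of the event-driven consumer pattern behind the microservices described above. The posting's stack is Java/Spring Boot; Python with kafka-python is used here only for brevity, and the topic, consumer group, and payload fields are made up.

```python
# Illustrative only: an at-least-once Kafka consumer with manual offset commits.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "payment-events",
    bootstrap_servers=["localhost:9092"],
    group_id="fraud-check-service",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
    enable_auto_commit=False,  # commit only after successful processing
)

def handle(event: dict) -> None:
    # Placeholder business logic: flag unusually large payments.
    if event.get("amount", 0) > 10_000:
        print(f"ALERT high-value payment {event.get('payment_id')}")

for message in consumer:
    try:
        handle(message.value)
        consumer.commit()  # at-least-once delivery: commit offset after the work is done
    except Exception as exc:  # in production, route failures to a dead-letter topic instead
        print(f"failed to process offset {message.offset}: {exc}")
```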
Posted 5 days ago
5.0 years
0 Lacs
Jaipur, Rajasthan, India
On-site
Job Summary We’re seeking a hands-on GenAI & Computer Vision Engineer with 3–5 years of experience delivering production-grade AI solutions. You must be fluent in the core libraries, tools, and cloud services listed below, and able to own end-to-end model development—from research and fine-tuning through deployment, monitoring, and iteration. In this role, you’ll tackle domain-specific challenges like LLM hallucinations, vector search scalability, real-time inference constraints, and concept drift in vision models. Key Responsibilities Generative AI & LLM Engineering Fine-tune and evaluate LLMs (Hugging Face Transformers, Ollama, LLaMA) for specialized tasks Deploy high-throughput inference pipelines using vLLM or Triton Inference Server Design agent-based workflows with LangChain or LangGraph, integrating vector databases (Pinecone, Weaviate) for retrieval-augmented generation Build scalable inference APIs with FastAPI or Flask, managing batching, concurrency, and rate-limiting Computer Vision Development Develop and optimize CV models (YOLOv8, Mask R-CNN, ResNet, EfficientNet, ByteTrack) for detection, segmentation, classification, and tracking Implement real-time pipelines using NVIDIA DeepStream or OpenCV (cv2); optimize with TensorRT or ONNX Runtime for edge and cloud deployments Handle data challenges—augmentation, domain adaptation, semi-supervised learning—and mitigate model drift in production MLOps & Deployment Containerize models and services with Docker; orchestrate with Kubernetes (KServe) or AWS SageMaker Pipelines Implement CI/CD for model/version management (MLflow, DVC), automated testing, and performance monitoring (Prometheus + Grafana) Manage scalability and cost by leveraging cloud autoscaling on AWS (EC2/EKS), GCP (Vertex AI), or Azure ML (AKS) Cross-Functional Collaboration Define SLAs for latency, accuracy, and throughput alongside product and DevOps teams Evangelize best practices in prompt engineering, model governance, data privacy, and interpretability Mentor junior engineers on reproducible research, code reviews, and end-to-end AI delivery Required Qualifications You must be proficient in at least one tool from each category below: LLM Frameworks & Tooling: Hugging Face Transformers, Ollama, vLLM, or LLaMA Agent & Retrieval Tools: LangChain or LangGraph; RAG with Pinecone, Weaviate, or Milvus Inference Serving: Triton Inference Server; FastAPI or Flask Computer Vision Frameworks & Libraries: PyTorch or TensorFlow; OpenCV (cv2) or NVIDIA DeepStream Model Optimization: TensorRT; ONNX Runtime; Torch-TensorRT MLOps & Versioning: Docker and Kubernetes (KServe, SageMaker); MLflow or DVC Monitoring & Observability: Prometheus; Grafana Cloud Platforms: AWS (SageMaker, EC2/EKS) or GCP (Vertex AI, AI Platform) or Azure ML (AKS, ML Studio) Programming Languages: Python (required); C++ or Go (preferred) Additionally Bachelor’s or Master’s in Computer Science, Electrical Engineering, AI/ML, or a related field 3–5 years of professional experience shipping both generative and vision-based AI models in production Strong problem-solving mindset; ability to debug issues like LLM drift, vector index staleness, and model degradation Excellent verbal and written communication skills Typical Domain Challenges You’ll Solve LLM Hallucination & Safety: Implement grounding, filtering, and classifier layers to reduce false or unsafe outputs Vector DB Scaling: Maintain low-latency, high-throughput similarity search as embeddings grow to millions Inference Latency: Balance batch sizing 
and concurrency to meet real-time SLAs on cloud and edge hardware Concept & Data Drift: Automate drift detection and retraining triggers in vision and language pipelines Multi-Modal Coordination: Seamlessly orchestrate data flow between vision models and LLM agents in complex workflows About Company Hi there! We are Auriga IT. We power businesses across the globe through digital experiences, data and insights. From the apps we design to the platforms we engineer, we're driven by an ambition to create world-class digital solutions and make an impact. Our team has been part of building solutions for the likes of Zomato, Yes Bank, Tata Motors, Amazon, Snapdeal, Ola, Practo, Vodafone, Meesho, Volkswagen, Droom and many more. We are a group of people who just could not leave our college life behind; the inception of Auriga was based solely on a desire to keep working together with friends and enjoy an extended college life. Who hasn't dreamt of working with friends for a lifetime? Come join in. Our website: https://aurigait.com/
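A minimal sketch of the real-time inference path this role describes, pairing OpenCV preprocessing with ONNX Runtime execution and a latency check; the model file, input size, and latency budget are assumptions, and a production pipeline would add batching, TensorRT/DeepStream integration, and drift monitoring.

```python
# Illustrative only: preprocess a frame with OpenCV, run an ONNX model, time the call.
import time
import cv2
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("detector.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

def preprocess(frame: np.ndarray, size: int = 640) -> np.ndarray:
    resized = cv2.resize(frame, (size, size))
    rgb = cv2.cvtColor(resized, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    return np.transpose(rgb, (2, 0, 1))[np.newaxis, ...]  # NCHW with a batch dimension

frame = cv2.imread("site_camera.jpg")
if frame is None:
    raise FileNotFoundError("site_camera.jpg not found")

tensor = preprocess(frame)
start = time.perf_counter()
outputs = session.run(None, {input_name: tensor})
latency_ms = (time.perf_counter() - start) * 1000.0

# Output shape and meaning depend on the exported model; here we only report timing
# against a notional real-time budget.
print(f"inference latency: {latency_ms:.1f} ms (example budget: 50 ms)")
print(f"first output tensor shape: {outputs[0].shape}")
```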
Posted 5 days ago
6.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
About the Company: Clarifai is a leading, full-lifecycle deep learning AI platform for computer vision, natural language processing, and audio recognition. We help organizations transform unstructured images, video, text, and audio data into structured data at a significantly faster and more accurate rate than humans would be able to do on their own. Founded in 2013 by Matt Zeiler, Ph.D., Clarifai has been a market leader in AI since winning the top five places in image classification at the 2013 ImageNet Challenge. Clarifai continues to grow with employees remotely based throughout the United States and in Tallinn, Estonia. We have raised $100M in funding to date, with $60M coming from our most recent Series C, and are backed by industry leaders like Menlo Ventures, Union Square Ventures, Lux Capital, New Enterprise Associates, LDV Capital, Corazon Capital, Google Ventures, NVIDIA, Qualcomm and Osage. Clarifai is proud to be an equal opportunity workplace dedicated to pursuing, hiring, and retaining a diverse workforce. The Opportunity: As a Senior Engineer, you build the systems and services behind the Clarifai magic. You will focus on the development of the model workflow engine and of Retrieval Augmented Generation (RAG) systems. Impact: You build the systems and services that will power some of Clarifai's newest offerings. They will enable customers to perform automated tasks and synthesise internal information using LLMs and other models. Requirements: Minimum of 6 years of backend software development experience required. Proficiency in one or more object-oriented programming languages and relational database management systems. Ability to manage multiple projects simultaneously is highly valued at Clarifai. Thrives in a fast-paced work environment. Experience working on distributed teams is preferred, with strong communication skills and transparency being key. Enjoys mentoring junior engineers and interns. Familiarity with Agile methodologies is a plus. Great to Have: Experience with Go or Python; ML-related experience; experience with Kubernetes.
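A stripped-down sketch of the retrieval-augmented generation loop mentioned in the role: embed documents, retrieve the closest ones for a query, and assemble a grounded prompt. The hashing "embedding" is a toy stand-in for a real embedding model, the final LLM call is left as a placeholder, and none of this reflects Clarifai's actual implementation.

```python
# Illustrative only: toy embeddings + cosine retrieval + prompt assembly for RAG.
import hashlib
import numpy as np

DIM = 256

def embed(text: str) -> np.ndarray:
    """Toy deterministic embedding: hash each token into a fixed-size vector."""
    vec = np.zeros(DIM, dtype=np.float32)
    for token in text.lower().split():
        idx = int(hashlib.md5(token.encode()).hexdigest(), 16) % DIM
        vec[idx] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

documents = [
    "Workflow engine jobs are retried three times before being marked failed.",
    "Model predictions are cached for ten minutes to reduce inference cost.",
    "Internal design docs live in the engineering knowledge base.",
]
doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 2) -> list[str]:
    scores = doc_vectors @ embed(query)          # cosine similarity (vectors are unit-norm)
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

query = "How many times does the workflow engine retry a failed job?"
context = "\n".join(f"- {doc}" for doc in retrieve(query))
prompt = (
    "Answer using only the context below. If the answer is not in the context, say so.\n"
    f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
)
print(prompt)  # in a real system this prompt would be sent to an LLM endpoint
```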
Posted 5 days ago
4.0 years
0 Lacs
Ahmedabad, Gujarat, India
Remote
You're an engineer who doesn't tolerate bad code, slow deployments, or outdated development practices. You don't just write software—you build AI-driven systems that make traditional engineering look sluggish and inefficient. If that sounds like you, keep reading. Most software teams are still writing code the old way—manual debugging, trial-and-error deployments, and features that barely leverage AI beyond a sprinkle of Copilot suggestions. Trilogy is different. Every part of our development process is infused with AI: feature ideation, bug detection, defect resolution, and performance optimization. This isn't about dabbling in AI tools; this is about fully integrating AI into the software lifecycle to eliminate waste, ship faster, and build with precision. In this role, you will take existing B2B products, break them down, rebuild them as cloud-native applications, and optimize them with AI at every level. You'll be developing and deploying AI-powered features, using cutting-edge retrieval-augmented generation (RAG) for automated defect detection, and ensuring every release is smooth—zero outages, zero disruptions, zero excuses. If you're looking for a job where you spend weeks debating architecture decisions instead of shipping, this isn't it. If you're ready to push the boundaries of AI-driven software engineering and accelerate your career into high-scale cloud-native development, apply now. If you prefer sticking to what you already know, writing test cases by hand, or working in teams that are afraid of automation, this isn't for you. What You Will Be Doing Using analysis tools and RAG vector stores to identify, diagnose, and correct product defects and fix bugs. Leveraging AI development agents to design, develop, and deploy innovative features for cloud-native applications. Collaborating with a global team to deliver high-quality, enterprise-grade solutions. What You Won’t Be Doing Routine Monotony: We keep cumbersome infrastructure tasks to a minimum so you can focus on creating innovative solutions. Endless Meetings: We value your expertise in development over sitting in meeting rooms. Expect more coding, less talking. C# Software Developer key responsibilities Implement AI-driven features to streamline workflows and empower service providers with innovative tools. Basic Requirements 4+ years of professional experience in commercial software development, focusing on production code for server-side web applications Experience using GenAI code assistants (e.g., Github Copilot, Cursor, v0.dev) A willingness to use GenAI in your day-to-day development work About Trilogy Hundreds of software businesses run on the Trilogy Business Platform. For three decades, Trilogy has been known for 3 things: Relentlessly seeking top talent, Innovating new technology, and incubating new businesses. Our technological innovation is spearheaded by a passion for simple customer-facing designs. Our incubation of new businesses ranges from entirely new moon-shot ideas to rearchitecting existing projects for today's modern cloud-based stack. Trilogy is a place where you can be surrounded with great people, be proud of doing great work, and grow your career by leaps and bounds. There is so much to cover for this exciting role, and space here is limited. Hit the Apply button if you found this interesting and want to learn more. We look forward to meeting you! Working with us This is a full-time (40 hours per week), long-term position. 
The position is immediately available and requires entering into an independent contractor agreement with Crossover as a Contractor of Record. The compensation level for this role is $30 USD/hour, which equates to $60,000 USD/year assuming 40 hours per week and 50 weeks per year. The payment period is weekly. Consult www.crossover.com/help-and-faqs for more details on this topic. Crossover Job Code: LJ-3889-IN-Ahmedaba-C#SoftwareDeve.001
Posted 5 days ago
4.0 years
0 Lacs
Lucknow, Uttar Pradesh, India
Remote
Elite C# Developers Only: Transform Legacy Systems into AI-Powered Cloud Machines Traditional development is dying. While most engineers still wrestle with manual debugging and clumsy deployments, forward-thinking developers are leveraging AI to eliminate waste and ship with unprecedented speed. At Trilogy, we're not just adopting AI tools—we're weaponizing them across the entire development lifecycle. This role demands excellence in rebuilding B2B products as cloud-native applications with AI integration at every level. You'll implement cutting-edge retrieval-augmented generation (RAG), automate defect detection, and deploy with zero tolerance for disruption. This position is for engineers who ship, not theorize. What You Will Be Doing Harness advanced RAG vector stores and AI analysis tools to identify, diagnose, and eliminate product defects with surgical precision Architect and implement AI development agents that revolutionize how features are designed, developed, and deployed in cloud environments Drive collaboration within our elite global engineering team to deliver enterprise-grade solutions that set new standards in the industry What You Won’t Be Doing Wrestling with infrastructure: Our streamlined processes eliminate tedious configuration tasks so you can focus on high-impact development Wasting time in meetings: We prioritize execution over discussion—your code speaks louder than words Following outdated development practices: This role demands innovation, not adherence to legacy methodologies Senior C# Developer key responsibilities Transform business operations by implementing AI-powered features that dramatically streamline workflows and deliver unprecedented value to service providers. Basic Requirements Minimum 4 years of professional experience building production-grade server-side web applications with commercial impact Demonstrated proficiency with GenAI code assistants (Github Copilot, Cursor, v0.dev) Commitment to integrating GenAI tools into your daily development workflow to maximize efficiency and output quality Advanced C# skills with a proven track record of delivering robust, scalable solutions About Trilogy Hundreds of software businesses run on the Trilogy Business Platform. For three decades, Trilogy has been known for 3 things: Relentlessly seeking top talent, Innovating new technology, and incubating new businesses. Our technological innovation is spearheaded by a passion for simple customer-facing designs. Our incubation of new businesses ranges from entirely new moon-shot ideas to rearchitecting existing projects for today's modern cloud-based stack. Trilogy is a place where you can be surrounded with great people, be proud of doing great work, and grow your career by leaps and bounds. There is so much to cover for this exciting role, and space here is limited. Hit the Apply button if you found this interesting and want to learn more. We look forward to meeting you! Working with us This is a full-time (40 hours per week), long-term position. The position is immediately available and requires entering into an independent contractor agreement with Crossover as a Contractor of Record. The compensation level for this role is $30 USD/hour, which equates to $60,000 USD/year assuming 40 hours per week and 50 weeks per year. The payment period is weekly. Consult www.crossover.com/help-and-faqs for more details on this topic. Crossover Job Code: LJ-3889-IN-Lucknow-SeniorC#Develo
Posted 5 days ago
1.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Growexx is looking for a smart and passionate Data Scientist who will empower Marketing, Product, and Sales teams to make strategic, data-driven decisions. Key Responsibilities Mine, process, and analyze web, product, sales, and digital marketing data at an event level. Utilize traditional machine learning techniques and large language models (LLMs) to build effective AI agents for different business needs. Assist in developing and optimizing LLM-driven solutions for tasks such as text summarization and basic customer support automation. Contribute to building and deploying predictive models and machine learning algorithms across customer profile and usage datasets. Support the deployment of machine learning models into production environments. Assist in designing and implementing basic model activation strategies and participating in A/B testing plans. Conduct evaluations of LLMs, focusing on basic performance metrics like accuracy and latency. Integrate LLM agents with APIs and assist in maintaining data models and improving taxonomy. Key Skills Hands-on experience with LLM models and basic knowledge of evaluation metrics for LLMs. Knowledge of designing and deploying agentic systems, including knowledge bases, retrieval systems (RAG architecture), and orchestrating dynamic multi-agent workflows. Exposure to machine learning techniques, including supervised and unsupervised learning. Proficiency in Python, scikit-learn, SQL, and Jupyter Notebooks, and an understanding of cloud platforms for data science tasks. Basic understanding of data mining and statistical analysis techniques. Continuous learner, keeping up to date with the latest advances in transformers, generative AI models, retrieval-augmented generation (RAG), and agentic AI frameworks. Education and Experience B.Tech or B.E. (Computer Science / Information Technology). 1+ years as a Data Scientist or in similar roles. Analytical and Personal Skills Good logical reasoning and analytical skills. Good communication skills in English, both written and verbal. Demonstrates ownership and accountability of their work. Attention to detail.
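A minimal supervised-learning baseline of the kind this role involves, using scikit-learn on a synthetic dataset with a simple accuracy check; the features and labels are placeholders for the product and usage data mentioned above.

```python
# Illustrative only: a churn-style classifier on synthetic data with a holdout accuracy check.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for event-level usage features.
X, y = make_classification(n_samples=2000, n_features=12, n_informative=6, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

preds = model.predict(X_test)
print(f"holdout accuracy: {accuracy_score(y_test, preds):.3f}")
```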
Posted 5 days ago
0 years
3 - 4 Lacs
Bengaluru, Karnataka, India
On-site
We're seeking a Backend Intern with Python expertise to join our robotics software team. You'll be responsible for designing and implementing robust backend services that power our construction robotics platform, from data processing pipelines to robot telemetry systems. Core Responsibilities Develop robust backend services and APIs using Python and FastAPI to handle robot telemetry, control systems, and data processing Create efficient data pipelines for processing large volumes of robot sensor data, quality metrics, and operational analytics Architect and maintain AWS-based infrastructure for robot fleet management and data storage Design and implement database schemas for storing robot operational data, site information, and user management Build scalable solutions for real-time data storage and retrieval from robots in the field Collaborate with robotics engineers to integrate robot control systems with backend services Required Skills & Experience: Strong Python development skills with experience in FastAPI or similar frameworks Solid understanding of database design with PostgreSQL and SQLite Experience with AWS services (EC2, S3, Elastic Beanstalk) Knowledge of Docker containerization and deployment Understanding of RESTful API design principles Experience with data pipeline development and large-scale data processing Familiarity with cloud infrastructure management and scalability Why Join Us: Join a dynamic startup and work directly with the founders to shape the future of AI & Robotics Be part of a mission to create intelligent robots that eliminate the need for human labor in harsh and unsafe environments Join a team that values open feedback, encourages wild ideas, and keeps hierarchy out of brainstorming Experience the thrill of building not just a product, but a company from the ground up Requirements Python, Docker, AWS, FastAPI Benefits Health insurance, PF, flexible working hours
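A small FastAPI sketch of the telemetry ingestion service described above; the payload fields, endpoint paths, and in-memory store are hypothetical, and a real service would persist to PostgreSQL and authenticate the robots.

```python
# Illustrative only: a minimal robot-telemetry API with ingest and latest-reading endpoints.
from datetime import datetime
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Robot Telemetry API")

class TelemetryReading(BaseModel):
    robot_id: str
    timestamp: datetime
    battery_pct: float
    position_x: float
    position_y: float
    status: str = "ok"

# In-memory store as a stand-in for a real database table.
_latest: dict[str, TelemetryReading] = {}

@app.post("/telemetry", status_code=202)
def ingest(reading: TelemetryReading) -> dict:
    _latest[reading.robot_id] = reading
    return {"accepted": True}

@app.get("/robots/{robot_id}/latest")
def latest(robot_id: str) -> TelemetryReading:
    if robot_id not in _latest:
        raise HTTPException(status_code=404, detail="no telemetry for this robot")
    return _latest[robot_id]

# Run locally (assumed filename telemetry_api.py): uvicorn telemetry_api:app --reload
```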
Posted 5 days ago
8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Title: Gen AI Engineer – GenAI / ML (Python, Langchain) Location: Gurgaon/ Chennai / Pune / Bangalore / Noida (Onsite) Fulltime. Primary Location: Gurgaon Secondary Location : Chennai / Pune / Bangalore / Noida (Onsite) Job Description: Overall Experience: 5–8 Years Focus: Hands-on engineering role focused on designing, building, and deploying Generative AI and LLM-based solutions. The role requires deep technical proficiency in Python and modern LLM frameworks with the ability to contribute to roadmap development and cross-functional collaboration. Key Responsibilities: • Design and develop GenAI/LLM-based systems using tools such as Langchain and Retrieval-Augmented Generation (RAG) pipelines. • Implement prompt engineering techniques and agent-based frameworks to deliver intelligent, context-aware solutions. • Collaborate with the engineering team to shape and drive the technical roadmap for LLM initiatives. • Translate business needs into scalable, production-ready AI solutions. • Work closely with business SMEs and data teams to ensure alignment of AI models with real-world use cases. • Contribute to architecture discussions, code reviews, and performance optimization. Skills Required: • Proficient in Python, Langchain, and SQL. • Understanding of LLM internals, including prompt tuning, embeddings, vector databases, and agent workflows. • Background in machine learning or software engineering with a focus on system-level thinking. • Experience working with cloud platforms like AWS, Azure, or GCP. • Ability to work independently while collaborating effectively across teams. • Excellent communication and stakeholder management skills. Preferred Qualifications: • 1+ years of hands-on experience in LLMs and Generative AI techniques. • Experience contributing to ML/AI product pipelines or end-to-end deployments. • Familiarity with MLOps and scalable deployment patterns for AI models. • Prior exposure to client-facing projects or cross-functional AI teams.
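A small sketch of the vector-database side of a RAG pipeline like the one this role describes. The posting names Langchain and managed vector stores; FAISS with random placeholder embeddings is used here only to keep the example self-contained.

```python
# Illustrative only: build a FAISS index over document-chunk embeddings and query it.
import numpy as np
import faiss

dim = 384                      # typical sentence-embedding dimensionality (assumed)
rng = np.random.default_rng(0)

chunks = [f"document chunk {i}" for i in range(1000)]
vectors = rng.standard_normal((len(chunks), dim)).astype("float32")
faiss.normalize_L2(vectors)    # normalise so inner product equals cosine similarity

index = faiss.IndexFlatIP(dim)
index.add(vectors)

query = rng.standard_normal((1, dim)).astype("float32")
faiss.normalize_L2(query)

scores, ids = index.search(query, 5)
for score, idx in zip(scores[0], ids[0]):
    print(f"{chunks[idx]!r} similarity={score:.3f}")

# The retrieved chunks would then be injected into the LLM prompt as grounding
# context, which is the "retrieval-augmented" step of the pipeline.
```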
Posted 5 days ago
2.0 - 4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
The Ops Support Specialist 5 is a position responsible for providing operations support services, including but not limited to; record/documentation maintenance, storage & retrieval of records, account maintenance, imaging and the opening of accounts in coordination with the Operations - Core Team. Additionally, the Ops Support Specialist 5 serves as the liaison between operations staff, relationship managers, project managers, custodians and clients. The overall objective of this role is to provide day-to-day operations support in alignment with Citi operations support infrastructure and processes. Responsibilities: Resolve customer inquiries and supervise escalated issues, providing efficient and effective customer service to Citi’s clients Identify opportunities to offer value added products and services while adhering to strict laws and regulation governing Telesales Communicate daily with management on productivity, quality, availability, Management Information System (MIS) indicators, as well as providing written and oral communications to supported business areas for approval of correct financial entries and resolution of incorrect entries Facilitate training based on needs of staff within the department and assist with answering staff questions within Disputes, as needed Support expansive and diverse array of products and services Assist with ongoing Lean and process improvement projects Resolve complex problems based on best practice/precedence, escalating as needed Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency. Qualifications: 2-4 years of relevant experience Proficient in Microsoft Office Comprehensive knowledge of Dispute process Ability to work unsupervised and apply problem solve capabilities Ability to work occasional weekends to support Pega releases and COB testing Working knowledge of Pega and/or G36 functionality, Continuity of Business (CoB) testing, and creating and resolving Trust Receipts (TR’s) Demonstrated analytical skills and mathematical knowledge Consistently demonstrates clear and concise written and verbal communication skills Education: 15 or 16 years of full time graduation in any stream, preferably from commerce and arts background This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required. ------------------------------------------------------ Job Family Group: Operations - Core ------------------------------------------------------ Job Family: Operations Support ------------------------------------------------------ Time Type: ------------------------------------------------------ Citi is an equal opportunity and affirmative action employer. Qualified applicants will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran. Citigroup Inc. and its subsidiaries ("Citi”) invite all qualified interested applicants to apply for career opportunities. 
If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View the "EEO is the Law" poster. View the EEO is the Law Supplement. View the EEO Policy Statement. View the Pay Transparency Posting.
Posted 5 days ago
3.0 years
0 Lacs
Indore, Madhya Pradesh, India
Remote
🔥 AI Engineer (Generative & Agentic Systems) - WFH
THIS IS A FULLY REMOTE WORKING OPPORTUNITY
If you are interested and fulfil the criteria mentioned below, please share the following details:
1. Email ID
2. Phone number
3. Years of relevant experience
4. Updated resume
5. CCTC/ECTC
6. Notice period
Send me the details; do not apply directly.
*Key Responsibilities*
● Architect and build prompting-and-agent pipelines for document ingest, semantic extraction, and decision support.
● Prototype and productionize fine-tuned or LoRA-augmented open-source models (e.g., Mistral, Qwen) alongside API-based LLMs (GPT-4, Claude).
● Implement evaluation benchmarks (accuracy, latency, hallucination rates) and optimize cost/token usage at scale.
● Collaborate with data and backend teams to integrate real-time financial and climate data sources.
● Drive experiments on retrieval-augmented generation (RAG), chain-of-thought prompting, and proactive user profiling.
*Core Skills & Experience*
● 3+ years in LLM development: OpenAI, Anthropic, Google’s Vertex AI.
● Hands-on with fine-tuning pipelines (LoRA, Hugging Face Transformers).
● Experience building agentic systems (e.g., LangChain, custom orchestrators).
● Solid Python expertise; familiarity with Java for integration.
● Strong grasp of evaluation metrics for generative models; experience reducing hallucinations.
● Understanding of financial- and ESG/climate-domain requirements (data sensitivity, explainability).
*Nice-to-Have*
● Prior work on multi-language semantic extraction (legal, financial docs).
● Familiarity with compliance-driven AI (weather data, finance).
You will be thoroughly tested on these skills. If you do not have them, please do not apply, to save your time. If you are confident in these skills, send the details listed above.
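As an illustration of the LoRA fine-tuning stack named in this posting (Hugging Face Transformers plus PEFT), here is a minimal configuration sketch. The model checkpoint, target modules, and hyperparameters are illustrative assumptions, not values prescribed by the posting.

```python
# Minimal LoRA setup sketch using Hugging Face Transformers + PEFT.
# Model name, target modules, and hyperparameters are illustrative placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "mistralai/Mistral-7B-v0.1"  # assumed example checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model)  # used later in the training loop
model = AutoModelForCausalLM.from_pretrained(base_model)

lora_config = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling factor for the adapters
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections commonly adapted
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapter weights are trainable
# From here, training would proceed with a standard Trainer / SFT loop on domain data.
```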
Posted 5 days ago
8.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.
Associate Manager – AI / Gen AI
The Data and Analytics team is a multi-disciplinary technology team delivering client projects and solutions across Data Mining & Management, Visualization, Business Analytics, Automation, Statistical Insights and AI/GenAI. The assignments cover a wide range of countries and industry sectors.
The opportunity
We are looking for an Associate Manager – AI/GenAI, proficient in Artificial Intelligence, Machine Learning, deep learning and LLM models for Generative AI, text analytics and Python programming, who will be responsible for developing and delivering industry-sector-specific solutions (Financial Services; Consumer, Product & Retail; Healthcare & Wellness; Manufacturing; Supply Chain; Telecom; Real Estate, etc.) which will be used to implement the EY SaT mergers and acquisitions methodologies.
Your Key Responsibilities
Develop, review, and implement solutions applying AI, Machine Learning and Deep Learning, and develop APIs using Python. A relevant understanding of Big Data and Visualization would be an added advantage.
Lead the development and implementation of Generative AI applications using both open-source (Llama, BERT, Dolly, etc.) and closed-source (OpenAI GPT models, MS Azure Cognitive Services, Google’s PaLM, Cohere, etc.) Large Language Models (LLMs).
Work extensively with advanced models such as GPT-3.5, GPT-4, Llama and BERT for natural language processing and creative content generation using contextual information.
Design and optimize solutions leveraging vector databases for efficient storage and retrieval of contextual data for LLMs.
Understand the business and its sectors, with the ability to identify whitespaces and opportunities for analytics application.
Work on and manage mid-size to large projects, and ensure smooth service delivery on assigned products, engagements and/or geographies.
Work with project managers to study resource needs and gaps and devise alternative ways forward.
Provide expert reviews for all projects within the assigned subject.
Ability to communicate with cross-functional/competency teams.
Go-to-Market / Stakeholder Management.
Skills And Attributes For Success
Able to work creatively and systematically in a time-limited, problem-solving environment
Loyal and reliable, with high ethical standards
Flexible, curious and creative, open to new things and able to propose innovative ideas
Good interpersonal skills
Team player: open, a pleasure to work with and positive in a group dynamic
Intercultural intelligence and experience of working in more than one country and/or in multi-cultural teams with distributed delivery experience
Ability to manage multiple priorities simultaneously to meet tight deadlines and drive projects to completion with minimal supervision
To qualify for the role, you must have
Experience of guiding teams on AI/Data Science projects and communicating results to clients
Familiarity with implementing solutions in the Azure Cloud Framework
Excellent presentation skills
8 - 10 years of relevant work experience in developing and implementing AI and Machine Learning models; experience of deployment in Azure is preferred
Experience in applying statistical techniques such as linear and non-linear regression/classification/optimization, forecasting and text analytics (a minimal classification sketch follows this posting)
Familiarity with deep learning and machine learning algorithms and the use of popular AI/ML frameworks
Minimum 4 years of experience in working with NLG, LLM and DL techniques
Relevant understanding of deep learning and neural network techniques
Expertise in implementing applications using open-source and proprietary LLM models
Proficient in using Langchain-type orchestrators or similar Generative AI workflow management tools
Minimum of 5-8 years of programming in Python
Experience with the software development life cycle (SDLC) and principles of product development
Willingness to mentor team members
Solid analytical, technical and problem-solving skills
Excellent written and verbal communication skills
Ideally, you’ll also have
Ability to think strategically/end-to-end with a result-oriented mindset
Ability to build rapport within the firm and win the trust of clients
Willingness to travel extensively and to work on client sites / practice office locations
What We Look For
A team of people with commercial acumen, technical experience and enthusiasm to learn new things in this fast-moving environment
An opportunity to be a part of a market-prominent, multi-disciplinary team of 3000+ professionals, in the only integrated global transaction business worldwide
Opportunities to work with EY SaT practices globally with prominent businesses across a range of industries
What We Offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.
Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next.
Success, as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way.
Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs.
Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs.
EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
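To ground the statistical-techniques requirement referenced above (classification and text analytics), here is a minimal scikit-learn sketch. The documents and labels are hypothetical placeholders standing in for whatever sector-specific data an engagement would actually use.

```python
# Minimal text-classification sketch with scikit-learn (TF-IDF + logistic regression).
# The documents and labels below are made-up placeholders for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = [
    "quarterly revenue grew on strong retail demand",
    "supply chain disruption delayed component shipments",
    "hospital network expands telehealth coverage",
    "telecom operator upgrades rural network capacity",
]
labels = ["consumer", "supply_chain", "healthcare", "telecom"]

# Pipeline: turn text into TF-IDF features, then fit a linear classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
clf.fit(docs, labels)

print(clf.predict(["retail demand lifted consumer sales this quarter"]))
```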
Posted 5 days ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Data Scientist
Experience range: 3+ years
Location: CloudLex Pune Office (In-person, Monday to Friday, 9:30 AM – 6:30 PM)
Responsibilities
Design and implement AI agent workflows. Develop end-to-end intelligent pipelines and multi-agent systems (e.g., LangGraph/LangChain workflows) that coordinate multiple LLM-powered agents to solve complex tasks. Create graph-based or state-machine architectures for AI agents, chaining prompts and tools as needed.
Build and fine-tune generative models. Develop, train, and fine-tune advanced generative models (transformers, diffusion models, VAEs, GANs, etc.) on domain-specific data. Deploy and optimize foundation models (such as GPT, LLaMA, Mistral) in production, adapting them to our use cases through prompt engineering and supervised fine-tuning.
Develop data pipelines. Build robust data collection, preprocessing, and synthetic data generation pipelines to feed training and inference workflows. Implement data cleansing, annotation, and augmentation processes to ensure high-quality inputs for model training and evaluation.
Implement LLM-based agents and automation. Integrate generative AI agents (e.g., chatbots, AI copilots, content generators) into business processes to automate data processing and decision-making tasks. Use Retrieval-Augmented Generation (RAG) pipelines and external knowledge sources to enhance agent capabilities. Leverage multimodal inputs when applicable.
Optimize performance and safety. Continuously evaluate and improve model/system performance. Use GenAI-specific benchmarks and metrics (e.g., BLEU, ROUGE, TruthfulQA) to assess results, and iterate to optimize accuracy, latency, and resource efficiency. Implement safeguards and monitoring to mitigate issues like bias, hallucination, or inappropriate outputs.
Collaborate and document. Work closely with product managers, engineers, and other stakeholders to gather requirements and integrate AI solutions into production systems. Document data workflows, model architectures, and experimentation results. Maintain code and tooling (prompt libraries, model registries) to ensure reproducibility and knowledge sharing.
Required Skills & Qualifications
Education: Bachelor’s or Master’s degree in Computer Science, Data Science, Artificial Intelligence, or a related quantitative field (or equivalent practical experience). A strong foundation in algorithms, statistics, and software engineering is expected.
Programming proficiency: Expert-level skills in Python, with hands-on experience in machine learning and deep learning frameworks (PyTorch, TensorFlow). Comfortable writing production-quality code and using version control, testing, and code review workflows.
Generative model expertise: Demonstrated ability to build, fine-tune, and deploy large-scale generative models. Familiarity with transformer architectures and generative techniques (LLMs, diffusion models, GANs). Experience working with model repositories and fine-tuning frameworks (Hugging Face, etc.).
LLM and agent frameworks: Strong understanding of LLM-based systems and agent-oriented AI patterns. Experience with frameworks like LangGraph/LangChain or similar multi-agent platforms. Knowledge of agent communication standards (e.g., MCP/Agent Protocol) to enable interoperability between AI agents.
AI integration and MLOps: Experience integrating AI components with existing systems via APIs and services. Proficiency in retrieval-augmented generation (RAG) setups, vector databases, and prompt engineering. Familiarity with machine learning deployment and MLOps tools (Docker, Kubernetes, MLflow, KServe, etc.) for managing end-to-end automation and scalable workflows.
Familiarity with GenAI tools: Hands-on experience with state-of-the-art GenAI models and APIs (OpenAI GPT, Anthropic Claude, etc.) and with popular libraries (Hugging Face Transformers, LangChain, etc.). Awareness of the current GenAI tooling ecosystem and best practices.
Soft skills: Excellent problem-solving and analytical abilities. Strong communication and teamwork skills to collaborate across data, engineering, and business teams. Attention to detail and a quality-oriented mindset. (See Ideal Candidate below for more on personal attributes.)
Ideal Candidate
Innovative problem-solver: You are a creative thinker who enjoys tackling open-ended challenges. You have a solutions-oriented mindset and proactively experiment with new ideas and techniques.
Systems thinker: You understand how different components (data, models, services) fit together in a large system. You can architect end-to-end AI solutions with attention to reliability, scalability, and integration points.
Collaborative communicator: You work effectively in multidisciplinary teams. You are able to explain complex technical concepts to non-technical stakeholders and incorporate feedback. You value knowledge sharing and mentorship.
Adaptable learner: The generative AI landscape evolves rapidly. You are passionate about staying current with the latest research and tools. You embrace continuous learning and are eager to upskill and try new libraries or platforms.
Ethical and conscientious: You care about the real-world impact of AI systems. You take responsibility for the quality and fairness of models, and proactively address concerns like data privacy, bias, and security.
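As a concrete, if simplified, illustration of the evaluation work this posting describes, the sketch below computes a hand-rolled unigram-overlap score in the spirit of ROUGE-1 plus an average latency. Real evaluations would use maintained metric libraries and larger benchmark sets; generate() here is a hypothetical stand-in for the model or agent under test.

```python
# Toy evaluation harness: unigram overlap (ROUGE-1-style recall) and latency.
# generate() is a hypothetical stand-in for whichever model/agent is being evaluated.
import time

def generate(prompt: str) -> str:
    """Hypothetical model call; replace with the real LLM or agent under test."""
    raise NotImplementedError

def unigram_recall(reference: str, candidate: str) -> float:
    """Fraction of reference tokens that also appear in the candidate (ROUGE-1-style recall)."""
    ref_tokens = reference.lower().split()
    cand_tokens = set(candidate.lower().split())
    if not ref_tokens:
        return 0.0
    return sum(t in cand_tokens for t in ref_tokens) / len(ref_tokens)

def evaluate(cases: list[tuple[str, str]]) -> dict:
    """cases: (prompt, reference_answer) pairs; returns average recall and latency."""
    recalls, latencies = [], []
    for prompt, reference in cases:
        start = time.perf_counter()
        output = generate(prompt)
        latencies.append(time.perf_counter() - start)
        recalls.append(unigram_recall(reference, output))
    return {
        "avg_unigram_recall": sum(recalls) / len(recalls),
        "avg_latency_s": sum(latencies) / len(latencies),
    }
```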
Posted 5 days ago
3.0 - 5.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Position Summary
This position is for Machine Shipment activity, which includes machine closing in E1, FOC kitting and machine packing procedures, and involves generating shortage reports and maintaining inventory levels.
Work You’ll Do
Prepare the part list related to the release/run-off machines
Physical retrieval of FOC spares
Raise shortages of parts for ready-for-shipment machines
Make sure the type of truck is suitable for the machine model (as per predefined norms)
Inventory preparation related to the Shipment Cell
Parts preservation to be carried out in Shipment Cell locations
Physical movement of spare parts from the general location
Give an alarm for shortage of parts
Give an alarm for auxiliary requirement parts
Check specifications as per OSS, machine packing, and the Shipment Team
This role will be a part of the Shipment team
Basic Qualifications
B.Com with 3-5 years of shipment-related experience
Preferred Qualifications
Knowledge of computers will be preferred
Who We Are
Milacron is a global leader in the manufacture, distribution and service of highly engineered and customized systems within the $27 billion plastic technology and processing industry. We are the only global company with a full-line product portfolio that includes hot runner systems, injection molding, and extrusion equipment. We maintain strong market positions across these products, as well as leading positions in process control systems, mold bases and components, and maintenance, repair and operating (“MRO”) supplies for plastic processing equipment. Our strategy is to deliver highly customized equipment, components and service to our customers throughout the lifecycle of their plastic processing technology systems.
EEO: The policy of Milacron is to extend opportunities to qualified applicants and employees on an equal basis regardless of an individual's age, race, color, sex, religion, national origin, disability, sexual orientation, gender identity/expression or veteran status. We are committed to being an Equal Employment Opportunity (EEO) Employer and offer opportunities to all job seekers including individuals with disabilities. If you need a reasonable accommodation to assist with your job search or application for employment, email us at recruitingaccommodations@milacron.com. In your email, please include a description of the specific accommodation you are requesting as well as the job title and requisition number of the position for which you are applying.
Posted 5 days ago
5.0 - 10.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Our Company
Changing the world through digital experiences is what Adobe’s all about. We give everyone—from emerging artists to global brands—everything they need to design and deliver exceptional digital experiences! We’re passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen. We’re on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours!
The challenge
Search, Discovery, and Content AI (SDC) is a cornerstone of Adobe’s ecosystem, enabling creative professionals and everyday users to access, discover, and work with a wide array of digital assets and creative content, including images, videos, documents, vector graphics and more. With increasing demand for intuitive search, contextual discovery, and seamless content interactions across Adobe products like Express, Lightroom, and Adobe Stock, SDC is evolving into a generative AI powerhouse. This team develops innovative solutions for intent understanding, personalized recommendations, and action orchestration to transform how users interact with content. Working with extensive datasets and pioneering technologies, you will help redefine the discovery experience and drive user success.
The Opportunity
How can you participate? We’re looking for top-notch search engineering leadership in the areas of information retrieval, search indexing, Elasticsearch, Lucene, algorithms, relevance and ranking, data mining, machine learning, data analysis and metrics, query processing, multi-lingual search, and search UX. This is an opportunity to make a huge impact in a fast-paced, startup-like environment in a great company. Join us!
Responsibilities
Work on big data, data ingestion, search indexing, Hadoop, distributed systems, deep learning, recommendations, and performance by developing a Search platform at Adobe that would power Adobe product lines such as Express, Creative Cloud, Acrobat, Marketing Cloud, and Stock.
Apply machine learning to improve ranking and recommendations as part of the search workflow.
Build a platform to index billions of images, documents and other assets in real time.
Maintain and optimize the search engine, identify new ideas to evolve it, develop new features and benchmark possible solutions, in terms of search relevance and recommendations but also user experience, performance and feasibility.
Build these products using technologies such as Elasticsearch, REST web services, SQS/Kafka, machine learning, and more (a minimal indexing/search sketch follows this posting).
What You Need To Succeed
B.Tech or M.Tech in Computer Science
Minimum 5-10 years of relevant experience in industry
Experience in engineering SaaS-based software development
Hands-on experience with Java and Python
Hands-on experience in big data processing, Hadoop and Spark
Experience in web services and REST
Experience in RDBMS and NoSQL databases
Experience with AWS resources
Experience with Elasticsearch/Solr
Experience with search engine technology and inverted indexes
Hands-on experience in building indexing pipelines
Adobe is proud to be an Equal Employment Opportunity employer.
We do not discriminate based on gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. Learn more about our vision here. Adobe aims to make Adobe.com accessible to any and all users. If you have a disability or special need that requires accommodation to navigate our website or complete the application process, email accommodations@adobe.com or call (408) 536-3015.
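As referenced in the posting above, here is a minimal indexing and search sketch using the official elasticsearch Python client. The index name, document fields, and local endpoint are assumptions for illustration, and 8.x-style keyword arguments are assumed; real pipelines would use the bulk helpers rather than single-document indexing.

```python
# Minimal Elasticsearch indexing and search sketch (assumes elasticsearch-py 8.x and a local node).
# Index name, document fields, and endpoint are illustrative placeholders.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local endpoint

# Index a single asset document; production pipelines would batch with the bulk helpers.
es.index(
    index="assets",
    id="asset-001",
    document={
        "title": "Sunset over mountains",
        "media_type": "image",
        "tags": ["sunset", "mountains", "landscape"],
    },
)

es.indices.refresh(index="assets")  # make the document searchable immediately (demo only)

# Simple relevance query against the title field.
resp = es.search(index="assets", query={"match": {"title": "mountain sunset"}})
for hit in resp["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["title"])
```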
Posted 5 days ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Matillion is The Data Productivity Cloud. We are on a mission to power the data productivity of our customers and the world, by helping teams get data business ready, faster. Our technology allows customers to load, transform, sync and orchestrate their data. We are looking for passionate, high-integrity individuals to help us scale up our growing business. Together, we can make a dent in the universe bigger than ourselves. With offices in the UK, US and Spain, we are now thrilled to announce the opening of our new office in Hyderabad, India. This marks an exciting milestone in our global expansion, and we are now looking for talented professionals to join us as part of our founding team.
About the Role
At Matillion, our engineering culture is shaped around small, cross-functional development teams empowered to own specific product themes and initiatives. Each team brings together engineers of all levels, working collaboratively to build, test, and ship impactful features that help our customers solve real-world data challenges.
As a Staff Software Engineer, you will play a key role in driving the technical vision and hands-on development of our platform. You’ll bring deep engineering expertise and a passion for solving complex problems to help us build scalable, performant, and secure software — all while staying at the forefront of cutting-edge technologies, including Generative AI.
What You’ll Be Doing
Lead by example: Contribute hands-on to software development, championing high-quality code and robust architecture
Drive technical direction: Shape the design and evolution of systems, ensuring performance, security, and scalability
Collaborate cross-functionally: Work with engineers, product managers, and customer-facing teams to break down large initiatives into actionable plans
Mentor and support: Provide guidance to team members through code reviews, knowledge sharing, and pairing
Innovate with AI: Develop intelligent features powered by LLMs (Large Language Models), integrating modern GenAI capabilities into our platform
Continuously improve: Proactively explore new technologies, contribute to internal best practices, and help evolve our engineering culture
What We’re Looking For
Strong hands-on experience with Java and React, grounded in solid object-oriented principles and software engineering best practices
Proficiency in building Java Spring microservices, containerisation with Docker, and experience with relational databases like Postgres, MySQL, or SQL Server
Proven track record across the full SDLC, including CI/CD and Agile methodologies
Deep familiarity with cloud platforms, especially AWS
Ability to adapt to new technologies and solve both product and platform-level challenges in a collaborative team environment
GenAI Expertise (Essential)
Hands-on experience integrating with LLM APIs such as OpenAI, Anthropic, or Hugging Face
Understanding of prompt engineering — crafting and tuning prompts to optimize LLM responses
Experience with RAG (Retrieval Augmented Generation) to enrich LLM responses with contextual data
Familiarity with key LLM concepts: system/user prompts, tokens, embeddings, context windows, temperature, top-p, stop sequences (a minimal chat-completion sketch showing these parameters follows this posting)
GenAI Experience (Nice to Have)
Experience with fine-tuning models and prompt compression techniques
Exposure to GenAI architectures like prompt chaining, agentic workflows, or routing mechanisms
Familiarity with Spring AI or similar libraries that abstract LLM interactions
Experience designing and using evals to compare LLM performance and optimize prompts
Matillion has fostered a culture that is collaborative, fast-paced, ambitious, and transparent, and an environment where people genuinely care about their colleagues and communities. Our 6 core values guide how we work together and with our customers and partners. We operate a truly flexible and hybrid working culture that promotes work-life balance, and are proud to be able to offer the following benefits:
- Company Equity
- 27 days paid time off
- 12 days of Company Holiday
- 5 days paid volunteering leave
- Group Mediclaim (GMC)
- Enhanced parental leave policies
- MacBook Pro
- Access to various tools to aid your career development
More about Matillion
Thousands of enterprises including Cisco, DocuSign, Slack, and TUI trust Matillion technology to load, transform, sync, and orchestrate their data for a wide range of use cases from insights and operational analytics, to data science, machine learning, and AI. With over $300M raised from top Silicon Valley investors, we are on a mission to power the data productivity of our customers and the world. We are passionate about doing things in a smart, considerate way. We’re honoured to be named a great place to work for several years running by multiple industry research firms. We are dual headquartered in Manchester, UK and Denver, Colorado.
We are keen to hear from prospective Matillioners, so even if you don’t feel you match all the criteria please apply and a member of our Talent Acquisition team will be in touch. Alternatively, if you are interested in Matillion but don't see a suitable role, please email talent@matillion.com.
Matillion is an equal opportunity employer. We celebrate diversity and we are committed to creating an inclusive environment for all of our team. Matillion prohibits discrimination and harassment of any type. Matillion does not discriminate on the basis of race, colour, religion, age, sex, national origin, disability status, genetics, sexual orientation, gender identity or expression, or any other characteristic protected by law.
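The LLM concepts listed in the posting above (system/user prompts, temperature, top-p, stop sequences, token limits) can be illustrated with a short request sketch. Python and the OpenAI SDK (1.x-style client) are used here purely for illustration; the model name and parameter values are assumptions, not part of the posting, and similar parameters exist in most LLM APIs regardless of language.

```python
# Sketch of a chat completion showing system/user prompts and common sampling parameters.
# Assumes the openai Python SDK (1.x) and an OPENAI_API_KEY in the environment;
# the model name and parameter values are illustrative, not prescribed by the posting.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed example model
    messages=[
        {"role": "system", "content": "You are a concise assistant for data engineers."},
        {"role": "user", "content": "Explain what a context window is in one sentence."},
    ],
    temperature=0.2,   # lower values give more deterministic sampling
    top_p=0.9,         # nucleus sampling cutoff
    max_tokens=120,    # cap on generated tokens
    stop=["\n\n"],     # stop sequence ends generation early
)

print(response.choices[0].message.content)
```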
Posted 5 days ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Description
Amazon’s Selection Monitoring team is responsible for making the biggest catalog on the planet even bigger. Our systems process billions of products to algorithmically find products not already sold on Amazon and programmatically add them to the Amazon catalog. We apply parallel processing, machine learning and deep learning algorithms to evaluate products and brands in order to identify and prioritize products and brands to be added to Amazon’s catalog. The datasets produced by our team are used by teams across Amazon to improve product information, search and discoverability, pricing, and delivery experience. Our work involves building state-of-the-art Information Retrieval (IR) systems to mine the web and automatically create structured entities from unstructured/semi-structured data. We constantly stretch the boundaries of large-scale distributed systems, Elastic Computing, Big Data, and SOA technologies to tackle challenges at Amazon’s global scale. Come join us in our journey to make everything – and yes, we do mean *everything* – that anyone wants to buy, available on Amazon!
We are looking for SDEs with strong technical knowledge, an established background in engineering large-scale software systems, and a passion for solving challenging problems. The role demands a high-performing and flexible candidate who can take responsibility for the success of the system and drive solutions from design to coding, testing, and deployment, to achieve results in a fast-paced environment.
Key job responsibilities
Work with Sr. SDEs and Principal Engineers to drive the technical and architectural vision of SM systems responsible for generation of structured domain entities from structured/semi-structured data.
Develop systems and extensible frameworks for complete lifecycle management of domain entities and inter-entity relationships.
Build scalable platform capabilities for data processing, metadata generation and guardrails.
Solve complex problems in automated identity generation, web-to-Amazon namespace translation, and classification of products.
Design and develop solutions for efficient storage and vending/search of products and related information.
Utilize serverless and big data technologies to develop efficient algorithms that operate on large datasets.
Lead and mentor junior engineers, and drive best practices around design, coding, testability, and security.
Basic Qualifications
3+ years of non-internship professional software development experience
2+ years of non-internship design or architecture (design patterns, reliability and scaling) experience with new and existing systems
Experience programming with at least one software programming language
Bachelor's degree in Computer Science; advanced degrees preferred
Experience building complex software systems that have been successfully delivered to customers
Deep technical expertise and hands-on architectural understanding of distributed and service-oriented architectures
Has delivered large-scale enterprise software systems or large-scale online services
Solid programming skills in OO languages (Java/Scala/C++/Python, etc.) and a deep understanding of object-oriented design
Advanced knowledge of data structures and at ease in optimizing algorithms
Preferred Qualifications
3+ years of full software development life cycle experience, including coding standards, code reviews, source control management, build processes, testing, and operations
Bachelor's degree in Computer Science or equivalent
Master's degree in Computer Science or equivalent
A deep understanding of the software development life cycle and a good track record of shipping software on time
Experience in data mining, machine learning algorithms, rules engines, and workflow systems
Deep understanding of SOA with proven ability in building highly scalable and fault-tolerant systems using cloud computing technologies
Deep understanding of the MapReduce paradigm, with experience in building solutions using Big Data technologies like Spark, Hive, etc. (a minimal MapReduce-style sketch follows this posting)
Experience in developing efficient algorithms that operate on large datasets
Exposure to AWS technologies is a big plus
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Company - ADCI - Karnataka
Job ID: A3016511
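As a small illustration of the MapReduce paradigm referenced above, here is a PySpark sketch that aggregates a product dataset by brand. The input path and column names are hypothetical, and the example assumes the pyspark package is installed.

```python
# MapReduce-style aggregation sketch with PySpark: count candidate products per brand.
# The input path and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("brand-aggregation-sketch").getOrCreate()

# Each JSON record is assumed to carry at least: product_id, brand, source_domain.
products = spark.read.json("s3://example-bucket/candidate-products/")  # hypothetical path

brand_counts = (
    products
    .filter(F.col("brand").isNotNull())                             # map/filter phase
    .groupBy("brand")                                               # shuffle by key
    .agg(F.countDistinct("product_id").alias("candidate_count"))    # reduce phase
    .orderBy(F.desc("candidate_count"))
)

brand_counts.show(20, truncate=False)
spark.stop()
```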
Posted 5 days ago
3.0 years
0 Lacs
Kanpur, Uttar Pradesh, India
On-site
Role: Senior Full Stack Developer
Skills Preferred: React / Python Django
Location: Kanpur, Goa
Experience: 3+ Years
No of Positions: 1
Job Type: Full-time
Start Date: ASAP
Position Overview:
We are seeking a Senior Full Stack Developer to take a central role in shaping, refining, and elevating our web applications. Leveraging your expertise in React, Angular, and Node.js, you will contribute significantly to crafting top-tier, user-focused solutions that align seamlessly with our technical and business objectives. You’ll collaborate with cross-functional teams, lead discussions on technical matters, and champion the implementation of development best practices.
Responsibilities:
Full Stack Development: Lead the end-to-end development lifecycle of web applications, ensuring seamless integration between front-end and back-end components. Develop efficient and maintainable code for both client and server sides.
Technical Leadership: Provide guidance to the team, participate in architectural discussions, conduct code reviews, and contribute to technical decision-making.
Front-End Expertise: Utilize your proficiency in React or Angular to design and implement responsive user interfaces. Collaborate closely with UI/UX designers to create engaging and intuitive user experiences.
Back-End Development: Develop RESTful APIs using Node.js and related technologies, ensuring optimal data retrieval and manipulation while maintaining data integrity (a minimal API sketch follows this posting).
Database Management: Design and optimize databases, write complex queries, and implement data models that align with application requirements.
Performance and Security: Identify performance bottlenecks and security vulnerabilities within applications. Implement necessary optimizations and security measures for optimal performance and data protection.
Collaboration: Work closely with product managers, UI/UX designers, and other stakeholders to gather requirements, offer technical insights, and ensure successful project completion.
Problem Solving: Address intricate technical challenges with innovative solutions. Debug and troubleshoot issues as they arise, facilitating prompt resolutions.
Requirements and skills:
A Bachelor’s degree in Computer Science, Engineering, or a related field. Possession of a Master’s degree is advantageous.
A minimum of 5 years of professional experience as a Full Stack Developer, with a strong command of React and Node.js being essential, and familiarity with Angular being beneficial.
Proficiency in front-end technologies such as React.
Robust expertise in either React or Angular and associated tools/libraries, enabling the creation of engaging interactive user interfaces.
Thorough understanding of Python Django and server-side JavaScript development. Familiarity with frameworks like Express.js offers an additional advantage.
Demonstrated aptitude in designing and utilizing RESTful APIs, including a solid grasp of API design principles.
Familiarity with diverse database systems (e.g., MySQL, PostgreSQL, MongoDB) and hands-on experience in database design and optimization.
Proficiency in version control systems (Git) and familiarity with agile development methodologies.
Outstanding problem-solving abilities, equipped to diagnose and resolve intricate issues within distributed systems.
Effective written and verbal communication skills, fostering productive collaboration across teams.
Previous involvement in mentoring or leading junior developers will be a valuable asset.
A portfolio showcasing prior projects or work samples is highly desirable.
We offer a competitive salary, performance-based incentives, and a supportive work environment that encourages professional growth and development. If you are a motivated and results-oriented individual with a passion for IT, we want to hear from you! If this all sounds exciting, please apply using "Apply Now" or send your application to jobs@barytech.com as soon as possible. Thanks for your interest. We look forward to getting to know you.
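To illustrate the RESTful-API responsibility in the posting above, here is a minimal sketch of a read-only JSON endpoint. Django (Python) is used because the posting lists Python Django among its preferred skills; the single-file layout, URL, and response payload are assumptions for illustration, not the employer's actual stack or conventions.

```python
# Minimal single-file Django JSON endpoint sketch (run with: python thisfile.py runserver).
# The endpoint path and response payload are illustrative placeholders.
import sys
from django.conf import settings
from django.core.management import execute_from_command_line
from django.http import JsonResponse
from django.urls import path

settings.configure(
    DEBUG=True,
    SECRET_KEY="dev-only-not-for-production",
    ROOT_URLCONF=__name__,
    ALLOWED_HOSTS=["*"],
)

def list_projects(request):
    """Read-only endpoint returning a hypothetical list of portfolio projects."""
    projects = [
        {"id": 1, "name": "Inventory dashboard", "stack": ["React", "Django"]},
        {"id": 2, "name": "Chat assistant", "stack": ["React", "Django", "PostgreSQL"]},
    ]
    return JsonResponse({"results": projects})

urlpatterns = [
    path("api/projects/", list_projects),  # GET /api/projects/ returns the JSON above
]

if __name__ == "__main__":
    execute_from_command_line(sys.argv)
```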
Posted 5 days ago