
44412 GCP Jobs - Page 48

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 years

0 Lacs

hyderābād

On-site

Date: Aug 21, 2025
Location: Hyderabad, IN
Company: Syniti

ABOUT US
Syniti is the enterprise data partner, empowering anyone who relies on data to make business-critical decisions by delivering data they can trust through a unique combination of intelligent software and experts who deeply understand the role of data in enterprise growth. Trusted by the Fortune 2000, Syniti helps leading businesses reconfigure the role of enterprise data from afterthought to foundational first step, enabling them to unlock valuable insights that ignite growth, reduce risk, and expand their competitive advantage. Syniti's Data First philosophy and enterprise data management platform support data migration, data quality, data replication, data matching, master data management, analytics, data governance, and data strategy in a single, unified solution. As an innovative, global leader in Enterprise Data Management, the combination of our award-winning software platform and premier consultants creates a unique advantage for leading enterprises. Syniti is also a preferred data solution used by the world's top system integrators. Headquartered in Boston, Massachusetts, with offices in 25 countries around the world, Syniti operates in all global regions and industry verticals, and maintains a 100% client success rate across thousands of complex data projects and initiatives.

Overview:
Syniti is at the forefront of SaaS and cloud-based data solutions, known for pioneering new standards in the rapidly evolving technology landscape. A trusted partner in enterprise data management, we drive excellence by merging our expertise with the needs of Fortune 2000 companies. We pride ourselves on consistency, innovation, and delivering exceptional results. Joining Syniti means diving into a world where technical acumen meets unparalleled innovation. More than just possessing skills in a niche domain, you'll thrive in an agile environment that values the spirit of collaboration and open exchange of ideas. Here, your contributions don't just enhance our product suite; they empower our Fortune 2000 collaborators to steer their digital transformation voyages successfully. And as we venture deeper into the realm of digital solutions, your role will span from mentoring peers and championing groundbreaking ideas to driving pivotal strategic decisions. If you're driven by innovation, collaboration, and the desire to create impactful technology solutions, Syniti could be the perfect next chapter in your career.

THE ROLE:
As a QA Automation Engineer, you will be the guardian of product quality: designing, building, and evolving the automated test frameworks that keep our platform bullet-proof. Drawing on your software-engineering mindset, you will craft reliable end-to-end, API, data validation, and performance tests. You will integrate them seamlessly into our CI/CD pipelines, and partner closely with developers, product owners, and DevOps to deliver features that "just work" at scale. Your goal will be to detect issues early, provide fast feedback, and champion a culture where quality is everyone's responsibility.

WHAT YOU WILL DO:
- Architect and maintain automation frameworks for web UI, micro-services, and data pipelines
- Embed quality gates in CI/CD, primarily via GitHub Actions
- Design comprehensive test strategies based on risk and user workflows
- Author clean, reusable test code
- Collaborate across disciplines to refine acceptance criteria, reproduce customer issues, and uphold our Definition of Done

WHAT IT TAKES:
- 3+ years of hands-on automation experience in a modern software-engineering environment (cloud/SaaS preferred)
- Proficiency in at least one object-oriented or scripting language and familiarity with BDD/TDD practices
- Strong knowledge of test frameworks and tools
- Experience testing RESTful and event-driven APIs, plus basic SQL skills for data validation
- Solid grasp of CI/CD pipelines, containerization (Docker), and infrastructure-as-code concepts
- Understanding of security, performance, and accessibility testing fundamentals
- Ability to write clear defect reports, test plans, and risk analyses, and to communicate findings effectively
- Familiarity with cloud platforms (AWS/Azure/GCP) and observability stacks

WHAT WE OFFER
- Trust that you are good at what you're doing. At Syniti you will find a supportive environment and access to learning tools, but micromanagement is not our thing.
- Growth. We are growing rapidly and steadily, solving the biggest challenges enterprise companies face today. There has never been a better time to join and grow with us. Most importantly, you will have the chance to shape our journey and share in our success story.
- Support. We all rely on each other and enable each other to be successful. You won't stand alone.
- Curiosity and genuine interest in you. We all have our different stories, all equally fascinating, each depicting a different journey, and we want to hear them all.
- Recognition. We are the sum of individual achievements and we always take the time to celebrate them.
- An open organisation. Hierarchies are not our thing and access is something we make sure of across the board. We are a family where everyone is just as important; everyone's work is seen and ideas valued.

Our Commitment to Inclusion
At Syniti, we're committed to creating a respectful, inclusive, and fair workplace where everyone belongs and thrives. We believe that diverse perspectives make us stronger, and we value the unique backgrounds, experiences, and voices each person brings to our team. We welcome applicants based on their skills and potential, and we're dedicated to ensuring equal opportunities for all, regardless of personal background. If you need accommodations during the hiring process, please let us know; we're here to support you.
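The role above centers on automated API and end-to-end tests wired into CI/CD quality gates. As a purely illustrative sketch (not Syniti's actual framework), here is a minimal pytest-style API check in Python; the base URL and endpoints are hypothetical placeholders:

```python
# Minimal API-test sketch; BASE_URL and endpoints are hypothetical.
import requests

BASE_URL = "https://api.example.com"  # placeholder service under test

def test_health_endpoint_returns_ok():
    # A cheap smoke check that a CI quality gate can run on every push.
    resp = requests.get(f"{BASE_URL}/health", timeout=5)
    assert resp.status_code == 200
    assert resp.json().get("status") == "ok"

def test_create_item_rejects_empty_payload():
    # Negative case: the API should refuse an incomplete request body.
    resp = requests.post(f"{BASE_URL}/items", json={}, timeout=5)
    assert resp.status_code in (400, 422)
```

Run under pytest from a CI workflow step, a suite like this becomes the kind of "quality gate" the posting mentions.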

Posted 3 days ago

Apply

0 years

4 - 8 Lacs

hyderābād

On-site

About the job
The Senior Clinical Data Coordinator (CDC) is responsible for routine data management activities during the course of a study, complying with GCP and applicable regulatory guidance to ensure the generation of accurate, complete and consistent clinical databases. Our Hubs are a crucial part of how we innovate, improving performance across every Sanofi department and providing a springboard for the amazing work we do. Build a career and you can be part of transforming our business while helping to change millions of lives. Ready? As Senior Clinical Data Coordinator within our Clinical Data Management team, you'll support the Study Data Manager in conducting study data management activities, ensuring they are completed per agreed timelines. You will be responsible for the quality of your own deliverables. We are an innovative global healthcare company with one purpose: to chase the miracles of science to improve people's lives. We're also a company where you can flourish and grow your career, with countless opportunities to explore, make connections with people, and stretch the limits of what you thought was possible. Ready to get started?

Major Responsibilities:
- Ensure data quality by conducting data management activities, including data validation, data review, etc., following study timelines.
- Maintain clear reporting on DM activities in alignment with study team and management needs.
- Monitor the progress of data cleaning activities and generate status reports for the Study Data Manager and study team.
- Participate in the writing of study plans, including the Data Management Plan, Centralized Monitoring Plan, etc., per timelines defined with the study team.
- Participate in the writing of UAT Plans and perform testing for the database, listings, patient profiles and the safety notification tool, providing feedback to the programming team and Study Data Manager to collaboratively solve issues found both during initial database set-up and database revision.
- Conduct centralized monitoring activities according to the Centralized Monitoring Plan.
- Ensure clear, concise, consistent communication on data management activities at study level (including risk identification, monitoring, alerts and escalation).
- Act as a mentor for new CDCs.
- Identify opportunities to streamline processes and increase data quality.
- Provide input to new approaches and initiatives within data management activities, with a high level of team spirit and motivation.
- Support and act as back-up for the Study Data Manager, when requested.

About you
Experience: Experience in Clinical Data Management.
Soft skills: Excellent accuracy and attentiveness to detail; excellent written and oral communication; good team player with the ability to foster collaboration within CDM and with the clinical study team.
Technical skills: Strong experience with CDM and knowledge of regulatory guidelines in relation to data quality and clinical trial conduct; knowledge of database technologies and the ability to acquire and apply new technical skills; proficiency in the Microsoft Office Suite (intermediate level).
Education: Bachelor's degree or above, preferably in a life science or drug development related field.
Languages: Good English skills (both verbal and written).

Why choose us?
- Bring the miracles of science to life alongside a supportive, future-focused team.
- Discover endless opportunities to grow your talent and drive your career, whether it's through a promotion or lateral move, at home or internationally.
- Enjoy a thoughtful, well-crafted rewards package that recognizes your contribution and amplifies your impact.
- Join an international, innovative biopharma company.
- Participate in the evolution of Clinical Data Management and the deployment of innovations.

"Sanofi is at the forefront of Clinical Data Management modernization. Our ambitions are significant but pragmatic. The speed of change is unprecedented but achievable. Join us if you want to help us transform our vision into a reality!"

Posted 3 days ago

Apply

0 years

0 Lacs

hyderābād

On-site

Job requisition ID: 81449
Date: Aug 22, 2025
Location: Hyderabad
Designation: Senior Consultant
Entity: Deloitte India LLP

Job Summary:
We are looking for a talented and motivated OpenShift L2 Support Engineer to provide advanced-level support for OpenShift-based containerized platforms. The role involves troubleshooting, maintenance, and ensuring the smooth operation of OpenShift clusters in collaboration with L1, L3, and DevOps teams.

Key Responsibilities:
Support and Maintenance:
- Manage and troubleshoot OpenShift clusters in production and non-production environments.
- Monitor and ensure cluster availability, performance, and scalability.
- Analyze and resolve L2-level issues escalated from the L1 support team.
Incident and Problem Management:
- Work on tickets related to OpenShift platform incidents and service requests.
- Perform root cause analysis (RCA) for recurring issues and implement preventive measures.
- Collaborate with L3 and engineering teams for complex escalations.
Configuration Management:
- Implement and manage OpenShift configuration changes, including node scaling, network policies, and application deployments.
- Patch and upgrade OpenShift components following best practices.
Monitoring and Alerting:
- Set up and maintain monitoring and logging tools like Prometheus, Grafana, and the EFK/ELK stack.
- Respond to alerts related to cluster health, application logs, or performance metrics.
Documentation:
- Maintain detailed operational and troubleshooting documentation.
- Create knowledge articles and SOPs for the L1 team.
Collaboration:
- Work closely with DevOps, development, and infrastructure teams to resolve issues and implement changes.
- Provide inputs for platform optimization and reliability improvements.

Required Skills:
- Strong hands-on experience with Red Hat OpenShift v4.x.
- In-depth knowledge of Kubernetes architecture, concepts, and troubleshooting.
- Proficiency in container technologies such as Docker.
- Experience with CI/CD tools like Jenkins, ArgoCD, or GitOps workflows.
- Hands-on knowledge of Linux/Unix systems and shell scripting.
- Familiarity with networking concepts: DNS, load balancers, firewalls, and ingress/egress policies.
- Experience with monitoring/logging tools (Prometheus, Grafana, ELK, etc.).
- Basic knowledge of cloud platforms (AWS, Azure, GCP) hosting OpenShift.

Preferred Skills:
- Red Hat OpenShift certifications (e.g., EX280, EX288, or CKA).
- Knowledge of storage integration and CSI.
- Familiarity with Istio, Service Mesh, or related technologies.
- Scripting or automation skills (Python, Ansible, etc.).
- Experience with infrastructure-as-code tools like Terraform or Ansible.
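The posting above involves responding to Prometheus alerts about cluster health. As an illustrative sketch only (the endpoint and namespace are hypothetical placeholders, not Deloitte infrastructure), a small Python helper can query the Prometheus HTTP API for pods that are not ready:

```python
# Illustrative triage helper; PROM_URL and the namespace are placeholders.
import requests

PROM_URL = "http://prometheus.example.internal:9090"  # hypothetical endpoint

def unready_pods(namespace: str) -> list[str]:
    # kube_pod_status_ready comes from kube-state-metrics; a value of 0 with
    # condition="true" means the pod is currently not Ready.
    query = f'kube_pod_status_ready{{namespace="{namespace}", condition="true"}} == 0'
    resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": query}, timeout=10)
    resp.raise_for_status()
    return [s["metric"].get("pod", "<unknown>") for s in resp.json()["data"]["result"]]

if __name__ == "__main__":
    for pod in unready_pods("production"):
        print(f"Pod not ready: {pod}")
```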

Posted 3 days ago

Apply

8.0 - 18.0 years

0 Lacs

hyderābād

On-site

The people here at Apple don't just build products - they build the kind of wonder that's revolutionized entire industries. It's the diversity of those people and their ideas that inspires the innovation that runs through everything we do, from amazing technology to industry-leading environmental efforts. Join Apple, and help us leave the world better than we found it. Imagine what you could do here. Are you passionate about handling large and complex data problems, want to make an impact, and have the desire to work on groundbreaking big data technologies? Then we are looking for you. At Apple, great ideas have a way of becoming great products, services, and customer experiences very quickly. Bring passion and dedication to your job and there's no telling what you could accomplish. Would you like to work in a fast-paced environment where your technical abilities will be challenged on a day-to-day basis? If so, Apple's Global Business Intelligence team is looking for a passionate, meticulous, technically savvy, energetic engineer who likes to think creatively. Apple's Enterprise Data Warehouse team deals with petabytes of data catering to a wide variety of real-time, near-real-time and batch analytical solutions. These solutions are an integral part of business functions like Retail, Sales, Operations, Finance, AppleCare, Marketing and Internet Services, enabling business drivers to make critical decisions. We use a diverse technology stack such as Snowflake, Spark, HANA, SingleStore, Kafka, Iceberg, Cassandra and beyond. Designing, developing and scaling these big data solutions is a core part of our daily job.

Description
- As a Cloud Data Engineer you will design, develop and implement modern cloud-based data warehouses/data lakes and influence the overall data strategy for the organization.
- Translate complex business requirements into scalable technical solutions meeting data warehousing/analytics design standards.
- Bring a strong understanding of analytics needs and the proactiveness to build solutions that improve efficiency, and help implement leading data practices and standards.
- Collaborate with multiple multi-functional teams and work on solutions which have a larger impact on Apple business.
- Communicate effectively, both written and verbal, with technical and non-technical multi-functional teams.
- Work with many other internal/external teams to deliver elite products in an exciting, rapidly changing environment.
- Manage partner communication and project risks.
- Thrive in a multifaceted environment, maintaining composure and a positive attitude.

Minimum Qualifications
- 8-18 years of hands-on experience in developing and building data pipelines on cloud and hybrid infrastructure for analytical needs.
- Experience working with cloud-based data warehouse solutions (Snowflake, SingleStore, etc.), along with expertise in SQL and advanced SQL.
- Experience in designing and building dimensional data models to improve accessibility, efficiency and quality of data.
- Bachelor's degree or equivalent in data engineering, computer science or a similar field.

Preferred Qualifications
- High expertise in modern cloud warehouses and data lakes, with implementation experience on any of the cloud platforms like AWS/GCP/Azure, preferably AWS.
- Experience working with data at scale (petabytes) with a big data tech stack and advanced programming languages, e.g., Python, Scala.
- Database development experience with relational or MPP/distributed systems such as Snowflake, SingleStore.
- Excellent problem solving and critical thinking, with the ability to evaluate and apply new technologies in a short time.
- Strong written and oral communication skills.
- Experience working with global collaborators, with the ability to influence decision making.
- Dedicated, highly motivated, and quick to learn.
- Exposure to Data Science projects will be a plus.

Posted 3 days ago

Apply

6.0 years

0 Lacs

hyderabad, telangana, india

Remote

About Us
Optiply is at the forefront of three rapidly expanding sectors: Software as a Service, Artificial Intelligence, and E-commerce. With our intelligent purchasing software, we empower over 300 web shops and wholesalers to make smarter buying decisions, using predictive analytics to optimize inventory management.

Job Description
As our Forecasting Analyst (R Specialist) at Optiply, you will be the go-to expert on our data team, responsible for the statistical and machine learning models that power our platform. You will be at the heart of our mission to help e-commerce businesses thrive by owning the development, maintenance, and improvement of our forecasting algorithms. You'll work closely with data scientists, backend developers, and product teams to ensure our models are robust, accurate, and seamlessly integrated.

This Is What You'll Be Doing
- Own the end-to-end lifecycle of our forecasting models, from design and development to validation and maintenance, using your deep expertise in R.
- Proactively optimize and refactor existing R code for enhanced performance, scalability, and accuracy.
- Collaborate with backend developers to integrate your R models into our production systems, helping to package them into APIs or microservices.
- Work with our Customer Success and Product teams to translate business requirements and customer needs into robust technical forecasting solutions.
- Support data processing and ETL pipelines to ensure high-quality data inputs for your models.
- Uphold high standards of code quality, rigorous testing, and clear documentation for all your work.
- Stay current with the latest research and techniques in time series forecasting and statistical modeling.

This Is Who We're Looking For
- You have 3-6 years of professional experience in a role focused on statistical modeling, quantitative analysis, or time series forecasting.
- Expert-level proficiency in R for statistical analysis, data manipulation (e.g., dplyr, data.table), and forecasting (e.g., forecast, fable, tidymodels).
- Demonstrated, hands-on experience building and deploying forecasting models, and a strong theoretical understanding of time series analysis or inventory optimization principles.
- Good working knowledge of Python, primarily for scripting, data handling, or collaborating with engineering teams.
- Comfortable working with data from various sources (SQL, APIs, flat files).
- Familiarity with DevOps tools and best practices (Docker, Git, CI/CD pipelines) is a strong plus.
- Experience working in a production environment and collaborating across teams.
- You are self-driven, proactive, and comfortable working in a fast-paced, international environment.

Nice to Have
- Exposure to cloud platforms (AWS, GCP, or Azure).
- Prior experience in a SaaS, e-commerce, or supply chain tech company.

This Is What We Offer
- Competitive Compensation Package: reflects your skills and contributions.
- Holistic Work-Life Harmony: we value your personal time and promote a healthy work-life balance.
- Comprehensive Health Coverage: robust insurance plans for your peace of mind.
- Investment in Professional Growth: we invest in your development with paid training programs.
- Adaptable Work Hours: we offer flexibility in your work schedule.
- Hybrid Work Model: enjoy a blend of remote and in-office work.
- Strategic Career Development: we provide personalized growth plans and advancement opportunities.
- Tailored Workspace Setup: get a high-quality PC, monitor, keyboard, and other essentials.
- Social Fridays: wind down the week with casual drinks and foster team camaraderie.

This job description made your day? Then send us your CV in English and get prepared to meet our team!

Posted 3 days ago

Apply

0 years

0 Lacs

chennai, tamil nadu, india

On-site

Job Description
Join our team focused on Google Cloud Data Messaging Services, leveraging technologies like Pub/Sub and Kafka to build scalable, decoupled, and resilient cloud-native applications. This position involves close collaboration with development teams, as well as product vendors, to implement and support the suite of Data Messaging Services offered within GCP and Confluent Kafka.

GCP Data Messaging Services provide powerful capabilities for handling streaming data and asynchronous communication. Key benefits include:
- Enabling real-time data processing and event-driven architectures
- Decoupling applications for improved resilience and scalability
- Leveraging managed services like Cloud Pub/Sub and integrating with Kafka environments (Apache Kafka, Confluent Cloud)
- Providing highly scalable and available infrastructure for data streams
- Enhancing automation for messaging setup and management
- Supporting Infrastructure as Code practices for messaging components

The Data Messaging Services Specialist plays a crucial role as the corporation migrates and onboards applications that rely on robust data streaming and asynchronous communication onto GCP Pub/Sub and Confluent Kafka. This position requires staying abreast of the continual evolution of cloud data technologies and understanding how GCP messaging services like Pub/Sub, alongside Kafka, integrate with other native services like Cloud Run, Dataflow, etc., within the new Ford Standard app hosting environment to meet customer needs. This is an exciting opportunity to work on highly visible data streaming technologies that are becoming industry standards for real-time data processing.

Responsibilities
- Develop a solid understanding of Google Cloud Pub/Sub and Kafka (Apache Kafka and/or Confluent Cloud).
- Gain experience in using Git/GitHub and CI/CD pipelines for deploying messaging-related clusters and infrastructure.
- Collaborate with Business IT and business owners to prioritize improvement efforts related to data messaging patterns and infrastructure.
- Work with team members to establish best practices for designing, implementing, and operating scalable and reliable data messaging solutions.
- Identify opportunities for adopting new data streaming technologies and patterns to solve existing needs and anticipate future challenges.
- Create and maintain Terraform modules and documentation for provisioning and managing Pub/Sub topics/subscriptions, Kafka clusters, and related networking configurations, often with a paired partner.
- Develop automated processes to simplify the experience for application teams adopting Pub/Sub and Kafka client libraries and deployment patterns.
- Improve continuous integration tooling by automating manual processes within the delivery pipeline for messaging applications and enhancing quality gates based on past learnings.

Qualifications
- Highly motivated individual with strong technical skills and an understanding of emerging data streaming technologies (including Google Pub/Sub, Kafka, Tekton, and Terraform).
- Experience with Apache Kafka or Confluent Cloud Kafka, including concepts like brokers, topics, partitions, producers, consumers, and consumer groups.
- Working experience with CI/CD pipelines, including building continuous integration and deployment pipelines using Tekton or similar technologies for applications interacting with Pub/Sub or Kafka.
- Understanding of GitOps and other DevOps processes and principles as applied to managing messaging infrastructure and application deployments.
- Understanding of Google Identity and Access Management (IAM) concepts and the various authentication/authorization options for securing access to Pub/Sub and Kafka.
- Knowledge of a programming language (e.g., Java, Python, Go) commonly used for developing messaging producers/consumers.
- Experience with public cloud platforms (preferably GCP), with a focus on data messaging services.
- Understanding of agile methodologies and concepts, or experience working in an agile environment.
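For readers unfamiliar with the Pub/Sub client model this posting describes, here is a minimal publisher/subscriber sketch using the google-cloud-pubsub Python library; the project, topic, and subscription IDs are placeholders, not Ford resources:

```python
# Minimal Pub/Sub sketch; project/topic/subscription IDs are placeholders.
from concurrent.futures import TimeoutError
from google.cloud import pubsub_v1

PROJECT = "my-gcp-project"

# Publish: message payloads are bytes, with optional string attributes.
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(PROJECT, "orders")
future = publisher.publish(topic_path, b'{"order_id": 42}', source="demo")
print("Published message", future.result())  # blocks until the server acks

# Subscribe: the client pulls asynchronously and invokes the callback.
subscriber = pubsub_v1.SubscriberClient()
sub_path = subscriber.subscription_path(PROJECT, "orders-sub")

def callback(message: pubsub_v1.subscriber.message.Message) -> None:
    print("Received:", message.data)
    message.ack()  # at-least-once delivery: ack only after processing

streaming_pull = subscriber.subscribe(sub_path, callback=callback)
try:
    streaming_pull.result(timeout=30)  # serve callbacks for 30 seconds
except TimeoutError:
    streaming_pull.cancel()
```

The decoupling benefit listed above falls out of this model: the publisher never needs to know which consumers exist, and subscriptions can scale independently.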

Posted 3 days ago

Apply

0 years

0 Lacs

chennai, tamil nadu, india

On-site

Job Description
We are seeking a highly analytical and technically proficient Digital Analytics Specialist to join our Analytics team. This role is crucial for understanding media and website business needs and translating them into robust analytical solutions. The ideal candidate will possess a strong blend of analytical acumen, technical expertise in data manipulation and visualization, and a deep understanding of digital marketing principles. You will be instrumental in building data products that range from insightful dashboards to complex analytical models and efficient data pipelines, all aimed at improving process efficiency and driving business growth.

Responsibilities
- Business Understanding & Strategy: Collaborate closely with digital marketing business stakeholders to understand their analytical needs, key performance indicators (KPIs), and strategic objectives. Translate complex business questions into clear, actionable analytical requirements.
- Data Analysis & Insights: Conduct in-depth analysis of media and digital performance data, identifying trends, patterns, and opportunities for optimization. Provide actionable insights and recommendations to improve media effectiveness and business outcomes.
- Dashboard & Report Development: Design, develop, and maintain interactive dashboards and reports using business intelligence (BI) tools (e.g., Power BI, Qlik Sense) to visualize key metrics and performance trends for various stakeholders.
- Analytical Modeling: Develop and implement analytical models (e.g., forecasting, attribution, segmentation) to address specific business challenges and provide deeper insights into media and website performance.
- Data Pipeline Development: Build, maintain, and optimize data pipelines to ensure timely, accurate, and reliable data flow from various data sources into our analytical platforms, utilizing programming languages like Python and SQL.
- Process Improvement: Identify opportunities to enhance data collection, processing, and analysis workflows. Proactively suggest and implement improvements to increase efficiency and accuracy within the analytics processes.
- Collaboration & Communication: Work cross-functionally with data engineers, marketing teams, and other business units to ensure data consistency and deliver integrated analytical solutions. Effectively communicate complex analytical findings to non-technical audiences.

Qualifications
- Analytical Prowess: Demonstrated strong analytical, problem-solving, and critical thinking skills, with the ability to interpret complex data and draw meaningful conclusions.
- Digital Marketing Expertise: Solid understanding of digital marketing concepts, channels (e.g., paid search, social, display, SEO), and metrics (e.g., impressions, clicks, conversions, CPA, customer journey, optimisation).
- Business Intelligence Tools: Proven proficiency in designing and developing dashboards and reports using leading BI tools such as Microsoft Power BI and Qlik Sense.
- Programming Languages: Strong command of Python for data manipulation, analysis, and scripting, along with expert-level proficiency in SQL for querying and managing large datasets.
- Cloud Environment: Familiarity and hands-on experience working within the Google Cloud Platform (GCP) environment (e.g., BigQuery, Cloud Storage, Cloud Functions).
- Data Storytelling: Ability to present complex data and insights in a clear, concise, and compelling manner to both technical and non-technical audiences.
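Since the role above combines Python, SQL, and BigQuery, here is an illustrative sketch (the project, table, and column names are invented for this example) of pulling channel performance into a pandas DataFrame:

```python
# Illustrative BigQuery pull; project, table and columns are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")  # placeholder project ID

sql = """
    SELECT channel,
           DATE(event_ts) AS day,
           COUNT(*) AS clicks,
           COUNTIF(converted) AS conversions
    FROM `my-gcp-project.marketing.web_events`
    WHERE event_ts >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 28 DAY)
    GROUP BY channel, day
    ORDER BY day, channel
"""

df = client.query(sql).to_dataframe()
df["cvr"] = df["conversions"] / df["clicks"]  # conversion rate per channel/day
print(df.head())
```

A DataFrame like this is the typical feed for the Power BI or Qlik Sense dashboards the posting mentions.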

Posted 3 days ago

Apply

6.0 years

0 Lacs

thiruvananthapuram

On-site

Job Description - Senior Full-Stack Developer

Position: Senior Full-Stack Developer
Experience: 6+ years
Location: Trivandrum
Employment Type: Full-time

About the Role
We are seeking a highly skilled Senior Full-Stack Developer with proven expertise in building scalable, production-grade web applications. The ideal candidate will be adept at architecting and implementing robust backend systems, developing high-performance frontend applications, and handling real-time data workflows. This role requires hands-on technical expertise, strong problem-solving skills, and the ability to collaborate effectively across teams. You will be a key contributor in designing and developing data-intensive applications, managing sessions, enabling real-time analytics, implementing bulk operations, and delivering seamless user experiences. Additionally, your experience in browser extension development will be highly valued.

Key Responsibilities
Application Development & Architecture:
- Design, develop, and deploy scalable backend services using Node.js (ES6+), Express.js, and MongoDB.
- Architect and maintain high-performance APIs, data pipelines, and real-time processing systems leveraging Redis, RabbitMQ, and ClickHouse.
- Build and maintain modern frontend applications using Vue 3 (Options API), Quasar v2, Vite, and Pinia.
Feature Development & Optimization:
- Develop interactive, user-friendly, and responsive UIs with smooth navigation using Vue Router 4.
- Implement session management, authentication flows, and role-based access control.
- Handle bulk operations and large datasets efficiently.
- Build real-time dashboards and analytics features.
Browser Extensions:
- Develop and maintain Chrome extensions using Quasar BEX to extend application functionality directly within the browser.
Workflow & Tools:
- Integrate APIs and external services with Ky for streamlined communication.
- Manage date/time and localization with MomentJS.
- Create interactive onboarding and guided user flows with shepherd.js.
- Implement drag-and-drop functionality using vuedraggable.
- Parse and process large CSV datasets with papaparse.
Collaboration & Leadership:
- Work closely with cross-functional teams to translate business requirements into technical solutions.
- Provide technical leadership, mentoring, and code reviews for junior developers.
- Ensure best practices in code quality, testing, performance optimization, and documentation.
- Collaborate with stakeholders and non-technical team members to align product development with business goals.

Key Skills & Technologies
- Backend: Node.js (ES6+), Express.js, MongoDB, Redis, RabbitMQ, ClickHouse
- Frontend: Vue 3 (Options API), Quasar v2, Vite, Pinia, Vue Router 4
- Tools & Libraries: Ky, MomentJS, shepherd.js, vuedraggable, papaparse
- Other Expertise: Chrome extension development (Quasar BEX), real-time analytics, session management, bulk operations

Qualifications & Requirements
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field (or equivalent practical experience).
- 6+ years of proven experience as a Full-Stack Developer delivering production-ready applications.
- Strong understanding of data-driven workflows, distributed systems, and real-time processing.
- Demonstrated ability to work independently with minimal guidance while also thriving in team environments.
- Strong problem-solving, debugging, and analytical skills.
- Excellent communication and collaboration skills, with the ability to interact effectively with both technical and non-technical stakeholders.

Preferred Qualifications
- Experience working with high-traffic, enterprise-scale applications.
- Knowledge of microservices architecture and containerization (Docker/Kubernetes).
- Familiarity with CI/CD pipelines and automated testing frameworks.
- Exposure to cloud platforms (AWS, GCP, Azure).

Job Type: Full-time

Application Question(s):
- Do you have experience in Node.js (ES6+), Express.js, MongoDB, Redis, RabbitMQ, ClickHouse?
- Do you have experience in Vue 3 (Options API), Quasar v2, Vite, Pinia, Vue Router 4?
- Do you have experience working with high-traffic, enterprise-scale applications?

Experience: Full-stack web development: 6 years (Required)
Work Location: In person

Posted 3 days ago

Apply

9.0 years

0 Lacs

thiruvananthapuram

On-site

9 - 12 Years | 1 Opening | Trivandrum

Role description

Role Proficiency:
Systematically develops and promotes technology solutions, ensuring the developed solution meets both functional and non-functional requirements.

Outcomes:
- Develop and promote technical solutions which support the business requirements within the area of expertise.
- Ensure IT requirements are met and service quality is maintained when introducing new services.
- Consider the cost effectiveness of proposed solution(s).
- Set FAST goals and provide feedback on the FAST goals of mentees.
- Be innovative and technically sound for in-depth project analysis.
- Uphold standards and best practices by adhering to them in his/her own work as well as by implementing them in the team's work through review and monitoring.
- Provide innovative contributions within the team by coming up with ideas to automate repetitive work.
- Mentor Developers in such a way that they can progress to the next level of growth.
- Conduct peer reviews and demand high quality standards for the reviewed deliverables.
- Conduct technical assessments for hiring candidates to Developer roles.

Measures of Outcomes:
- Adherence to engineering process and standards (coding standards)
- Defined productivity standards for the project
- Schedule adherence
- Mandatory trainings/certifications
- Innovativeness (in terms of how many new ideas/thought processes/standards/best practices he/she has come up with)
- Maintain quality standards for individual and team
- Adhere to project schedules for individual and team
- Number of technical issues uncovered during the execution of the project
- Number of defects in the code
- Number of defects post delivery
- Number of noncompliance issues
- On-time completion of mandatory compliance trainings
- Adherence to organizational policies and processes

Outputs Expected:
- Code: Independently develop code; maintain best coding and engineering practices.
- Configure: Implement and monitor the configuration process.
- Test: Create and review unit test cases, scenarios and execution; 100% code coverage for unit testing.
- Documentation: Sign off templates, checklists, guidelines and standards for design/process/development; sign off deliverable documents (design documentation, requirements, test cases and results).
- Design: Creation of design, LLD and architecture for applications, features, business components and data models.
- Interface with Customer: Proactively influence the customer's thought process; consider NPS score for customer and delivery performance.
- Manage Project: Contribute to module-level development.
- Manage Knowledge: Consume and contribute to project-related documents, SharePoint libraries and client universities; create knowledge-sharing assets.
- Assist others in resolving complex technical problems: Manage all aspects of problem management activities, investigating the root cause of problems and recommending SMART (specific, measurable, achievable, realistic, timely) solutions.
- Development and review of standards and documentation: Maintain software process improvement activities; communicate to a range of individuals, teams and other bodies.

Skill Examples:
- Proactively identify solutions for technical issues
- Ability to maintain technical evaluation procedures
- Ability to estimate project effort based on the requirement
- Perform and evaluate test results against product specifications
- Break down complex problems into logical components
- Interface with other teams, designers and other parallel practices
- Set goals for self and team; provide feedback to team members
- Create and articulate impactful technical presentations
- Follow a high level of business etiquette in emails and other business communication
- Drive conference calls with customers and answer customer questions
- Proactively ask for and offer help
- Ability to work under pressure, determine dependencies and risks, facilitate planning, and handle multiple tasks
- Build confidence with customers by meeting deliverables on time with a quality product

Knowledge Examples:
- Deep proficiency in the specialist area
- Proficiency in technology stacks and appropriate software programs/modules
- Programming languages, DBMS, operating systems and software platforms
- SDLC and integrated development environments (IDE)
- Agile methods (Scrum or Kanban)
- Knowledge of the customer domain and sub-domain where the problem is solved
- Knowledge of new technologies (e.g., data science, AI/ML, IoT, big data and cloud platforms)
- RDBMS and NoSQL
- Deep knowledge of architecting solutions and applications on cloud-based infrastructures

Additional Comments:
We are looking for a highly skilled Lead Software Engineer with 10+ years of hands-on development experience, a strong foundation in object-oriented and functional programming, and a deep understanding of modern backend architecture. The ideal candidate will be passionate about building scalable, reliable services and mentoring engineering teams, while remaining technically hands-on.

Key Responsibilities:
- Lead the design, development, and maintenance of scalable backend services and APIs.
- Champion best practices in object-oriented and functional programming.
- Drive architecture decisions related to microservices, cloud deployment, and distributed systems.
- Collaborate cross-functionally with product managers, designers, and other engineers to deliver high-quality software in an Agile environment.
- Contribute to and optimize CI/CD pipelines, and support containerized deployments using Docker and Kubernetes.
- Guide and mentor other engineers while contributing to the team's technical growth.
- Ensure software quality through code reviews, unit testing, and integration testing.
- Communicate effectively with both technical and non-technical stakeholders.

Must-Have Qualifications:
- 10+ years of software engineering experience, with significant time spent in backend development.
- Strong understanding of object-oriented and functional programming principles.
- Experience designing and implementing RESTful APIs.
- Proven experience with Java or Kotlin in a production environment.
- Solid knowledge of microservices architecture and cloud platforms (AWS, GCP, or Azure).
- Hands-on experience with CI/CD, Docker, and Kubernetes.
- Strong problem-solving skills and ability to work in a fast-paced Agile/Scrum environment.
- Excellent communication and teamwork skills.

Skills: Java, RESTful API, CI/CD

About UST
UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world's best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients' organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact, touching billions of lives in the process.

Posted 3 days ago

Apply

4.0 years

0 Lacs

india

Remote

The Data Engineer is expected to be proactive in designing, building and maintaining data systems. The role also requires cross-functional collaboration and ensuring the highest standards of data quality and performance, drawing on expertise in data engineering, data architecture, pipeline creation and big data technologies.

Responsibilities
- Understand the values and vision of the organization
- Protect the Intellectual Property
- Adhere to all the policies and procedures
- Design, develop, and maintain scalable data pipelines for data ingestion, processing and storage
- Build and optimize data architectures and data models for efficient data storage and retrieval
- Develop ETL processes to transform and load data from various sources into data warehouses and data lakes
- Ensure data integrity, quality, and security across all data systems
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions that meet business needs
- Monitor and troubleshoot data pipelines and workflows to ensure high availability and performance
- Document data processes, architectures, and data flow diagrams
- Implement and maintain data integration solutions using industry-standard tools and technologies (e.g., Apache Spark, Kafka, Airflow)

Essential Skills
Job:
- Expertise in data integration, processing and storage
- Expertise in data optimization architecture, data processes and data flow
- Knowledge of data integration tools like Apache Spark, Kafka and Airflow
- Proficiency in SQL and at least one programming language (e.g., Python, Scala)
- Experience with cloud data platforms (e.g., AWS, Azure, GCP) and their data services
- Experience with data visualization tools (e.g., Tableau, Power BI)
Personal:
- Excellent communication and interpersonal skills, with the ability to engage with all levels of employees and management
- Collaborative approach to effectively present and advocate for quick design solutions
- Stay updated on the latest design trends, tools, and technologies, bringing innovative ideas to enhance the product experience
- A proactive approach to problem solving, with a focus on delivering exceptional customer satisfaction

Preferred Skills
Job:
- Strong knowledge of data management, processing, architecture and design
- Sound knowledge of data integration tools like Apache Spark, Kafka and Airflow
- Identify and resolve customer challenges with a focus on providing high-quality service and solutions
Personal:
- Demonstrate proactive thinking
- Strong communication and collaboration skills
- Strong interpersonal relations, expert business acumen and mentoring skills
- Strong problem-solving skills with attention to detail
- Ability to work under stringent deadlines and demanding client conditions
- Strong analytical and problem-solving skills
- Ability to work independently and as part of a team

Other Relevant Information
- Bachelor's degree in Computer Science, Information Technology, or a related field
- Minimum 4 years of experience in data engineering and architecture
- This role offers the flexibility of working remotely in India

Regards,
Sahiba
8296043355
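To make the pipeline responsibilities above concrete, here is a hedged PySpark sketch of a simple extract-transform-load flow; the bucket paths and column names are placeholders, not any employer's actual systems:

```python
# Illustrative ETL sketch; paths and columns are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: raw CSV files landed by an upstream ingestion job.
orders = spark.read.option("header", True).csv("s3a://raw-bucket/orders/")

# Transform: enforce data integrity before anything reaches the lake.
clean = (
    orders
    .dropDuplicates(["order_id"])
    .filter(F.col("amount").cast("double") > 0)
    .withColumn("order_date", F.to_date("created_at"))
)

# Load: columnar, partitioned storage for efficient downstream retrieval.
clean.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3a://lake-bucket/orders/"
)
```

In practice a job like this would be scheduled and monitored from an orchestrator such as Airflow, which the posting names.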

Posted 3 days ago

Apply

3.0 years

0 Lacs

delhi

On-site

About us
Bain & Company is a global management consulting firm that helps the world's most ambitious change makers define the future. Across 65 offices in 40 countries, we work alongside our clients as one team with a shared ambition to achieve extraordinary results, outperform the competition and redefine industries. Since our founding in 1973, we have measured our success by the success of our clients, and we proudly maintain the highest level of client advocacy in the industry. In 2004, the firm established its presence in the Indian market by opening the Bain Capability Center (BCC) in New Delhi. The BCC is now known as BCN (Bain Capability Network), with its nodes across various geographies. BCN is an integral and the largest unit of Expert Client Delivery (ECD). ECD plays a critical role as it adds value to Bain's case teams globally by supporting them with analytics and research solutioning across all industries, specific domains for corporate cases, client development, private equity diligence or Bain intellectual property. The BCN comprises Consulting Services, Knowledge Services and Shared Services.

Who you will work with
This role is based out of the Visualization Centre of Excellence (CoE) at the BCN. The Visualization CoE works closely with global Bain case teams, Bain Partners and end-clients, providing them data analytics and business intelligence support using advanced data analysis and visualization tools (e.g., SQL, Python, Azure, AWS, Tableau, Power BI, Alteryx). The CoE is a one-stop shop for all case requests related to converting data into insightful visualization tools (e.g., survey analytics, leadership KPI dashboards, etc.).

What you'll do
- Design, build, and maintain infrastructure and systems that enable the extraction, transformation, and storage of large datasets for analysis.
- Work with the Bain team or end-clients as an expert on a specific platform/tool/language (Azure/AWS/Python/SQL, etc.), in an individual capacity or leading teams of analysts, to design and deliver impactful insights.
- Support the project lead in end-to-end handling of the entire process, i.e., requirement gathering, data cleaning, processing and automation.
- Investigate data to identify potential issues within ETL pipelines, notify end-users and propose adequate solutions.
- Ensure that the data architecture is scalable and maintainable.
- Apply knowledge of data analysis tools like Azure Databricks, AWS Athena, Alteryx, etc. to support case teams with analysis of KPIs.
- Prepare documentation for further reference.
- Since the team supports product development, build pipelines and algorithms that are scalable and automated.
- Support case leads in managing internal and external stakeholders, across instruments and workstreams, providing expertise in data management and tools.
- Work under the guidance of a Team Lead / Team Manager / Sr. Team Manager, playing a key role in driving the team's overall answer and final materials, client communication, work planning, and team management.
- May take responsibility for assigning work streams to Analysts and monitoring workload; provide tool-based technical expertise to junior team members when required.
- May deploy data engineering solutions using CI/CD pipelines (GitHub, cloud servers using Azure/AWS).
- May lead client/case team calls, communicating data, knowledge, insights and actionable next steps to the case team, and relaying implications to his/her own internal team.
- Keep abreast of new and current statistical, database and data warehousing tools and techniques.

About you
- Graduate/Post-Graduate from a top-tier college with strong academic records and 3-5 years of relevant work experience in areas related to Data Management, Business Intelligence or Business Analytics.
- Hands-on experience in data handling and ETL workstreams.
- Concentration in a quantitative discipline such as Statistics, Mathematics, Engineering, Computer Science, Econometrics, Business Analytics, or Market Research is strongly preferred.
- Minimum 2+ years of experience in database development on cloud-based platforms such as AWS/Azure.
- Working experience with Python and advanced SQL queries, stored procedures, query performance tuning, index maintenance, etc.
- Experience with data modeling and data warehousing principles.
- Experience with ETL tools such as Azure Data Factory, Databricks, AWS Glue, etc.
- Experience in reading data from different data sources, including on-premise data servers, cloud services and several file formats.
- Understanding of database architecture.
- Ability to prioritize projects, manage multiple competing priorities and drive projects to completion under tight deadlines.
- A motivated and collaborative team player, who is a role model and at-cause individual within the team and office.
- Excellent oral and written communication skills, including the ability to communicate effectively with both technical and non-technical senior stakeholders.

Good to Have:
- Exposure to CI/CD pipelines (GitHub, Docker, and containerization) is a plus.
- Candidates with advanced certifications in AWS and Azure will be preferred.
- Experience with Snowflake/GCP is a plus.

What makes us a great place to work
We are proud to be consistently recognized as one of the world's best places to work, a champion of diversity and a model of social responsibility. We are currently ranked the #1 consulting firm on Glassdoor's Best Places to Work list, and we have maintained a spot in the top four on Glassdoor's list for the last 12 years. We believe that diversity, inclusion and collaboration are key to building extraordinary teams. We hire people with exceptional talents, abilities and potential, then create an environment where you can become the best version of yourself and thrive both professionally and personally. We are publicly recognized by external parties such as Fortune, Vault, Mogul, Working Mother, Glassdoor and the Human Rights Campaign for being a great place to work for diversity and inclusion, women, LGBTQ and parents.
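One recurring task in this role is investigating data to identify potential issues within ETL pipelines and notifying end-users. As a loose illustration only (the file and column names are invented), a lightweight pandas check might look like:

```python
# Illustrative data-quality check; file and column names are placeholders.
import pandas as pd

def quality_report(df: pd.DataFrame, key: str) -> dict:
    # Cheap checks that catch common ETL defects early.
    return {
        "rows": len(df),
        "duplicate_keys": int(df[key].duplicated().sum()),
        "null_counts": {c: int(n) for c, n in df.isna().sum().items() if n},
    }

df = pd.read_parquet("kpi_extract.parquet")  # hypothetical pipeline output
report = quality_report(df, key="customer_id")
if report["duplicate_keys"] or report["null_counts"]:
    print("Potential pipeline issue, notify end-users:", report)
```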

Posted 3 days ago

Apply

2.0 - 5.0 years

5 - 13 Lacs

delhi

On-site

Role: Cloud Network Security Professional
Qualification: BE/B.Tech/MCA or equivalent qualification in Computer Science
Experience: 2 to 5 years

Job Description:
- Excellent knowledge of the various services offered by public cloud platforms (AWS, GCP, Azure, etc.).
- Understanding of comprehensive security programs, including technologies and tools, architectures, network and application design, and the policy/business aspects of risk.
- Understanding of Layer 3 routing, viz. BGP and OSPF protocols; Layer 2 protocols (LACP, VLAN, STP, trunking, etc.); and security features (IPSEC VPN, stateful filtering).
- Understanding of all aspects of firewall administration, such as hardware, operating system, encryption tunnels, VPN, and day-to-day operations of firewall rule sets.
- Deploy security solutions and standards based on requirements, best practices, and technical knowledge.
- Network security planning and network engineering.
- Implement and deploy data and network security projects, change management, ongoing maintenance activities, and network engineering support.
- Configure firewall rules, IPS, routing, and VPNs.
- Execute service requests and document changes (such as new environment builds, major changes, version upgrades).
- Analyze network perimeter data, flows, packet filtering, proxy firewalls, and IPS/IDS to create and implement a concrete plan of action to harden the defensive posture.
- Knowledge of switching technologies and concepts such as STP, VRRP, VTP, stacking, LLDP, L2 security, 2/3-tier architecture, tunnelling, L2/L3, firewalls, and IDS.
- Theoretical and practical knowledge of the operation of secure email systems, secure DNS, DHCP, and networking technologies including routers, switches, AAA, firewalls and VPN.
- Identify problems, investigate them and activate quick solutions to minimize downtime; analyze network errors and test potential solutions; identify and collect relevant information about errors and customize event and security logs to identify problems early, following the escalation framework.
- Rely on soft skills such as teamwork and communication to be successful in various work environments.
- Provide technical support to internal teams and external clients whenever required.
- Capable of supporting field site/deployment work.

Skills Required:
- Experience troubleshooting networking issues using several tools (traceroute, mtr, ping, iperf, dig/nslookup, tcpdump/Wireshark and related).
- Good knowledge of network security (SSL/TLS, network and web application firewalls, intrusion detection and prevention services).
- Good knowledge of managing domain transfers, records and DNS security (DNSSEC and DNS filtering).
- Experience with networking and troubleshooting (HTTP, TCP/IP, DNS, routing and switching, load balancing).
- Good OS knowledge oriented to maintenance and administrative purposes (Windows or Linux).
- Knowledge/troubleshooting experience of OSPF/BGP routing protocols.
- Good understanding of security best practices.

Certifications:
- CISSP, CCNP, AWS Networking, AWS Security or equivalent certification required.

Job Types: Full-time, Permanent
Pay: ₹500,000.00 - ₹1,300,000.00 per year
Benefits: Health insurance, Provident Fund
Ability to commute/relocate: Delhi, Delhi: Reliably commute or planning to relocate before starting work (Required)
Experience: Cloud security: 2 years (Required); Networking: 2 years (Required)
Work Location: In person
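As a small illustration of the SSL/TLS knowledge the posting asks for (the hostname is a placeholder), Python's standard library alone can check how close a server certificate is to expiry:

```python
# TLS certificate expiry check using only the standard library.
import socket
import ssl
import time

def days_until_cert_expiry(host: str, port: int = 443) -> int:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()  # validated peer certificate
    expires = ssl.cert_time_to_seconds(cert["notAfter"])
    return int((expires - time.time()) // 86400)

print(days_until_cert_expiry("example.com"), "days until expiry")
```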

Posted 3 days ago

Apply

1.0 years

6 - 8 Lacs

delhi

On-site

Position Overview
We are looking for a Data Scientist with 1 year of hands-on experience and strong foundational knowledge in Machine Learning (ML), Deep Learning (DL), Natural Language Processing (NLP), Generative AI, Transformers, Retrieval-Augmented Generation (RAG), and Large Language Models (LLMs). The ideal candidate will contribute to building intelligent systems, experimenting with state-of-the-art AI techniques, and deploying models that solve real-world problems.

Key Responsibilities
- Apply machine learning and deep learning techniques to solve classification, regression, recommendation, and prediction problems.
- Work with NLP tasks such as text classification, sentiment analysis, named entity recognition, and keyphrase extraction.
- Fine-tune and experiment with transformer-based architectures (BERT, GPT, LLaMA, etc.) for custom use cases.
- Contribute to the design and implementation of RAG pipelines integrating LLMs with external knowledge bases.
- Assist in developing Gen-AI powered applications (e.g., chatbots, summarizers, document assistants).
- Build, test, and evaluate models using real-world datasets, ensuring accuracy, scalability, and robustness.
- Write efficient, production-ready code in Python and leverage frameworks such as PyTorch, TensorFlow, Hugging Face Transformers, LangChain, or similar.
- Collaborate with product, engineering, and business teams to translate research into deployable solutions.
- Document research findings, experiments, and implementation details for reproducibility.

Required Qualifications
- Bachelor's degree in Computer Science, Data Science, AI/ML, or a related field.
- 1 year of hands-on experience in applied ML/DL projects or internships.
- Strong proficiency in Python and libraries such as NumPy, pandas, scikit-learn, TensorFlow, or PyTorch.
- Exposure to transformer models (BERT, GPT, T5, LLaMA, etc.).
- Familiarity with NLP techniques (tokenization, embeddings, vector search, semantic similarity).
- Understanding of Generative AI concepts and working knowledge of LLMs.
- Experience with or exposure to RAG frameworks (LangChain, LlamaIndex, Haystack, etc.).
- Strong SQL skills and ability to work with structured/unstructured datasets.
- Solid analytical, problem-solving, and communication skills.

Preferred Qualifications
- Exposure to cloud platforms (AWS, Azure, GCP) and GPU-based training.
- Knowledge of vector databases (Pinecone, Weaviate, FAISS, Milvus).
- Experience with chatbot development or conversational AI systems.
- Contributions to open-source projects or research publications in AI.

What We Offer
- Opportunity to work with cutting-edge AI/Gen-AI technologies.
- Continuous learning through hands-on projects, mentorship, and R&D initiatives.
- Collaborative and innovative environment with career growth in AI/ML engineering and research.
- Exposure to real-world applications in NLP, Generative AI, and advanced data science.

Job Types: Full-time, Permanent
Pay: ₹600,000.00 - ₹800,000.00 per year
Benefits: Health insurance, Provident Fund
Work Location: In person
Speak with the employer: +91 7428084294
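To ground the RAG terminology above, here is a minimal sketch of the retrieval step, assuming the sentence-transformers package; the documents and model choice are illustrative, and a production pipeline would typically use a vector database such as those the posting lists:

```python
# Minimal RAG retrieval sketch; documents and model are illustrative.
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Our refund policy allows returns within 30 days.",
    "Support is available 24/7 via in-app chat.",
    "Premium plans include priority onboarding.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = model.encode(docs, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    q_vec = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q_vec  # cosine similarity (vectors are unit-length)
    return [docs[i] for i in np.argsort(-scores)[:k]]

# The retrieved context is then stuffed into the LLM prompt (augmentation).
context = "\n".join(retrieve("How long do I have to return a product?"))
print(f"Answer using only this context:\n{context}\n\nQuestion: ...")
```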

Posted 3 days ago

Apply

0 years

1 - 5 Lacs

india

On-site

Key Roles & Responsibilities:
- Requirement Gathering & Problem Definition: Work with business leaders, clients, and stakeholders to define use cases and map them to AI capabilities.
- Solution Design: Architect scalable AI/ML solutions (cloud, edge, or hybrid) that integrate with existing enterprise systems.
- Technology Selection: Decide on frameworks (TensorFlow, PyTorch, Hugging Face), cloud platforms (AWS SageMaker, Azure ML, GCP Vertex AI), and data pipelines.
- System Integration: Ensure AI models can seamlessly interact with APIs, ERP/CRM, IoT devices, or industry-specific platforms.
- Scalability & Performance: Architect for high availability, distributed training, inference optimization, and cost efficiency.
- Security & Compliance: Embed data governance, ethical AI principles, and compliance with standards (GDPR, HIPAA, ISO).
- Prototype to Production: Lead POC development and the transition to full-scale deployment.
- Client-Facing Role: Act as the AI technical consultant bridging business requirements and AI implementation.

Job Type: Full-time
Pay: ₹16,251.07 - ₹46,879.89 per month
Work Location: In person

Posted 3 days ago

Apply

2.0 years

3 - 7 Lacs

mohali

On-site

We are building a next-generation team collaboration and communication platform similar to Slack, DingTalk, and JoyTeam.ai. To deliver a fast, secure, and reliable product, we are looking for a Software Test Engineer who will design and execute test strategies and ensure high-quality releases across web, mobile, and backend systems.

Key Responsibilities
Design, write, and execute manual and automated test cases for web, mobile, and backend services.
Perform functional, regression, performance, load, and integration testing to ensure platform stability.
Test real-time features such as chat, video calls, notifications, and file sharing across devices and browsers.
Automate repetitive scenarios using Selenium, Cypress, Appium, or Playwright.
Conduct API testing (Postman, REST Assured) and validate data integrity (see the sketch after this listing).
Collaborate with developers and product managers to define acceptance criteria.
Document and track defects using tools like Jira, Azure DevOps, or Trello.
Work within CI/CD pipelines to integrate test automation.
Contribute to security and usability testing for enterprise readiness.
Ensure the platform is tested for scalability and reliability under heavy load.

Skills & Qualifications
Bachelor's degree in Computer Science, IT, or a related field.
2–5 years of experience in software testing / QA engineering.
Strong understanding of QA methodologies, the SDLC, and Agile practices.
Hands-on experience with test automation frameworks.
Proficiency in at least one scripting/programming language (Python, Java, or JavaScript).
Knowledge of databases (SQL/NoSQL) for validation.
Strong communication and problem-solving skills.

Preferred Experience
Testing real-time SaaS platforms (chat, video, collaboration, or messaging apps).
Familiarity with performance testing tools (JMeter, Locust, k6).
Experience in multi-platform testing (iOS, Android, Web).
Exposure to cloud environments (AWS, GCP, Azure).

Job Type: Full-time
Pay: ₹25,511.07 - ₹64,819.60 per month
Work Location: In person
Speak with the employer: +91 7087111454
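For the API-testing bullet above, here is a minimal pytest sketch using the requests library; the base URL, route, and response fields are hypothetical stand-ins for the platform's real API, not part of the listing.

import requests

BASE_URL = "https://api.example.com"  # assumption: the service under test

def test_send_message_returns_id():
    payload = {"channel": "general", "text": "hello"}
    resp = requests.post(f"{BASE_URL}/v1/messages", json=payload, timeout=5)
    assert resp.status_code == 201
    body = resp.json()
    assert "message_id" in body              # data-integrity check
    assert body["text"] == payload["text"]   # round-trip validation

Saved as test_messages.py, this runs under pytest and slots directly into a CI/CD pipeline step.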

Posted 3 days ago

Apply

4.0 years

18 Lacs

bhubaneswar, odisha, india

Remote

Experience: 4.00+ years
Salary: INR 1800000.00 / year (based on experience)
Expected Notice Period: 7 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by SuiteSolvers)
(Note: This is a requirement for one of Uplers' clients, an Atlanta-based IT services and IT consulting company.)

What do you need for this opportunity?
Must-have skills: Docker, Vector Database, Fintech, Testing and Deployment, Data Science, Artificial Intelligence (AI), Large Language Model APIs (LLM APIs), Large Language Models (LLMs), Prompt Engineering, FastAPI / Flask, Cloud

About The Job
SuiteSolvers is a boutique consulting firm that helps mid-market companies transform and scale through smart ERP implementations, financial automation, and operational strategy. We specialize in NetSuite and Acumatica, and we're building tools that make finance and operations more intelligent and less manual. Our clients range from high-growth startups to billion-dollar enterprises. We're hands-on, fast-moving, and results-driven; our work shows up in better decisions, faster closes, cleaner audits, and smarter systems. We're not a bloated agency. We're a small team with high standards. If you like solving real business problems with clean data pipelines, smart automation, and the occasional duct-tape hack that gets the job done, this might be your kind of place.

We are looking for a Data Engineer.

Essential Technical Skills

AI/ML (Required)
2+ years of hands-on experience with LLM APIs (OpenAI, Anthropic, or similar)
Production deployment of at least one AI system that is currently running in production
LLM framework experience with LangChain, CrewAI, or AutoGen (any one is sufficient)
Function calling / tool use – the ability to build AI systems that can call external APIs and functions (see the sketch after this listing)
Basic prompt engineering – understanding of techniques like Chain-of-Thought and ReAct patterns

Python Development (Required)
3+ years of Python development with strong fundamentals
API development using Flask or FastAPI with proper error handling
Async programming – understanding of async/await patterns for concurrent operations
Database integration – working with PostgreSQL, MySQL, or similar relational databases
JSON/REST APIs – consuming and building REST services

Production Systems (Required)
2+ years building production software that serves real users
Error handling and logging – building robust systems that handle failures gracefully
Basic cloud deployment – experience with AWS, Azure, or GCP (any one platform)
Git/version control – collaborative development using Git workflows
Testing fundamentals – unit testing and integration testing practices

Business Process (Basic, Required)
User requirements – the ability to translate business needs into technical solutions
Data quality – recognizing and handling dirty/inconsistent data
Exception handling – designing workflows for edge cases and errors

Professional Experience (Minimum)

Software Engineering
3+ years of total software development experience
1+ production AI project – any AI/ML system deployed to production (even a simple one)
Cross-functional collaboration – has worked with non-technical stakeholders
Problem-solving – demonstrated ability to debug and resolve complex technical issues

Communication & Collaboration
Technical documentation – the ability to write clear technical docs and code comments
Stakeholder communication – explaining technical concepts to business users
Independent work – the ability to work autonomously with minimal supervision
Learning agility – quickly picking up new technologies and frameworks

Educational Background (Any One)
Bachelor's degree in Computer Science, Engineering, or a related technical field, OR equivalent experience (demonstrable technical skills through projects/work)
Coding bootcamp plus 2+ years of professional development experience
Self-taught, with a strong portfolio of production projects
Technical certifications (AWS, Google Cloud, etc.) plus relevant experience (nice to have)

Demonstrable Skills (Portfolio Requirements)
Must show evidence of:
One working AI application – a GitHub repo or live demo of an LLM integration
Python projects – code samples showing API development and data processing
Production deployment – any application currently running and serving users
Problem-solving ability – examples of debugging complex issues or optimizing performance

Nice to Have (Not Required)
Financial services or fintech experience
Vector database (Pinecone, Weaviate) experience
Docker/containerization knowledge
Advanced ML/AI education or certifications

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of being shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one. Depending on the assessments you clear, you can apply for those as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
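As an illustration of the function-calling requirement, the following sketch shows one round trip with the OpenAI Python SDK, assuming openai>=1.0 is installed and OPENAI_API_KEY is set; the tool, its schema, and the model name are examples only, not the client's stack.

import json
from openai import OpenAI

client = OpenAI()

def get_invoice_status(invoice_id: str) -> str:
    # Hypothetical internal lookup the model is allowed to call.
    return json.dumps({"invoice_id": invoice_id, "status": "paid"})

tools = [{
    "type": "function",
    "function": {
        "name": "get_invoice_status",
        "description": "Look up the payment status of an invoice.",
        "parameters": {
            "type": "object",
            "properties": {"invoice_id": {"type": "string"}},
            "required": ["invoice_id"],
        },
    },
}]

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "Is invoice INV-42 paid?"}],
    tools=tools,
)

call = resp.choices[0].message.tool_calls[0]   # the tool call the model chose
args = json.loads(call.function.arguments)     # arguments arrive as a JSON string
print(get_invoice_status(**args))              # execute and feed the result back

A production loop would append the tool result as a "tool" message and call the model again for the final answer.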

Posted 3 days ago

Apply

3.0 years

3 - 10 Lacs

rājkot

On-site

Profile: Senior Python Backend Developer
Experience: 3+ years
Skills: FastAPI / Django / Flask (see the sketch after this listing); Cloud (AWS, GCP, Azure); DevOps (Docker, Kubernetes, Terraform); SQL (PostgreSQL/MySQL) and NoSQL (Redis, Elasticsearch); CI/CD pipelines and testing
Salary: Up to ₹90,000 per month
Location: Rajkot
Apply now: career.itjobsvale@gmail.com, +91 7211188810
Job Type: Full-time
Pay: ₹30,000.00 - ₹90,000.00 per month
Benefits: Flexible schedule
Work Location: In person
Speak with the employer: +91 7211188810
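For concreteness, here is a minimal async FastAPI endpoint in the style this listing asks about; the route, the model, and the in-memory store are assumptions made for the example.

import asyncio
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    name: str
    price: float

FAKE_DB: dict[int, Item] = {}  # stand-in for PostgreSQL/Redis

@app.post("/items/{item_id}")
async def upsert_item(item_id: int, item: Item) -> dict:
    await asyncio.sleep(0)     # placeholder for an async DB call
    FAKE_DB[item_id] = item
    return {"id": item_id, "name": item.name}

# Run locally (assuming uvicorn is installed): uvicorn main:app --reload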

Posted 3 days ago

Apply

3.0 - 4.0 years

6 - 7 Lacs

ahmedabad

On-site

We are looking for a highly skilled and experienced Python Developer with 3–4 years of hands-on experience in designing, developing, and maintaining scalable software solutions. The ideal candidate should have strong expertise in backend development using Python frameworks, API integration, database handling, and version control.

Key Responsibilities:
- Design and develop scalable, robust, and secure backend services using Python.
- Build and integrate RESTful APIs and third-party services.
- Collaborate with front-end developers, UI/UX designers, and product managers to deliver high-quality solutions.
- Write clean, efficient, and reusable code following best practices.
- Maintain and optimize existing applications for performance and scalability.
- Participate in code reviews, testing, and debugging.
- Work with databases such as PostgreSQL, MySQL, or MongoDB.
- Implement CI/CD pipelines and version control (Git).
- Contribute to system architecture and technical documentation.

Required Skills:
- Strong proficiency in Python 3.x
- Experience with one or more Python frameworks: Django, Flask, or FastAPI
- Good understanding of RESTful APIs and web services
- Experience with relational and NoSQL databases (PostgreSQL, MySQL, MongoDB)
- Familiarity with cloud services (AWS, Azure, or GCP) is a plus
- Experience with Git and version-control workflows
- Knowledge of Docker and containerization is an advantage
- Solid understanding of OOP, design patterns, and software development principles
- Familiarity with Agile/Scrum development methodologies

Nice to Have:
- Experience with asynchronous programming (AsyncIO, Celery) (see the sketch after this listing)
- Exposure to front-end technologies (JavaScript, React, or Angular)
- Experience with unit testing and test-driven development (TDD)
- DevOps knowledge or experience with CI/CD tools

If you are looking for a new opportunity, please share your CV at hr@appxcellency.io.
Job Type: Full-time
Pay: ₹50,000.00 - ₹60,000.00 per month
Benefits: Paid sick time
Experience: Python: 3 years (Preferred)
Work Location: In person
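The AsyncIO item above can be illustrated with a few lines of standard-library Python; the simulated I/O calls are placeholders for real network or database operations.

import asyncio

async def fetch(name: str, delay: float) -> str:
    await asyncio.sleep(delay)   # stands in for a network or DB call
    return f"{name} done"

async def main() -> None:
    # Run three I/O-bound tasks concurrently instead of one after another.
    results = await asyncio.gather(
        fetch("users", 0.3), fetch("orders", 0.2), fetch("stock", 0.1)
    )
    print(results)               # ['users done', 'orders done', 'stock done']

asyncio.run(main())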

Posted 3 days ago

Apply
