Technical Architect (Java)
Location: Noida/Pune
Job Type: Full-Time
Experience: 10+ Years

Job Summary:
We are looking for an experienced, hands-on Technical Architect to design and implement complex enterprise-level solutions. The ideal candidate will have deep expertise in Java technologies, cloud architecture, and microservices, and a strong ability to guide development teams in best practices and scalable systems design.

Key Responsibilities:
- Design scalable, maintainable, and high-performance Java-based systems
- Define architectural standards and enforce best coding practices
- Guide and mentor development teams through all stages of the SDLC
- Collaborate with cross-functional teams (Product, QA, DevOps)
- Evaluate tools, technologies, and frameworks for continuous improvement

Required Skills:
- 10+ years of Java/J2EE development experience (3+ as an Architect)
- Expertise in Spring, Spring Boot, Microservices, and REST APIs
- Strong knowledge of cloud platforms (AWS/Azure/GCP)
- Proficiency in SQL/NoSQL databases, CI/CD pipelines, and Docker/Kubernetes
- Excellent communication, leadership, and problem-solving skills

For more details or to apply, please email your updated resume to priti.rathore@varaisys.com
Role: DevOps Engineer
Experience: 5+ Years
Location: Noida, Uttar Pradesh
Mode: Work from Office

Role Overview
We are looking for a DevOps Engineer with strong expertise in on-premise DevOps practices. The ideal candidate will have 5+ years of hands-on experience with tools such as Jenkins, Terraform, Ansible, and Python, and will play a key role in automating infrastructure, managing CI/CD pipelines, and supporting our development and operations teams. This is an exciting opportunity for an engineer passionate about infrastructure automation, continuous integration, and system scalability.

Key Responsibilities
- Infrastructure Automation: Design, implement, and maintain automated infrastructure solutions using Terraform and Ansible.
- CI/CD Pipeline Management: Build and maintain continuous integration and continuous deployment pipelines with Jenkins or similar tools.
- Infrastructure as Code (IaC): Automate the provisioning and management of cloud resources and on-premise infrastructure through IaC tools such as Terraform.
- Cloud Integration: Work with cloud platforms such as AWS, GCP, and Azure to integrate cloud-based infrastructure with on-premise systems.
- Scripting & Automation: Use Python or Bash scripting to automate tasks, processes, and configurations.
- Monitoring and Observability: Implement and manage observability tools to ensure system health, performance, and uptime.
- Collaboration: Work with development, operations, and QA teams to streamline workflows and optimize the software delivery lifecycle.
- Troubleshooting: Diagnose and resolve infrastructure-related issues, ensuring high availability and reliability.

Key Requirements
- Experience: Minimum of 5 years of hands-on DevOps experience, primarily focused on on-premise infrastructure.

Technical Skills
- Strong proficiency in Jenkins for managing CI/CD pipelines.
- Expertise in Terraform for infrastructure provisioning and automation.
- Experience with Ansible for configuration management and automation.
- Proficiency in scripting languages such as Python and Bash for automation and troubleshooting.
- Familiarity with cloud platforms such as AWS, GCP, or Azure, and experience integrating cloud with on-premise systems.
- Solid understanding of Linux environments.
- Knowledge of observability tools (Prometheus, Grafana, ELK Stack, etc.) for monitoring and troubleshooting infrastructure.

Skills: Prometheus, DevOps, Jenkins, Azure, CI/CD, Kubernetes, Bash, Ansible, Terraform, GCP, AWS, Docker, Python, Linux, ELK Stack, Grafana
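To give candidates a feel for the "Scripting & Automation" responsibility above, here is a minimal sketch of the kind of Python utility this role might write: a TCP health check across an inventory of hosts. The host list and ports are purely hypothetical examples, not part of any actual Varaisys system.

```python
import socket

def check_port(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def report(inventory):
    """Yield (host, port, status) tuples for a list of (host, port) pairs."""
    for host, port in inventory:
        status = "UP" if check_port(host, port) else "DOWN"
        yield (host, port, status)

if __name__ == "__main__":
    # Hypothetical inventory; in practice this might come from an
    # Ansible inventory file or a CMDB export.
    inventory = [("127.0.0.1", 22), ("127.0.0.1", 8080)]
    for host, port, status in report(inventory):
        print(f"{host}:{port} {status}")
```

In day-to-day use a script like this would typically be wired into a Jenkins job or a cron task, with results forwarded to an observability stack such as Prometheus.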
Job Title: Marketing Trainee (Fresher)
Location: Noida, Uttar Pradesh
Experience: 0–1 year
Qualification: MBA in Marketing

What you’ll do:
- Collaborate with product and tech teams to understand product features, USPs, and value propositions
- Assist in building go-to-market (GTM) strategies for our products and services
- Manage digital marketing campaigns: social media, SEO, content, and email marketing
- Create marketing collateral: blogs, case studies, newsletters, presentations, and pitch decks
- Conduct market research and gather customer insights to guide strategies
- Track KPIs and provide actionable analytics to improve campaigns
- Support sales enablement with marketing content tailored for lead generation

What we’re looking for:
- MBA in Marketing (candidates from Tier 2 colleges are strongly encouraged to apply)
- Excellent communication, storytelling, and presentation skills
- Creative, analytical, and proactive mindset
- Passion for technology products and IT services

Why join us:
- Hands-on exposure to end-to-end product marketing
- Work closely with leadership and make a measurable impact
- Fast-paced environment with real-world learning opportunities

📩 Apply now: Send your CV to priti.rathore@varaisys.com
Job Role: Senior Marketing Specialist
Location: Noida (Work From Office)
Experience: 5 Years

Role Overview
We’re seeking a smart, strategic, and execution-driven marketing professional to build and lead our marketing function from scratch. If you love working on both services and product launches, have a growth-hacking mindset, and can wear multiple hats, this role is for you.

Responsibilities
- Create and execute comprehensive go-to-market strategies for multiple SaaS/IT products.
- Define clear product positioning, messaging, and value propositions for target industries.
- Run digital marketing campaigns including SEO, SEM, paid ads, content marketing, and social media.
- Build marketing collateral, case studies, and sales enablement assets.
- Establish Varaisys’ brand presence across digital platforms, communities, and industry events.
- Track performance metrics, optimize campaigns, and drive consistent lead flow.
- Collaborate closely with leadership to align product and service marketing goals.
- Scale marketing efforts and contribute to building the long-term marketing roadmap.

Requirements
- 5+ years of experience in B2B SaaS/IT product marketing with hands-on product launch expertise.
- Strong background in digital marketing, demand generation, and go-to-market planning.
- Prior experience marketing AI/GenAI/emerging tech products is highly preferred.
- Proven ability to deliver end-to-end campaigns that drive measurable results.
- Proficiency with tools such as HubSpot, Google Ads, LinkedIn Campaign Manager, and SEO/analytics tools.
- Balance of strategic thinking and hands-on execution.

Why Join Us?
- Lead the launch of market-ready tech products.
- Full ownership of marketing strategy and execution.
- Exposure to both IT services and SaaS product marketing.
- High-impact role with direct collaboration with leadership.

📩 Apply now: Send your CV to priti.rathore@varaisys.com
Job Role: Big Data Engineer
Location: Noida (Work From Office)
Experience: 5 Years

Role Overview
We’re looking for a Big Data Engineer who thrives on scale: the kind where terabytes of data move every single day. You’ll build and optimize high-throughput data pipelines that fuel analytics, AI, and business intelligence across the org.

Key Responsibilities
- Design, develop, and optimize data pipelines processing terabytes of structured and unstructured data per day.
- Build real-time and batch data processing systems using Spark, Kafka, and Hadoop ecosystem tools.
- Manage data ingestion, transformation, and storage for analytics, ML, and reporting needs.
- Work with Data Scientists, ML Engineers, and Analysts to deliver scalable, production-ready datasets.
- Implement data quality, lineage, and observability frameworks to ensure reliability and consistency.
- Optimize ETL performance, cluster utilization, and query execution across distributed environments.
- Manage data workflows in the cloud (AWS, GCP, or Azure) for speed, cost, and scalability.

Must-Have Skills
- Deep expertise with Apache Spark, Kafka, Hadoop (HDFS, Hive, YARN), and Airflow.
- Strong programming skills in Python, Scala, or Java.
- Proven experience handling large-scale, high-velocity data (TBs/day) in production systems.
- Strong understanding of distributed computing, data partitioning, and cluster performance tuning.
- Hands-on experience with cloud-native data tools (AWS EMR, Glue, Redshift, GCP BigQuery, or Azure Databricks).
- Familiarity with data lake/lakehouse architectures and schema evolution.

Good to Have
- Experience with real-time streaming (Flink, Kinesis, Pulsar) and data lake formats (Delta, Iceberg, Hudi).
- Understanding of data governance, cataloging, and lineage.
- Exposure to ML pipelines or feature store engineering.

📩 Apply now: Send your CV to priti.rathore@varaisys.com
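For candidates gauging fit, the "data partitioning" skill above boils down to the map/shuffle/reduce pattern behind Spark-style aggregation. This is a hedged, dependency-free sketch in plain Python (not Spark itself); the record shape and key/value extractors are hypothetical illustrations.

```python
from collections import defaultdict

def partition(records, key_fn, num_partitions):
    """Shuffle step: route each record to a partition by hashing its key."""
    parts = [[] for _ in range(num_partitions)]
    for rec in records:
        parts[hash(key_fn(rec)) % num_partitions].append(rec)
    return parts

def reduce_partition(part, key_fn, value_fn):
    """Reduce step: sum values per key within a single partition."""
    out = defaultdict(int)
    for rec in part:
        out[key_fn(rec)] += value_fn(rec)
    return dict(out)

def aggregate(records, key_fn, value_fn, num_partitions=4):
    """Partitioned sum-by-key: the shape of a Spark reduceByKey job.

    Hash partitioning guarantees each key lands in exactly one
    partition, so the final merge never sees the same key twice.
    """
    merged = {}
    for part in partition(records, key_fn, num_partitions):
        for k, v in reduce_partition(part, key_fn, value_fn).items():
            merged[k] = merged.get(k, 0) + v
    return merged
```

In a real cluster each partition would be processed on a different executor and the shuffle would move data over the network; the single-process version above only illustrates the data flow.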
Job Role: Product Marketing Manager
Location: Noida (Work From Office)
Experience: 5+ Years

Role Overview
We’re seeking a smart, strategic, and execution-driven marketing professional to build and lead our marketing function from scratch. If you love working on both services and product launches, have a growth-hacking mindset, and can wear multiple hats, this role is for you.

Responsibilities
- Create and execute comprehensive go-to-market strategies for multiple SaaS/IT products.
- Define clear product positioning, messaging, and value propositions for target industries.
- Run digital marketing campaigns including SEO, SEM, paid ads, content marketing, and social media.
- Build marketing collateral, case studies, and sales enablement assets.
- Establish Varaisys’ brand presence across digital platforms, communities, and industry events.
- Track performance metrics, optimize campaigns, and drive consistent lead flow.
- Collaborate closely with leadership to align product and service marketing goals.
- Scale marketing efforts and contribute to building the long-term marketing roadmap.

Requirements
- 5+ years of experience in B2B SaaS/IT product marketing with hands-on product launch expertise.
- Strong background in digital marketing, demand generation, and go-to-market planning.
- Prior experience marketing AI/GenAI/emerging tech products is highly preferred.
- Proven ability to deliver end-to-end campaigns that drive measurable results.
- Proficiency with tools such as HubSpot, Google Ads, LinkedIn Campaign Manager, and SEO/analytics tools.
- Balance of strategic thinking and hands-on execution.

📩 Apply now: Send your CV to priti.rathore@varaisys.com
Job Role: Data Scientist
Company: Varaisys Private Limited
Location: Noida (On-site)
Experience: 4+ Years

About the Role:
We are looking for an experienced Data Scientist to work on enterprise-scale datasets and develop advanced machine learning and deep learning models. This role is hands-on, technically challenging, and ideal for someone ready to take ownership of complex analytics projects.

Key Responsibilities:
- Develop, train, and deploy machine learning and deep learning models, including RNNs, LSTMs, GANs, reinforcement learning, and other advanced algorithms.
- Analyze high-volume, complex datasets to identify patterns, trends, anomalies, and actionable insights.
- Apply statistical modeling techniques such as regression, Bayesian analysis, clustering, and predictive modeling to solve business problems.
- Collaborate with cross-functional teams to implement enterprise-grade data solutions that support strategic decision-making.
- Conduct model evaluation, validation, and optimization, including hyperparameter tuning and performance monitoring.
- Visualize and present findings effectively to technical and non-technical stakeholders using dashboards and reports.
- Stay updated on emerging trends in machine learning, deep learning, AI, and statistical modeling, and recommend innovative solutions for business challenges.
- Ensure that models and analytics practices follow ethical AI principles and best practices.

Required Qualifications:
- Bachelor’s or Master’s degree in Statistics, Applied Mathematics, Computer Science, or a related quantitative field.
- 4–8 years of experience in data science, predictive analytics, or applied statistics.
- Strong proficiency in Python or R, and experience with ML/DL libraries such as Scikit-Learn, TensorFlow, PyTorch, Statsmodels, or Prophet.
- Expertise in statistical modeling, regression, Bayesian methods, predictive analytics, and deep learning techniques.
- Hands-on experience working with large-scale enterprise datasets.
- Familiarity with big data platforms and cloud-based analytics workflows is preferred.
- Strong analytical thinking, problem-solving skills, and the ability to communicate complex concepts clearly to stakeholders.
Role: DevOps Engineer
Experience: 5+ Years
Location: Singapore (full-time onsite role; we are open to candidates currently in India who are willing to relocate)

About the Role
We’re looking for a skilled DevOps Engineer to design, build, and optimize scalable CI/CD pipelines and cloud-native infrastructure. You’ll collaborate closely with cross-functional teams to accelerate innovation, improve release efficiency, and ensure high system reliability.

Key Responsibilities
- CI/CD & Pipeline Management: Design, implement, and optimize CI/CD pipelines for continuous integration and delivery across multiple teams.
- Cloud-Native Infrastructure: Deploy and manage containerized applications using Kubernetes, Docker, and Helm in cloud environments such as Azure.
- Automation & “Everything-as-Code”: Drive automation in deployment, configuration, and operations using tools like Terraform and Octopus Deploy.
- Monitoring & Observability: Implement and manage monitoring tools such as Prometheus, Grafana, Dynatrace, and ELK Stack for proactive issue detection.
- POC & Tech Leadership: Lead proofs of concept for emerging technologies, fostering continuous improvement and modernization.
- Collaboration & Ownership: Work closely with internal and external stakeholders to deliver features, resolve issues, and ensure timely sprint deliveries.

Required Skills
- Strong expertise in CI/CD, containerization (Docker/Kubernetes), and cloud infrastructure (Azure, AWS, or GCP).
- Hands-on experience with GitLab, Jenkins, or Azure DevOps.
- Familiarity with Terraform, Octopus Deploy, and automation frameworks.
- Scripting in PowerShell, Bash, or Python (nice to have).
- Strong analytical and problem-solving mindset with a proactive, team-oriented attitude.
- Excellent communication and collaboration skills.
- Bachelor’s or Master’s in Computer Science or a related field.

Tech Environment
- Scripting: PowerShell, Bash, Python (nice to have)
- CI/CD Tools: GitLab, Jenkins, Azure DevOps
- Containers: Kubernetes, Docker, Helm
- Cloud: Azure, AWS, Google Cloud
- Monitoring: Prometheus, Grafana, Dynatrace, ELK Stack, Splunk
- Database: SQL Server, Oracle
- Deployment: Octopus Deploy, Terraform
- Version Control: Git
- Networking: Ingress, Load Balancers (nice to have)
We are seeking a highly skilled Data Scientist with expertise in statistics, time series analysis, and NLP to join our dynamic team. The ideal candidate will have a strong background in advanced statistical techniques, machine learning algorithms, and data-driven forecasting. This role involves identifying patterns, trends, and anomalies in complex datasets and applying statistical modeling to solve business challenges and drive innovation.

Key Responsibilities:
- Apply advanced statistical methods to analyze large and complex datasets and extract meaningful insights.
- Design and implement time series analysis techniques for forecasting and trend detection, leveraging models such as ARIMA and SARIMA.
- Develop, test, and deploy machine learning models for pattern detection, anomaly identification, and predictive analytics.
- Create scalable forecasting models to address real-world business problems across various domains.
- Perform hypothesis testing and statistical validation to ensure the reliability and accuracy of findings.
- Clean, preprocess, and validate data to improve its usability and integrity for analysis.
- Visualize and interpret analytical findings to communicate actionable insights to non-technical stakeholders effectively.
- Collaborate with cross-functional teams to implement data-driven solutions tailored to business challenges.
- Continuously research and implement the latest advancements in statistics, AI, and time series modeling to enhance analytics capabilities.

Required Qualifications:
- Bachelor’s or Master’s degree in Statistics, Applied Mathematics, or a related quantitative field.
- 4 years of experience in data science, with a strong focus on statistical analysis, time series modeling, and NLP.
- Expertise in time series analysis, including ARIMA, SARIMA, Holt-Winters, and other forecasting techniques.
- Proficiency in programming languages such as Python or R, with experience using libraries like statsmodels, Prophet, Scikit-Learn, and NumPy.
- Strong understanding of statistical concepts, including regression analysis, Bayesian methods, and hypothesis testing.
- Experience with data visualization tools such as Tableau, Power BI, Matplotlib, or Seaborn for presenting insights.
- Hands-on experience with data manipulation tools such as SQL, Pandas, and Excel.
- Excellent problem-solving and critical-thinking skills, with the ability to simplify complex statistical concepts for non-technical audiences.
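As an orientation for applicants, the forecasting families named above (ARIMA, SARIMA, Holt-Winters) are normally reached through libraries such as statsmodels; the simplest member of that family, single exponential smoothing, can be sketched without any dependencies. This is an illustrative toy, not a production model, and the alpha value is an arbitrary example.

```python
def exp_smooth(series, alpha=0.5):
    """Single exponential smoothing: level_t = alpha*y_t + (1-alpha)*level_{t-1}."""
    if not 0 < alpha <= 1:
        raise ValueError("alpha must be in (0, 1]")
    level = series[0]          # initialize the level at the first observation
    smoothed = [level]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
        smoothed.append(level)
    return smoothed

def forecast(series, alpha=0.5, steps=3):
    """Flat forecast: repeat the last smoothed level for each future step."""
    last = exp_smooth(series, alpha)[-1]
    return [last] * steps
```

Holt-Winters extends this recursion with trend and seasonal components, and ARIMA/SARIMA replace it with autoregressive and moving-average terms; in practice one would fit those with statsmodels or Prophet rather than by hand.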