Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
At Tarana, you will help build a cutting-edge cloud product -- a management system for wireless networks, scaling to millions of devices -- using modern cloud-native architecture and open-source technologies. You will be responsible for designing and implementing distributed software in a microservices architecture. This could include everything from requirements gathering (working with Product Management and customers) to high-level design, implementation, integrations, operations, troubleshooting, performance tuning, and scaling. You will work as a key member of an R&D team that owns one or more services end-to-end. There will be PoCs, customer pilots, and production releases, all in an agile engineering environment. Expect to be challenged and to stretch your skills daily. Expect to meet or beat exacting standards of quality and performance. We will provide the right mentoring to make sure you can succeed. The job is based in Pune and requires in-person presence in the office.

Required Skills & Experience:
- Bachelor’s degree (or higher) in Computer Science or a closely related field from a reputed (Tier 1/Tier 2) university
- 10+ years of experience in backend software development at product companies or tech startups
- Experience building SaaS/IoT product offerings is a plus
- Software development in Java and its associated ecosystem (e.g., Spring Boot, Hibernate)
- Microservices and RESTful APIs: implementation and consumption
- Conceptual knowledge of distributed systems -- clustering, asynchronous messaging, streaming, scalability and performance, data consistency, high availability, etc. -- is a big plus
- Good understanding of databases (relational, NoSQL) and caching; experience with any time-series database is a plus
- Experience with distributed messaging systems such as Kafka/Confluent, Kinesis, or Google Pub/Sub is a plus
- Experience with cloud-native platforms like Kubernetes is a big plus
- Working knowledge of network protocols (TCP/IP, HTTP), standard network architectures, and RPC mechanisms (e.g., gRPC)

Since our founding in 2009, we’ve been on a mission to accelerate the pace of bringing fast and affordable internet access — and all the benefits it provides — to the 90% of the world’s households who can’t get it. Through a decade of R&D and more than $400M of investment, we’ve created an entirely unique next-generation fixed wireless access technology, powering our first commercial platform, Gigabit 1 (G1). It delivers a game-changing advance in broadband economics in both mainstream and underserved markets, using either licensed or unlicensed spectrum. G1 started production in mid-2021 and has now been installed by over 160 service providers globally. We’re headquartered in Milpitas, California, with additional research and development in Pune, India. G1 has been developed by an incredibly talented and pioneering core technical team. We are looking for more world-class problem solvers who can carry on our tradition of customer obsession and ground-breaking innovation. We’re well funded, growing incredibly quickly, maintaining a superb results-focused culture while we’re at it, and enjoying the positive difference we are making for people all over the planet. If you want to help make a real difference in this world, apply now!
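The posting leans heavily on asynchronous messaging and pub/sub systems (Kafka, Kinesis, Google Pub/Sub). As a rough illustration of the pattern those brokers implement, here is a toy single-process sketch in Python; it is not any broker's actual API, and the topic name and payload are invented for the example.

```python
from collections import defaultdict, deque

class MiniBroker:
    """Toy in-memory pub/sub broker: each topic fans out to per-subscriber queues."""
    def __init__(self):
        self.queues = defaultdict(list)  # topic -> list of subscriber deques

    def subscribe(self, topic):
        q = deque()
        self.queues[topic].append(q)
        return q  # subscriber drains messages from its own queue

    def publish(self, topic, message):
        for q in self.queues[topic]:   # every subscriber gets its own copy
            q.append(message)

broker = MiniBroker()
inbox_a = broker.subscribe("device.metrics")
inbox_b = broker.subscribe("device.metrics")
broker.publish("device.metrics", {"device_id": 42, "rssi": -61})
print(inbox_a[0])
```

Real brokers add the hard parts this sketch omits: durable storage, partitioning, consumer offsets, and delivery guarantees.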
Posted 3 days ago
4.0 years
0 Lacs
Navi Mumbai, Maharashtra, India
On-site
📍 Location: Vashi | 💼 Salary: ₹30,000–₹40,000/month

About the Role
ASBS MBA is hiring a technically strong and self-reliant Website & SEO Specialist to take full ownership of our website and organic performance. You’ll manage SEO strategy, implement website updates, and track user behavior, all while being the sole point of contact for web and SEO operations. Use of AI tools, prompt engineering, and setup and usage of GA4 are essential for this role.

Key Responsibilities

SEO Strategy & Content Optimization
- Perform keyword research, content planning, and clustering
- Optimize pages using AI tools (ChatGPT, SurferSEO, NeuronWriter) for scalable content execution
- Apply prompt engineering to generate SEO-rich content, meta tags, and FAQs
- Implement on-page SEO: titles, headers, interlinking, schema markup
- Publish and update landing pages, course info, and blog content

Technical SEO & Website Management
- Conduct technical audits: Core Web Vitals, site speed, indexing, crawlability
- Handle structured data, XML sitemaps, robots.txt, redirects
- Independently manage the WordPress site: build/edit pages, install plugins, fix layout issues
- Set up and troubleshoot lead forms, popups, CTAs, and plugins
- No separate tech team; this is a hands-on technical role

Analytics & Reporting
- Setup and usage of GA4 is essential: implement events, goals, and user tracking
- Analyze user journeys, bounce rates, and conversions via GA4
- Use GA4, GSC, and GTM to inform landing page improvements and CRO
- Monitor organic performance using SEMrush, Ahrefs, and Screaming Frog
- Generate monthly performance reports with actionables

Requirements
- 2–4 years of experience in SEO and website management
- Strong WordPress skills plus basic HTML/CSS
- Proficiency in SEO tools (Ahrefs, SEMrush, Screaming Frog, GSC)
- Must be able to set up and use GA4 independently
- Experience using AI tools and prompt engineering for SEO at scale
- Ability to own all web and SEO functions without developer support
- Experience with education/lead-gen websites is preferred

The ideal candidate will analyze, review, and implement changes to websites so they are optimized for search engines, and will be able to implement actionable strategies that improve site visibility.

Responsibilities
- Review and analyze client sites for areas needing improvement
- Prepare detailed strategy reports
- Create and launch campaigns
- Improve clients’ rank in major search engines

Qualifications
- Bachelor’s degree in Information Technology or a related field
- 3+ years of technical experience
- Strong analytical skills
- Understanding of all search engines and their functions, as well as marketing
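The on-page checks this role describes (titles, meta descriptions) can be sketched with Python's standard `html.parser`; this is a rough illustration, not a production audit tool, and the 60/160-character thresholds are common rules of thumb rather than limits any search engine publishes.

```python
from html.parser import HTMLParser

class HeadAudit(HTMLParser):
    """Collect the <title> text and the meta description from an HTML page."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""
        self.meta_description = ""

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self.in_title = True
        elif tag == "meta" and attrs.get("name") == "description":
            self.meta_description = attrs.get("content", "")

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data

def audit(html, title_max=60, desc_max=160):
    # Thresholds are rough rules of thumb for SERP display, not hard limits.
    p = HeadAudit()
    p.feed(html)
    return {
        "title_ok": 0 < len(p.title) <= title_max,
        "description_ok": 0 < len(p.meta_description) <= desc_max,
    }

page = ('<html><head><title>MBA Admissions 2025</title>'
        '<meta name="description" content="Apply to the ASBS MBA programme.">'
        '</head></html>')
print(audit(page))
```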
Posted 3 days ago
5.0 - 7.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
We are looking for an experienced Supply Chain Manager with 5–7 years of experience, based in Gurgaon, to lead our Demand Planning and Sales & Operations Planning (S&OP) processes. Your role will involve managing forecasting techniques, optimizing inventory management, and using analytics to support decision-making. The ideal candidate should have expertise in supply chain operations, demand forecasting, S&OP, and leveraging analytics to improve supply chain effectiveness, along with strong skills in R, Python, and advanced analytics.

🔧 Key Responsibilities

Demand Planning & Forecasting:
- Build and maintain statistical forecasting models using R, Python, or relevant tools.
- Integrate external data (market trends, sales data, macroeconomic indicators) for robust forecasts.
- Collaborate with Sales, Marketing, and Finance to align on forecasted volumes and business assumptions.

S&OP Management:
- Drive monthly and quarterly S&OP cycles, ensuring cross-functional alignment on demand, inventory, and supply.
- Create simulation models to evaluate supply scenarios, lead times, and capacity constraints.

Advanced Analytics & Reporting:
- Apply machine learning and time-series techniques to optimize forecast accuracy and inventory performance.
- Automate dashboards and reports using Python, R, Power BI, or similar BI tools.
- Deliver insights on performance metrics (e.g., forecast accuracy, inventory turns, service levels).

Inventory Optimization:
- Develop algorithms or decision-support models to minimize excess and obsolete inventory.
- Balance cost, service levels, and working capital using predictive modeling.

🎯 Requirements
- Bachelor’s degree in Engineering, Statistics, Supply Chain, or a related field (Master’s preferred).
- 5–7 years of experience in supply chain roles, especially in demand forecasting and S&OP.
- Strong hands-on skills in R and/or Python for statistical analysis, data wrangling, and model development.
- Advanced knowledge of analytics and visualization tools (Excel, Power BI, Tableau, etc.).
- Experience with ERP and planning systems (SAP, Oracle, NetSuite, JDA, etc.).
- Familiarity with concepts like time-series modeling, regression, clustering, or simulation modeling.
- Strong business acumen and communication skills for cross-functional collaboration.
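One of the simplest statistical forecasting techniques the role calls for is exponential smoothing. A minimal pure-Python sketch follows; the demand numbers and smoothing factor are invented for the example, and real demand planning work would use dedicated libraries in R or Python.

```python
def simple_exponential_smoothing(series, alpha=0.3):
    """Return the smoothed demand level after each observation.

    The last level is the forecast for the next period; alpha controls
    how strongly recent observations outweigh history.
    """
    level = [series[0]]  # seed with the first observation
    for y in series[1:]:
        level.append(alpha * y + (1 - alpha) * level[-1])
    return level

demand = [100, 120, 110, 130, 125]  # made-up monthly demand
fc = simple_exponential_smoothing(demand, alpha=0.5)
print(fc)  # → [100, 110.0, 110.0, 120.0, 122.5]
```

A higher alpha reacts faster to demand shifts but passes more noise through; tuning it against forecast accuracy metrics is part of the job described above.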
Posted 3 days ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Data Analyst - Python
Job Classification: Full-Time
Work Location: Work-From-Office (Hyderabad)
Education: BE-BCS, B-Tech-IT, MCA or equivalent
Experience Level: 3 years (2+ years' data analysis experience)

Company Description
Team Geek Solutions (TGS) is a global technology partner based in Texas, specializing in AI and Generative AI solutions, custom software development, and talent optimization. TGS offers a range of services tailored to industries like BFSI, Telecom, FinTech, Healthcare, and Manufacturing. With expertise in AI/ML development, cloud migration, software development, and more, TGS helps businesses achieve operational efficiency and drive innovation.

Position Description
We are looking for a Data Analyst to analyze large amounts of raw information to find patterns that will help improve our products. We will rely on you to build data models to extract valuable business insights. In this role, you should be highly analytical, with a knack for analysis, math, and statistics. Your task is to gather and prepare data from multiple sources, run statistical analyses, and communicate your findings in a clear and objective way. Your goal will be to help our company analyze trends to make better decisions.

Qualifications/Skills Required
- 2+ years' experience in Python, with knowledge of packages such as pandas, NumPy, SciPy, scikit-learn, and Flask
- Proficiency in at least one data visualization tool, such as Matplotlib, Seaborn, or Plotly
- Experience with popular statistical and machine learning techniques, such as clustering, SVM, KNN, decision trees, etc.
- Experience with databases such as SQL and MongoDB
- Strong analytical skills, with the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy
- Knowledge of the Python libraries OpenCV and TensorFlow is a plus

Job Responsibilities/Essential Functions
- Identify, analyze, and interpret trends or patterns in complex data sets
- Explore and visualize data
- Use machine learning tools to select features, create and optimize classifiers
- Clearly communicate findings from the analysis and turn information into something actionable through reports, dashboards, and/or presentations

Skills: pandas, business insights, NumPy, Plotly, SciPy, Python, MongoDB, analytical skills, TensorFlow, statistics, machine learning, SQL, data, OpenCV, Seaborn, data visualization, Matplotlib, scikit-learn, Flask
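Of the techniques listed (clustering, SVM, KNN, decision trees), the nearest-neighbour idea is compact enough to sketch without any libraries. This is a 1-NN toy with made-up data; real work would use scikit-learn's `KNeighborsClassifier`.

```python
import math

def nearest_neighbor_predict(train, query):
    """train: list of (features, label) pairs.

    Returns the label of the training point closest to the query
    by Euclidean distance (k=1 nearest neighbour).
    """
    closest = min(train, key=lambda pair: math.dist(pair[0], query))
    return closest[1]

# Invented 2-D samples: two tight groups with distinct labels.
train = [((1.0, 1.0), "low"), ((1.2, 0.8), "low"),
         ((8.0, 9.0), "high"), ((9.1, 8.7), "high")]

print(nearest_neighbor_predict(train, (8.5, 9.2)))  # → high
```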
Posted 3 days ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
https://forms.office.com/r/JT9GG2968G — kindly fill out the form. Profiles will be considered based only on the responses in the form.

Summary
We are seeking a highly skilled and experienced DBA to join our expanding Information Technology team. In this role, you will help develop and design technology solutions that are scalable, relevant, and critical to our company’s success. You will join the team working on our new platform, built using MS SQL Server and MySQL Server. You will participate in all phases of the development lifecycle, implementation, maintenance, and support, and must have a solid skill set, a desire to continue to grow as a Database Administrator, and a team-player mentality.

Key Responsibilities
1. Manage production database servers, including security, deployment, maintenance, and performance monitoring.
2. Set up SQL Server replication, mirroring, and high availability as required across hybrid environments.
3. Design and implement new installations on Azure, AWS, and cloud hosting with no specific DB services.
4. Deploy and maintain on-premise installations of SQL Server on Linux and MySQL.
5. Ensure database security and protection against SQL injection, exploitation of intellectual property, etc.
6. Work with development teams, assisting with data storage and query design/optimization where required.
7. Participate in the design and implementation of essential applications.
8. Demonstrate expertise and add valuable input throughout the development lifecycle.
9. Help design and implement scalable, lasting technology solutions.
10. Review current systems, suggesting updates as required.
11. Gather requirements from internal and external stakeholders.
12. Document procedures to set up and maintain a highly available SQL Server database in Azure cloud, on-premise, and hybrid environments.
13. Test and debug new applications and updates.
14. Resolve reported issues and reply to queries in a timely manner.
15. Remain up to date on all current best practices, trends, and industry developments.
16. Identify potential challenges and bottlenecks in order to address them proactively.

Key Competencies/Skillsets
- SQL Server management in hybrid environments (on-premise and cloud, preferably Azure, AWS)
- MySQL backup, SQL Server backup, replication, clustering, and log shipping experience on Linux/Windows
- Setting up, managing, and maintaining SQL Server/MySQL on Linux
- Experience with database usage and management
- Experience implementing Azure Hyperscale databases
- Experience in the Financial Services / E-Commerce / Payments industry preferred
- Familiar with multi-tier, object-oriented, secure application design architecture
- Experience in cloud environments, preferably Microsoft Azure database service tiers
- Experience with PCI DSS a plus
- SQL development experience is a plus
- Linux experience is a plus
- Proficient in using issue tracking tools like Jira
- Proficient in using version control systems like Git, SVN, etc.
- Strong understanding of web-based applications and technologies
- Sense of ownership and pride in your performance and its impact on the company’s success
- Critical thinking and problem-solving skills
- Excellent communication skills and ability to communicate with clients via different modes of communication: email, phone, direct messaging, etc.

Preferred Education and Experience
1. Bachelor’s degree in Computer Science or a related field.
2. Minimum 3 years’ experience as a SQL Server DBA and MySQL DBA, including 2+ years of MySQL DBA experience with replication, InnoDB Cluster, upgrading, and patching.
3. Ubuntu Linux knowledge is preferred.
4. MCTS, MCITP, and/or MVP / Azure DBA / MySQL certifications a plus.
Posted 3 days ago
0 years
0 Lacs
India
Remote
AI and Machine Learning Intern
Company: INLIGHN TECH
Location: Remote (100% Virtual)
Duration: 3 Months
Stipend for Top Interns: ₹15,000
Certificate Provided | Letter of Recommendation | Full-Time Offer Based on Performance

About the Company:
INLIGHN TECH empowers students and fresh graduates with real-world experience through hands-on, project-driven internships. The AI and Machine Learning Internship is crafted to provide practical exposure to building intelligent systems, enabling interns to bridge theoretical knowledge with real-world applications.

Role Overview:
As an AI and Machine Learning Intern, you will work on projects involving data preprocessing, model development, and performance evaluation. This internship will strengthen your skills in algorithm design, model optimization, and deploying AI solutions to solve real-world problems.

Key Responsibilities:
- Collect, clean, and preprocess datasets for training machine learning models
- Implement machine learning algorithms for classification, regression, and clustering
- Develop deep learning models using frameworks like TensorFlow or PyTorch
- Evaluate model performance using metrics such as accuracy, precision, and recall
- Collaborate on AI-driven projects, such as chatbots, recommendation engines, or prediction systems
- Document code, methodologies, and results for reproducibility and knowledge sharing

Qualifications:
- Pursuing or recently completed a degree in Computer Science, Data Science, Artificial Intelligence, or a related field
- Strong foundation in Python and understanding of libraries such as scikit-learn, NumPy, Pandas, and Matplotlib
- Familiarity with machine learning concepts like supervised and unsupervised learning
- Experience or interest in deep learning frameworks (TensorFlow, Keras, PyTorch)
- Good problem-solving skills and a passion for AI innovation
- Eagerness to learn and contribute to real-world ML applications

Internship Benefits:
- Hands-on experience with real-world AI and ML projects
- Certificate of Internship upon successful completion
- Letter of Recommendation for top performers
- Build a strong portfolio of AI models and machine learning solutions
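The evaluation metrics named above (accuracy, precision, recall) reduce to simple counts over predictions. A minimal sketch, with invented labels; in practice these come from `sklearn.metrics`.

```python
def classification_metrics(y_true, y_pred, positive=1):
    """Compute accuracy, precision, and recall for a binary classifier."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return {
        "accuracy": correct / len(y_true),
        "precision": tp / (tp + fp) if tp + fp else 0.0,  # of predicted positives, how many were right
        "recall": tp / (tp + fn) if tp + fn else 0.0,     # of actual positives, how many were found
    }

y_true = [1, 0, 1, 1, 0, 1]  # made-up ground truth
y_pred = [1, 0, 0, 1, 1, 1]  # made-up model output
print(classification_metrics(y_true, y_pred))
```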
Posted 3 days ago
0 years
0 Lacs
India
Remote
Data Science Intern
Company: INLIGHN TECH
Location: Remote (100% Virtual)
Duration: 3 Months
Stipend for Top Interns: ₹15,000
Certificate Provided | Letter of Recommendation | Full-Time Offer Based on Performance

About the Company:
INLIGHN TECH empowers students and fresh graduates with real-world experience through hands-on, project-driven internships. The Data Science Internship is designed to equip you with the skills required to extract insights, build predictive models, and solve complex problems using data.

Role Overview:
As a Data Science Intern, you will work on real-world datasets to develop machine learning models, perform data wrangling, and generate actionable insights. This internship will help you strengthen your technical foundation in data science while working on projects that have a tangible business impact.

Key Responsibilities:
- Collect, clean, and preprocess data from various sources
- Apply statistical methods and machine learning techniques to extract insights
- Build and evaluate predictive models for classification, regression, or clustering tasks
- Visualize data using libraries like Matplotlib, Seaborn, or tools like Power BI
- Document findings and present results to stakeholders in a clear and concise manner
- Collaborate with team members on data-driven projects and innovations

Qualifications:
- Pursuing or recently completed a degree in Data Science, Computer Science, Mathematics, or a related field
- Proficiency in Python and data science libraries (NumPy, Pandas, scikit-learn, etc.)
- Understanding of statistical analysis and machine learning algorithms
- Familiarity with SQL and data visualization tools or libraries
- Strong analytical, problem-solving, and critical thinking skills
- Eagerness to learn and apply data science techniques to solve real-world problems

Internship Benefits:
- Hands-on experience with real datasets and end-to-end data science projects
- Certificate of Internship upon successful completion
- Letter of Recommendation for top performers
- Build a strong portfolio of data science projects and models
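The simplest predictive model for the regression tasks mentioned above is an ordinary least-squares line fit, which needs nothing beyond arithmetic. A sketch with invented data; real projects would use `numpy.polyfit` or scikit-learn.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b over paired samples."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope is covariance(x, y) divided by variance(x).
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x  # line passes through the means
    return a, b

xs = [1, 2, 3, 4]
ys = [3, 5, 7, 9]  # exactly y = 2x + 1, so the fit recovers those coefficients
a, b = fit_line(xs, ys)
print(a, b)
```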
Posted 3 days ago
3.0 years
0 Lacs
India
Remote
Job Description: AI Engineer and Gen AI Developer (SEO Scientist)

Position Title: AI Engineer and Gen AI Developer (SEO Scientist)
Location: Remote (India)
Employment Type: Full-Time
Reports To: Head of AI Development / CTO

Company Overview:
We are a cutting-edge technology company revolutionizing digital marketing through AI-driven solutions. Our mission is to empower businesses with advanced SEO strategies powered by artificial intelligence, leveraging agentic AI systems to optimize search engine performance, enhance content strategies, and drive organic traffic. We are seeking a talented AI Engineer and Gen AI Developer (SEO Scientist) to join our team and lead the development of our SEO-focused AI agentic platform.

Job Summary:
As an AI Engineer and Gen AI Developer (SEO Scientist), you will design, develop, and deploy AI-powered solutions tailored to search engine optimization (SEO). You will leverage your expertise in Generative AI, LangChain, and agentic AI systems to build intelligent SEO agents capable of automating keyword research, content optimization, link-building strategies, and performance analysis. This role requires a deep understanding of SEO principles, machine learning, natural language processing (NLP), and the ability to create scalable, innovative AI solutions in a remote work environment.

Key Responsibilities:

AI Agent Development:
- Design and develop AI agents using LangChain and other frameworks to automate SEO tasks such as keyword discovery, content generation, and on-page optimization.
- Implement agentic AI systems that can autonomously analyze search engine algorithms, competitor strategies, and user intent to deliver actionable SEO insights.
- Integrate generative AI models (e.g., LLMs like GPT, BERT, or custom models) to create high-quality, SEO-optimized content at scale.

SEO Strategy and Optimization:
- Collaborate with SEO specialists to understand search engine algorithms (e.g., Google, Bing) and translate them into AI-driven strategies.
- Develop algorithms to identify high-value keywords, optimize meta tags, and enhance content relevance for improved search rankings.
- Use AI to analyze website performance metrics (e.g., bounce rate, dwell time, click-through rates) and recommend data-driven optimizations.

Generative AI and NLP:
- Build and fine-tune generative AI models for content creation, including blog posts, product descriptions, and landing pages, ensuring alignment with SEO best practices.
- Leverage NLP techniques to perform semantic analysis, topic modeling, and intent detection to enhance content relevance and user engagement.
- Utilize LangChain to create workflows that combine LLMs with external tools (e.g., SERP APIs, Google Analytics) for real-time SEO insights.

Data Integration and Analysis:
- Integrate AI systems with SEO tools (e.g., Ahrefs, SEMrush, Moz) and web analytics platforms (e.g., Google Analytics, Search Console) to gather and process data.
- Develop data pipelines to collect, clean, and analyze large datasets related to user behavior, search trends, and website performance.
- Use machine learning to predict SEO trends and recommend proactive strategies.

Model Training and Optimization:
- Train and fine-tune machine learning models for tasks such as keyword clustering, link-building prioritization, and content gap analysis.
- Optimize AI models for performance, scalability, and cost-efficiency in cloud environments (e.g., AWS, GCP, Azure).
- Implement A/B testing and experimentation frameworks to validate AI-driven SEO strategies.

Collaboration and Innovation:
- Work closely with cross-functional teams, including SEO specialists, content creators, and product managers, to align AI solutions with business goals.
- Stay updated on the latest advancements in AI, NLP, and SEO to incorporate cutting-edge techniques into the platform.
- Contribute to the development of proprietary AI tools and frameworks for SEO automation.

Documentation and Reporting:
- Document AI model architectures, workflows, and SEO strategies for internal knowledge sharing and compliance.
- Provide regular reports on SEO performance metrics, AI model accuracy, and project milestones to stakeholders.
- Present findings and recommendations to non-technical teams in a clear and actionable manner.

Qualifications:

Education:
- Bachelor’s or Master’s degree in Computer Science, Data Science, Artificial Intelligence, or a related field.

Experience:
- 3+ years of experience in AI engineering, machine learning, or software development.
- 2+ years of experience with Generative AI, NLP, and frameworks like LangChain, Hugging Face, or TensorFlow.
- 1+ years of experience in SEO, digital marketing, or related fields, with a strong understanding of search engine algorithms.
- Proven experience building AI agents or automation workflows for real-world applications.

Technical Skills:
- Proficiency in Python, R, or similar programming languages.
- Expertise in LangChain for building AI workflows and agentic systems.
- Familiarity with LLMs (e.g., GPT, BERT, T5) and fine-tuning techniques.
- Experience with cloud platforms (AWS, GCP, Azure) and APIs (e.g., Google Search API, SERP APIs).
- Knowledge of SEO tools (e.g., Ahrefs, SEMrush, Moz, Screaming Frog) and analytics platforms (e.g., Google Analytics, Search Console).
- Understanding of machine learning frameworks (e.g., TensorFlow, PyTorch, scikit-learn).

Soft Skills:
- Strong problem-solving and analytical skills.
- Excellent communication skills to collaborate with remote teams and present complex ideas to non-technical stakeholders.
- Ability to work independently and manage multiple projects in a fast-paced, remote environment.

Preferred Qualifications:
- Experience with agentic AI frameworks and autonomous systems.
- Knowledge of web development (HTML, CSS, JavaScript) for on-page SEO optimization.
- Familiarity with data visualization tools (e.g., Tableau, Power BI) for reporting SEO performance.

What We Offer:
- Competitive salary and performance-based bonuses.
- Fully remote work with flexible hours.
- Access to cutting-edge AI tools and resources, including xAI’s Grok 3 (via grok.com or the X platform, subject to usage quotas).
- Opportunities for professional growth and contributions to innovative AI projects.
- Collaborative and inclusive team culture.
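Keyword clustering, one of the model-training tasks listed above, can be sketched in its crudest form as greedy grouping by token overlap (Jaccard similarity). The keywords and the 0.3 threshold are invented for the example; production systems would cluster on embeddings or SERP overlap instead.

```python
def jaccard(a, b):
    """Token-set Jaccard similarity between two keyword phrases."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

def cluster_keywords(keywords, threshold=0.3):
    """Greedy single-pass clustering: attach each keyword to the first
    cluster whose seed phrase overlaps enough, else start a new cluster."""
    clusters = []
    for kw in keywords:
        for cluster in clusters:
            if jaccard(kw, cluster[0]) >= threshold:
                cluster.append(kw)
                break
        else:
            clusters.append([kw])
    return clusters

kws = ["mba admission 2025", "mba admission process",
       "seo audit checklist", "technical seo audit"]
print(cluster_keywords(kws))
```

Greedy single-pass clustering is order-dependent, which is acceptable for a quick grouping pass but not for a stable taxonomy.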
Posted 3 days ago
6.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Position: Mainframe MQ Administrator
Location: Mumbai/Bangalore

Role Responsibilities
- Administer and maintain MQ middleware for production systems.
- Configure, monitor, and optimize queue managers.
- Ensure system security by implementing best practices.
- Provide support for MQ-related incidents and issues.
- Develop and enforce backup and disaster recovery strategies.
- Collaborate with cross-functional teams for application deployment.
- Troubleshoot performance issues and optimize message flow.
- Document system configurations and operational procedures.
- Conduct root cause analysis for MQ outages.
- Participate in on-call support as needed.
- Train staff on MQ processes and best practices.
- Keep up to date with the latest MQ updates and enhancements.
- Assist in the development of migration plans for new MQ versions.
- Provide regular reports on system performance and status.
- Support application developers in utilizing MQ functionality effectively.

Qualifications
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- Minimum 6 years of experience as a Mainframe MQ Administrator.
- Strong knowledge of IBM MQ and related technologies.
- Experience with high-availability systems and clustering.
- Familiarity with scripting languages for automation tasks.
- Proven ability to troubleshoot and resolve complex issues.
- Excellent organizational and project management skills.
- Strong verbal and written communication abilities.
- Ability to work independently and as part of a team.
- Knowledge of security best practices related to MQ.
- Understanding of network protocols and configurations.
- Ability to manage time and prioritize tasks effectively.
- Willingness to learn new technologies as needed.
- Certification in IBM MQ is a plus.
- Strong analytical and problem-solving skills.
- Previous experience in a financial or healthcare environment is desirable.

If you are a driven and skilled Mainframe MQ Administrator looking to take your career to the next level, we encourage you to apply for this exciting opportunity!

Skills: network protocols, troubleshooting, security best practices, mainframe, project management, team collaboration, IBM MQ, communication skills, clustering, performance tuning, scripting languages, MQ, high-availability systems, security protocols, organizational skills
Posted 3 days ago
3.0 - 10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Experience: 3 to 10 years

Required Qualifications: Data Engineering Skills
- 3–5 years of experience in data engineering, with hands-on experience in Snowflake and basic to intermediate proficiency in dbt.
- Capable of building and maintaining ELT pipelines using dbt and Snowflake, with guidance on architecture and best practices.
- Understanding of ELT principles and foundational knowledge of data modeling techniques (preferably Kimball/dimensional).
- Intermediate experience with SAP Data Services (SAP DS), including extracting, transforming, and integrating data from legacy systems.
- Proficient in SQL for data transformation and basic performance tuning in Snowflake (e.g., clustering, partitioning, materializations).
- Familiar with workflow orchestration tools like dbt Cloud, Airflow, or Control-M.
- Experience using Git for version control and exposure to CI/CD workflows in team environments.
- Exposure to cloud storage solutions such as Azure Data Lake, AWS S3, or GCS for ingestion and external staging in Snowflake.
- Working knowledge of Python for basic automation and data manipulation tasks.
- Understanding of Snowflake's role-based access control (RBAC), data security features, and general data privacy practices like GDPR.

Key Responsibilities
- Design and build robust ELT pipelines using dbt on Snowflake, including ingestion from relational databases, APIs, cloud storage, and flat files.
- Reverse-engineer and optimize SAP Data Services (SAP DS) jobs to support scalable migration to cloud-based data platforms.
- Implement layered data architectures (e.g., staging, intermediate, mart layers) to enable reliable and reusable data assets.
- Enhance dbt/Snowflake workflows through performance optimization techniques such as clustering, partitioning, query profiling, and efficient SQL design.
- Use orchestration tools like Airflow, dbt Cloud, and Control-M to schedule, monitor, and manage data workflows.
- Apply modular SQL practices, testing, documentation, and Git-based CI/CD workflows for version-controlled, maintainable code.
- Collaborate with data analysts, scientists, and architects to gather requirements, document solutions, and deliver validated datasets.
- Contribute to internal knowledge sharing through reusable dbt components and participate in Agile ceremonies to support consulting delivery.

Skills: workflow orchestration, Git, Airflow, SQL, GCS, ELT pipelines, Azure Data Lake, data modeling, CI/CD, dbt, cloud storage, Snowflake, data security, Python, SAP Data Services, data engineering, AWS S3
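The staging-layer idea described above (raw data in, typed and cleaned models out) can be sketched with Python's built-in `sqlite3` standing in for Snowflake so the example runs anywhere; the table names, columns, and sample rows are invented, and dbt would express the staging step as a SQL model file rather than inline strings.

```python
import sqlite3

# Raw rows as they might arrive from a source extract (messy casing and types).
raw_orders = [
    ("1001", " ACME Corp ", "2024-05-01", "250.00"),
    ("1002", "globex", "2024-05-02", "99.50"),
]

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE raw_orders (order_id TEXT, customer TEXT, order_date TEXT, amount TEXT)"
)
con.executemany("INSERT INTO raw_orders VALUES (?, ?, ?, ?)", raw_orders)

# Staging model: cast types and normalize text, one cleaned table per source table.
con.execute("""
    CREATE TABLE stg_orders AS
    SELECT CAST(order_id AS INTEGER) AS order_id,
           LOWER(TRIM(customer))     AS customer,
           order_date,
           CAST(amount AS REAL)      AS amount
    FROM raw_orders
""")

rows = con.execute("SELECT customer, amount FROM stg_orders ORDER BY order_id").fetchall()
print(rows)  # → [('acme corp', 250.0), ('globex', 99.5)]
```

Downstream intermediate and mart layers would then select from `stg_orders` rather than from the raw table, which is the layering discipline the posting describes.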
Posted 3 days ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description
In this role you'll be responsible for building machine-learning-based systems and conducting data analysis that improves the quality of our large geospatial data. You’ll develop NLP models to extract information, use outlier detection to identify anomalies, and apply data science methods to quantify the quality of our data. You will take part in the development, integration, productionisation, and deployment of the models at scale, which requires a good combination of data science and software development.

Responsibilities
- Development of machine learning models
- Building and maintaining software development solutions
- Provide insights by applying data science methods
- Take ownership of delivering features and improvements on time

Must-have Qualifications
- 5+ years of experience as a data scientist, preferably with knowledge of NLP
- Strong programming skills and extensive experience with Python
- Professional experience working with LLMs, transformers, and open-source models from Hugging Face
- Professional experience with machine learning and data science, such as classification, feature engineering, clustering, anomaly detection, and neural networks
- Knowledgeable in classic machine learning algorithms (SVM, Random Forest, Naive Bayes, KNN, etc.)
- Experience using deep learning libraries and platforms, such as PyTorch
- Experience with frameworks such as scikit-learn, NumPy, Pandas, Polars
- Excellent analytical and problem-solving skills
- Excellent oral and written communication skills

Extra Merit Qualifications
- Knowledge of at least one of the following: NLP, information retrieval, data mining
- Ability to do statistical modeling and build predictive models
- Programming skills and experience with Scala and/or Java
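The outlier-detection task mentioned in the description has a classic baseline: flag values whose z-score magnitude exceeds a threshold. A minimal sketch using only the standard library; the data (field lengths with one planted anomaly) and the threshold of 2.0 are invented for illustration.

```python
import statistics

def zscore_outliers(values, threshold=3.0):
    """Return the values whose distance from the mean exceeds
    `threshold` population standard deviations."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    return [v for v in values if abs(v - mean) / stdev > threshold]

# e.g. lengths of a text field, where one record is clearly malformed
lengths = [12, 11, 13, 12, 10, 11, 12, 95]
print(zscore_outliers(lengths, threshold=2.0))  # → [95]
```

The z-score baseline assumes roughly normal data and is itself distorted by large outliers; robust variants (median/MAD) or model-based detectors are the usual next step.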
Posted 3 days ago
4.0 - 6.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Job Title: Sr. Data Engineer
Location: Office-Based (Ahmedabad, India)
About Hitech
Hitech is a leading provider of Data, Engineering Services, and Business Process Solutions. With robust delivery centers in India and global sales offices in the USA, UK, and the Netherlands, we enable digital transformation for clients across industries including Manufacturing, Real Estate, and e-Commerce. Our Data Solutions practice integrates automation, digitalization, and outsourcing to deliver measurable business outcomes. We are expanding our engineering team and looking for an experienced Sr. Data Engineer to design scalable data pipelines, support ML model deployment, and enable insight-driven decisions.
Position Summary
We are seeking a Data Engineer / Lead Data Engineer with deep experience in data architecture, ETL pipelines, and advanced analytics support. This role is crucial for designing robust pipelines to process structured and unstructured data, integrate ML models, and ensure data reliability. The ideal candidate will be proficient in Python, R, SQL, and cloud-based tools, and possess hands-on experience in creating end-to-end data engineering solutions that support data science and analytics teams.
Key Responsibilities
- Design and optimize data pipelines to ingest, transform, and load data from diverse sources.
- Build programmatic ETL pipelines using SQL and related platforms.
- Understand complex data structures and perform data transformation effectively.
- Develop and support ML models such as Random Forest, SVM, Clustering, Regression, etc.
- Create and manage scalable, secure data warehouses and data lakes.
- Collaborate with data scientists to structure data for analysis and modeling.
- Define solution architecture for layered data stacks ensuring high data quality.
- Develop design artifacts including data flow diagrams, models, and functional documents.
- Work with technologies such as Python, R, SQL, MS Office, and SageMaker.
- Conduct data profiling, sampling, and testing to ensure reliability.
- Collaborate with business stakeholders to identify and address data use cases.
Qualifications & Experience
- 4 to 6 years of experience in data engineering, ETL development, or database administration.
- Bachelor's degree in Mathematics, Computer Science, or Engineering (B.Tech/B.E.).
- Postgraduate qualification in Data Science or related discipline preferred.
- Strong proficiency in Python, SQL, advanced MS Office tools, and R.
- Familiarity with ML concepts and integrating models into pipelines.
- Experience with NoSQL systems like MongoDB, Cassandra, or HBase.
- Knowledge of Snowflake, Databricks, and other cloud-based data tools.
- ETL tool experience and understanding of data integration best practices.
- Data modeling skills for relational and NoSQL databases.
- Knowledge of Hadoop, Spark, and scalable data processing frameworks.
- Experience with SciKit, TensorFlow, PyTorch, GPT, PySpark, etc.
- Ability to build web scrapers and collect data from APIs.
- Experience with Airflow or similar tools for pipeline automation.
- Strong SQL performance tuning skills in large-scale environments.
What We Offer
- Competitive compensation package based on skills and experience.
- Opportunity to work with international clients and contribute to high-impact data projects.
- Continuous learning and professional growth within a tech-forward organization.
- Collaborative and inclusive work environment.
If you're passionate about building data-driven infrastructure to fuel analytics and AI applications, we look forward to connecting with you.
Anand Soni
Hitech Digital Solutions
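The responsibilities above center on ETL pipelines that ingest, transform, and load data. As an illustrative sketch only (SQLite standing in for a real warehouse, with invented records), the extract-transform-load shape looks like:

```python
# Minimal ETL sketch: extract raw records, transform (type coercion,
# trimming, de-duplication), load into a queryable store.
import sqlite3

raw = [  # pretend this came from an API or a scraped file
    {"id": "1", "city": "Ahmedabad", "amount": "120.5"},
    {"id": "2", "city": "  Pune ", "amount": "80"},
    {"id": "2", "city": "Pune", "amount": "80"},    # duplicate id
    {"id": "3", "city": "Surat", "amount": "n/a"},  # unparseable value
]

def transform(rows):
    seen, out = set(), []
    for r in rows:
        try:
            rec = (int(r["id"]), r["city"].strip(), float(r["amount"]))
        except ValueError:
            continue  # a real pipeline would route rejects to a dead-letter queue
        if rec[0] not in seen:
            seen.add(rec[0])
            out.append(rec)
    return out

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, city TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", transform(raw))
total = conn.execute("SELECT SUM(amount) FROM sales").fetchone()[0]
print(total)
```

Production pipelines add scheduling (e.g. Airflow, mentioned in the qualifications), incremental loads, and data-quality checks on top of this skeleton.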
Posted 4 days ago
0.0 years
0 Lacs
Thiruvananthapuram, Kerala
On-site
Data Science and AI Developer
**Job Description:**
We are seeking a highly skilled and motivated Data Science and AI Developer to join our dynamic team. As a Data Science and AI Developer, you will be responsible for leveraging cutting-edge technologies to develop innovative solutions that drive business insights and enhance decision-making processes.
**Key Responsibilities:**
1. Develop and deploy machine learning models for predictive analytics, classification, clustering, and anomaly detection.
2. Design and implement algorithms for data mining, pattern recognition, and natural language processing.
3. Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
4. Utilize advanced statistical techniques to analyze complex datasets and extract actionable insights.
5. Implement scalable data pipelines for data ingestion, preprocessing, feature engineering, and model training.
6. Stay updated with the latest advancements in data science, machine learning, and artificial intelligence research.
7. Optimize model performance and scalability through experimentation and iteration.
8. Communicate findings and results to stakeholders through reports, presentations, and visualizations.
9. Ensure compliance with data privacy regulations and best practices in data handling and security.
10. Mentor junior team members and provide technical guidance and support.
**Requirements:**
1. Bachelor's or Master's degree in Computer Science, Data Science, Statistics, or a related field.
2. Proven experience in developing and deploying machine learning models in production environments.
3. Proficiency in programming languages such as Python, R, or Scala, with strong software engineering skills.
4. Hands-on experience with machine learning libraries/frameworks such as TensorFlow, PyTorch, Scikit-learn, or Spark MLlib.
5. Solid understanding of data structures, algorithms, and computer science fundamentals.
6. Excellent problem-solving skills and the ability to think creatively to overcome challenges.
7. Strong communication and interpersonal skills, with the ability to work effectively in a collaborative team environment.
8. Certification in Data Science, Machine Learning, or Artificial Intelligence (e.g., Coursera, edX, Udacity, etc.).
9. Experience with cloud platforms such as AWS, Azure, or Google Cloud is a plus.
10. Familiarity with big data technologies (e.g., Hadoop, Spark, Kafka) is an advantage.
Data Manipulation and Analysis: NumPy, Pandas
Data Visualization: Matplotlib, Seaborn, Power BI
Machine Learning Libraries: Scikit-learn, TensorFlow, Keras
Statistical Analysis: SciPy
Web Scraping: Scrapy
IDE: PyCharm, Google Colab
HTML/CSS/JavaScript/React JS: Proficiency in these core web development technologies is a must.
Python Django Expertise: In-depth knowledge of e-commerce functionalities or deep Python Django knowledge.
Theming: Proven experience in designing and implementing custom themes for Python websites.
Responsive Design: Strong understanding of responsive design principles and the ability to create visually appealing and user-friendly interfaces for various devices.
Problem Solving: Excellent problem-solving skills with the ability to troubleshoot and resolve issues independently.
Collaboration: Ability to work closely with cross-functional teams, including marketing and design, to bring creative visions to life.
Interns must know how to connect the front end with data science components, and vice versa.
**Benefits:**
- Competitive salary package
- Flexible working hours
- Opportunities for career growth and professional development
- Dynamic and innovative work environment
Job Type: Full-time
Pay: ₹8,000.00 - ₹12,000.00 per month
Schedule: Day shift
Ability to commute/relocate: Thiruvananthapuram, Kerala: Reliably commute or planning to relocate before starting work (Preferred)
Work Location: In person
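The responsibilities above mention pipelines covering preprocessing, feature engineering, and model training; a minimal scikit-learn Pipeline (on the built-in Iris dataset, purely for illustration) shows how those steps chain together:

```python
# Hedged sketch: a preprocessing-plus-model pipeline with scikit-learn.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),            # feature-engineering step
    ("clf", LogisticRegression(max_iter=500)),
])
pipe.fit(X_tr, y_tr)                        # fits scaler and model together
acc = pipe.score(X_te, y_te)
print(f"held-out accuracy: {acc:.2f}")
```

Bundling the scaler and model in one Pipeline keeps preprocessing consistent between training and serving, which matters for the production deployment the role describes.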
Posted 4 days ago
0.0 - 3.0 years
0 Lacs
Naranpura, Ahmedabad, Gujarat
On-site
As Machine Learning Engineer, you'll be applying your expertise to help us develop a world-leading capability in this exciting and challenging domain. You will be responsible for contributing to the design, development, deployment, testing, maintenance and enhancement of ML software solutions.
Primary responsibilities:
1. Applying machine learning, deep learning, and signal processing on large datasets (audio, sensors, images, videos, text) to develop models.
2. Architecting large scale data analytics / modeling systems.
3. Designing and programming machine learning methods and integrating them into our ML framework / pipeline.
4. Working closely with data scientists/analysts to collaborate and support the development of ML data pipelines, platforms and infrastructure.
5. Evaluating and validating the analysis with statistical methods, and presenting it in a lucid form to people not familiar with the domain of data science / computer science.
6. Creating microservices and APIs for serving ML models and ML services.
7. Evaluating new machine learning methods and adopting them for our purposes.
8. Feature engineering to add new features that improve model performance.
Required skills:
1. Background and knowledge of recent advances in machine learning, deep learning, natural language processing, and/or image/signal/video processing, with 3+ years of professional work experience on real-world applications.
2. Strong programming background, e.g. Python, PyTorch, MATLAB, C/C++, Java, and knowledge of software engineering concepts (OOP, design patterns).
3. Knowledge of machine learning libraries: TensorFlow, Keras, scikit-learn, PyTorch.
4. Excellent mathematical skills and background, e.g. accuracy, significance tests, visualization, advanced probability concepts.
5. Architecting and implementing end-to-end solutions for accelerating experimentation and model building.
6. Working knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.).
7. Ability to perform both independent and collaborative research.
8. Excellent written and spoken communication skills.
Preferred qualification and experience:
B.E./B.Tech/B.S. candidates with 3+ years of experience in the aforementioned fields will be considered. M.E./M.S./M.Tech/PhD, preferably in fields related to Computer Science, with experience in machine learning, image and signal processing, or statistics preferred.
Job Types: Full-time, Permanent
Pay: ₹668,717.16 - ₹1,944,863.46 per year
Benefits: Flexible schedule, Paid sick time, Paid time off
Schedule: Day shift
Supplemental Pay: Yearly bonus
Ability to commute/relocate: Naranpura, Ahmedabad, Gujarat: Reliably commute or planning to relocate before starting work (Preferred)
Experience: AI/ML: 3 years (Required)
Work Location: In person
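One of the techniques named in the skills list above is clustering; a minimal k-means sketch on synthetic blobs (all data invented for illustration) looks like:

```python
# Hedged sketch: k-means clustering with scikit-learn on synthetic data.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# 300 points drawn from 3 well-separated groups
X, _ = make_blobs(n_samples=300, centers=3, random_state=7)

# The cluster count is a modelling choice; here we match the known 3 groups
km = KMeans(n_clusters=3, n_init=10, random_state=7).fit(X)
print("inertia (within-cluster sum of squares):", round(km.inertia_, 1))
```

In practice the cluster count would be chosen with a criterion such as the elbow method or silhouette score rather than known in advance.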
Posted 4 days ago
0.0 - 2.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Category Data Science Experience Sr. Associate Primary Address Bangalore, Karnataka Overview Voyager (94001), India, Bangalore, Karnataka Senior Associate- Data Science About the Company At Capital One data is at the center of everything we do. As a startup, we disrupted the credit card industry by individually personalizing every credit card offer using statistical modeling and the relational database, cutting edge technology in 1988! Fast-forward a few years, and this little innovation and our passion for data has skyrocketed us to a Fortune 200 company and a leader in the world of data-driven decision-making. Today, we are a high-tech company, a scientific laboratory, and a nationally recognized brand all in one impacting over 65 million customer accounts. Still founder-led by Chairman and CEO Richard Fairbank, we dare to dream, disrupt, and deliver a better way for our customers, the financial industry and for each other. As a data scientist at Capital One, you’ll be part of a team that’s leading the next wave of disruption at a whole new scale, using the latest in computing and machine learning technologies and operating across billions of customer records to unlock the big opportunities that help everyday people save money, time and agony in their financial lives. Team Description At DataLabs Capital One India, we are at the cutting edge of solving some of the fundamental business problems using advanced data methodologies, statistics and machine learning algorithms. In the DataLabs Model Risk Management team, we defend the company against model failures and find new ways of making better decisions with models. We use our statistics, software engineering, and business expertise to drive the best outcomes in both Risk Management and the Enterprise. We also understand that we can’t prepare for tomorrow by focusing on today, so we invest in the future: investing in new skills, building better tools, and maintaining a network of trusted partners. 
We learn from past mistakes and develop increasingly powerful techniques to avoid their repetition. Role Description In this role you will: Partner cross-functionally with data scientists, quantitative analysts, business analysts, software engineers, and project managers to manage the risk and uncertainty inherent in statistical and machine learning models in order to lead Capital One to the best decisions, not just avoid the worst ones. Build and validate statistical and machine learning models through all phases of development, from design through training, evaluation and implementation Develop new ways of identifying weak spots in model predictions earlier and with more confidence than the best available methods Assess, challenge, and at times defend state-of-the-art decision-making systems to internal and regulatory partners Leverage a broad stack of technologies — Python, R, Conda, AWS, and more — to reveal the insights hidden within huge volumes of data Build upon your existing machine learning and statistical toolset - both by learning new technologies and by building custom software tools for data exploration, model performance evaluation, and more Communicate technical subject matter clearly and concisely to individuals from various backgrounds both verbally and through written communication; prepare presentations of complex technical concepts and research results to non-specialist audiences and senior management Flex your interpersonal skills to translate the complexity of your work into tangible business goals, and challenge model developers to advance their modeling, data, and analytic capabilities The ideal candidate is: Inquisitive. You thrive on bringing definition to big, undefined problems. You love asking questions and pushing hard to find answers. You’re not afraid to share a new idea. Technical. You’re comfortable with open-source languages and are passionate about developing further. 
You have hands-on experience developing data science solutions using open-source tools and cloud computing platforms. Statistically-minded. You've built models, validated them, and backtested them. You know how to interpret a confusion matrix or a ROC curve. You have experience with clustering, classification, time series, and deep learning. Innovative. You continually research and evaluate emerging technologies. You stay current on published state-of-the-art methods, technologies, and applications and seek out opportunities to apply them. Basic Qualifications: Degree in statistics, math, engineering, economics, econometrics, financial engineering, finance, or operations research with a quantitative emphasis preferred At least 2 years of relevant work experience Experience in Python or R Preferred Skills: Proficiency in key econometric and statistical techniques (such as predictive modeling, logistic regression, panel data models, decision trees, machine learning methods) At least 2 years of experience in model development or validation At least 2 years of experience in R or Python for large scale data analysis At least 2 years of experience with relational databases and SQL Strong analytical skills with high attention to detail and accuracy Excellent written and verbal communication skills No agencies please. Capital One is an equal opportunity employer (EOE, including disability/vet) committed to non-discrimination in compliance with applicable federal, state, and local laws. Capital One promotes a drug-free workplace. 
Capital One will consider for employment qualified applicants with a criminal history in a manner consistent with the requirements of applicable laws regarding criminal background inquiries, including, to the extent applicable, Article 23-A of the New York Correction Law; San Francisco, California Police Code Article 49, Sections 4901-4920; New York City’s Fair Chance Act; Philadelphia’s Fair Criminal Records Screening Act; and other applicable federal, state, and local laws and regulations regarding criminal background inquiries. If you have visited our website in search of information on employment opportunities or to apply for a position, and you require an accommodation, please contact Capital One Recruiting at 1-800-304-9102 or via email at RecruitingAccommodation@capitalone.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodations. For technical support or questions about Capital One's recruiting process, please send an email to Careers@capitalone.com Capital One does not provide, endorse nor guarantee and is not liable for third-party products, services, educational tools or other information available through this site. Capital One Financial is made up of several different entities. Please note that any position posted in Canada is for Capital One Canada, any position posted in the United Kingdom is for Capital One Europe and any position posted in the Philippines is for Capital One Philippines Service Corp. (COPSSC). How We Hire We take finding great coworkers pretty seriously. 
Step 1: Apply. It only takes a few minutes to complete our application and assessment.
Step 2: Screen and Schedule. If your application is a good match you'll hear from one of our recruiters to set up a screening interview.
Step 3: Interview(s). Now's your chance to learn about the job, show us who you are, share why you would be a great addition to the team and determine if Capital One is the place for you.
Step 4: Decision. The team will discuss; if it's a good fit for us and you, we'll make it official!
How to Pick the Perfect Career Opportunity
Overwhelmed by a tough career choice? Read these tips from Devon Rollins, Senior Director of Cyber Intelligence, to help you accept the right offer with confidence.
Your wellbeing is our priority
Our benefits and total compensation package is designed for the whole person, caring for both you and your family.
Healthy Body, Healthy Mind: You have options and we have the tools to help you decide which health plans best fit your needs.
Save Money, Make Money: Secure your present, plan for your future and reduce expenses along the way.
Time, Family and Advice: Options for your time, opportunities for your family, and advice along the way. It's time to BeWell.
Career Journey
Here's how the team fits together. We're big on growth and knowing who and how coworkers can best support you.
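The Capital One role above expects comfort interpreting a confusion matrix and a ROC curve; this self-contained sketch on synthetic data (not the company's stack or data) computes both with scikit-learn:

```python
# Hedged sketch: confusion matrix and ROC AUC for a binary classifier.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
clf = LogisticRegression(max_iter=500).fit(X_tr, y_tr)

# Rows are actual classes, columns are predicted classes
cm = confusion_matrix(y_te, clf.predict(X_te))
# ROC AUC summarizes ranking quality of the predicted probabilities
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(cm)
print(f"ROC AUC: {auc:.2f}")
```

The diagonal of the matrix counts correct predictions; an AUC of 0.5 means random ranking and 1.0 means perfect separation.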
Posted 4 days ago
0.0 - 6.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Category: Software Development/ Engineering Main location: India, Karnataka, Bangalore Position ID: J0425-0003 Employment Type: Full Time Position Description: At CGI, we’re a team of builders. We call our employees members because all who join CGI are building their own company - one that has grown to 72,000 professionals located in 40 countries. Founded in 1976, CGI is a leading IT and business process services firm committed to helping clients succeed. We have the global resources, expertise, stability and dedicated professionals needed to achieve results for our clients - and for our members. Come grow with us. Learn more at www.cgi.com. This is a great opportunity to join a winning team. CGI offers a competitive compensation package with opportunities for growth and professional development. Benefits for full-time, permanent members start on the first day of employment and include a paid time-off program and profit participation and stock purchase plans. We wish to thank all applicants for their interest and effort in applying for this position, however, only candidates selected for interviews will be contacted. No unsolicited agency referrals please. 
Job Title: Senior Software Engineer
Position: SAP Basis Developer
Experience: 6 years of experience
Category: Software Development
Work location: India, Karnataka, Bangalore
Position ID: J0325-1580
Employment Type: Full Time / Permanent
Qualification: Bachelor of Engineering in Computer Science or Master of Engineering in Computer Science
As an SAP Basis Developer you are responsible for the administration, configuration, maintenance and performance optimization of SAP landscapes. You ensure smooth operation of SAP applications by managing system architecture, security, databases, and integrations.
Must-Have Skills:
- Install, configure, and upgrade SAP systems (S/4HANA, ECC, BW, PI, CRM, etc.).
- Monitor system performance, logs, and job scheduling.
- Manage SAP instances, clients, and landscapes (DEV, QA, PROD).
- Handle SAP patch and kernel upgrades.
- Administer SAP-supported databases (HANA, Oracle, SQL Server, DB2).
- Perform database tuning, indexing, and optimization.
- Manage backups, restores, and database refreshes.
- Implement data archiving and cleanup strategies.
- Create and manage SAP user accounts, roles, and authorizations.
- Ensure security compliance (SOX, GDPR, internal audit policies).
- Configure Single Sign-On (SSO) and LDAP authentication.
- Monitor and troubleshoot authorization issues.
- Manage SAP Transport Requests between different system landscapes.
- Troubleshoot transport failures and inconsistencies.
- Work with SAP Solution Manager (ChaRM) for controlled change management.
- Optimize work processes, memory, and buffer settings.
- Use ST22, ST02, ST04, ST06 for system analysis and issue resolution.
- Perform runtime analysis and background job monitoring.
- Apply security patches and kernel upgrades.
- Monitor and secure RFC connections and SAP Gateway.
- Implement SAP security best practices and compliance measures.
- Configure SAP failover, load balancing, and clustering.
- Implement High Availability (HA) and Disaster Recovery (DR) strategies.
- Ensure business continuity with failover testing and monitoring.
- Manage SAP Web Dispatcher, SAP Gateway, and Fiori Launchpad.
- Support SAP Cloud Platform (BTP, CPI, HANA Cloud).
- Work with SAP hybrid deployments (on-premise & cloud).
Good-to-Have Skills:
- DevOps & CI/CD – exposure to Jenkins, Git, and automated deployment pipelines for SAP landscapes.
- SAP HANA Administration – HANA DB optimization, backup, recovery, and performance tuning.
- SAP Fiori & UI5 – understanding of SAP Fiori architecture, SAP Gateway, and OData services.
- Knowledge in Agile
Skills: English, Client Management, Engineer
What you can expect from us: Together, as owners, let’s turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you’ll reach your full potential because… You are invited to be an owner from day 1 as we work together to bring our Dream to life. That’s why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company’s strategy and direction. Your work creates value. You’ll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You’ll shape your career by joining a company built to grow and last. You’ll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team, one of the largest IT and business consulting services firms in the world.
Posted 4 days ago
3.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Level AI was founded in 2019 and is a Series C startup headquartered in Mountain View, California. Level AI revolutionizes customer engagement by transforming contact centers into strategic assets. Our AI-native platform leverages advanced technologies such as Large Language Models to extract deep insights from customer interactions. By providing actionable intelligence, Level AI empowers organizations to enhance customer experience and drive growth. Consistently updated with the latest AI innovations, Level AI stands as the most adaptive and forward-thinking solution in the industry. Empowering contact center stakeholders with real-time insights, our tech facilitates data-driven decision-making for contact centers, enhancing service levels and agent performance. As a vital team member, you will work with cutting-edge technologies and play a high-impact role in shaping the future of AI-driven enterprise applications. You will work directly with people who've worked at Amazon, Facebook, Google, and other leading technology companies. With Level AI, you will get to have fun, learn new things, and grow along with us. Ready to redefine possibilities? Join us! We'd love to explore more about you if you have: Qualification: B.E/B.Tech/M.E/M.Tech/PhD from tier 1 engineering institutes, with relevant work experience at a top technology company in computer science or mathematics-related fields, and 3-5 years of experience in machine learning and NLP. Knowledge and practical experience in solving NLP problems in areas such as text classification, entity tagging, information retrieval, question-answering, natural language generation, clustering, etc. 3+ years of experience working with LLMs in large-scale environments. 
Expert knowledge of machine learning concepts and methods, especially those related to NLP, Generative AI, and working with LLMs
Knowledge and hands-on experience with Transformer-based Language Models like BERT, DeBERTa, Flan-T5, Mistral, Llama, etc.
Deep familiarity with the internals of at least a few machine learning algorithms and concepts
Experience with deep learning frameworks like PyTorch and common machine learning libraries like scikit-learn, NumPy, Pandas, NLTK, etc.
Experience with ML model deployments using REST APIs, Docker, Kubernetes, etc.
Knowledge of cloud platforms (AWS/Azure/GCP) and their machine learning services is desirable
Knowledge of basic data structures and algorithms
Knowledge of real-time streaming tools/architectures like Kafka, Pub/Sub is a plus
Your role at Level AI includes but is not limited to:
Big picture: understand customers’ needs, innovate and use cutting edge Deep Learning techniques to build data-driven solutions
Work on NLP problems across areas such as text classification, entity extraction, summarization, generative AI, and others
Collaborate with cross-functional teams to integrate/upgrade AI solutions into the company’s products and services
Optimize existing deep learning models for performance, scalability, and efficiency
Build, deploy, and own scalable production NLP pipelines
Build post-deployment monitoring and continual learning capabilities; propose suitable evaluation metrics and establish benchmarks
Keep abreast of SOTA techniques in your area and exchange knowledge with colleagues
Desire to learn, implement and work with the latest emerging model architectures, training and inference techniques, data curation pipelines, etc.
To learn more visit: https://thelevel.ai/
Funding: https://www.crunchbase.com/organization/level-ai
LinkedIn: https://www.linkedin.com/company/level-ai/
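Text classification is the first NLP area the role lists. Fine-tuning a Transformer needs model weights and a GPU, so this sketch shows the same task with a classical TF-IDF baseline instead; the tiny corpus and labels are entirely invented, and in practice a HuggingFace model would replace the baseline:

```python
# Hedged sketch: text classification with a TF-IDF + logistic regression
# baseline, standing in for a fine-tuned Transformer.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["refund my order", "cancel subscription", "great service thanks",
         "very happy with support", "charge me twice refund", "love the product"]
labels = ["complaint", "complaint", "praise",
          "praise", "complaint", "praise"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(texts, labels)
pred = clf.predict(["please refund this charge"])[0]
print(pred)
```

A contact-center pipeline like the one described would add intent taxonomies, evaluation benchmarks, and post-deployment monitoring around a model like this.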
Posted 4 days ago
8.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
We are looking for candidates with 8+ years of experience for this role.
Job Location: Technopark, Trivandrum
Experience: 8+ years of experience in Microsoft SQL Server administration
Primary Skills: Strong experience in Microsoft SQL Server administration
Qualifications: Bachelor's degree in computer science, software engineering or a related field. Microsoft SQL certifications (MTA Database, MCSA: SQL Server, MCSE: Data Management and Analytics) will be an advantage.
Secondary Skills:
- Experience in MySQL, PostgreSQL, and Oracle database administration.
- Exposure to Data Lake, Hadoop, and Azure technologies.
- Exposure to DevOps or ITIL.
Main Duties/Responsibilities:
- Optimize database queries to ensure fast and efficient data retrieval, particularly for complex or high-volume operations.
- Design and implement effective indexing strategies to reduce query execution times and improve overall database performance.
- Monitor and profile slow or inefficient queries and recommend best practices for rewriting or re-architecting queries.
- Continuously analyze execution plans for SQL queries to identify bottlenecks and optimize them.
- Database Maintenance: Schedule and execute regular maintenance tasks, including backups, consistency checks, and index rebuilding.
- Health Monitoring: Implement automated monitoring systems to track database performance, availability, and critical parameters such as CPU usage, memory, disk I/O, and replication status.
- Proactive Issue Resolution: Diagnose and resolve database issues (e.g., locking, deadlocks, data corruption) proactively, before they impact users or operations.
- High Availability: Implement and manage database clustering, replication, and failover strategies to ensure high availability and disaster recovery (e.g., using tools like SQL Server Always On, Oracle RAC, MySQL Group Replication).
(ref:hirist.tech)
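The duties above center on indexing strategies and execution-plan analysis. The specifics are engine-dependent (this role targets SQL Server), but the core effect is portable; the following stdlib-SQLite sketch, offered only as an illustration, shows a query plan switching from a full scan to an index lookup after an index is created:

```python
# Hedged sketch: observing the effect of an index on a query plan.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(i, i % 1000, float(i)) for i in range(10_000)])

query = "SELECT total FROM orders WHERE customer_id = ?"
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall()

conn.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall()

print(plan_before[-1][-1])  # typically reports a full table scan
print(plan_after[-1][-1])   # typically reports a search using the new index
```

On SQL Server the equivalent investigation would use actual execution plans and DMVs, but the trade-off is the same: the index turns a scan proportional to table size into a targeted lookup, at the cost of extra writes and storage.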
Posted 4 days ago
7.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Skills: T-SQL, SQL, SSIS, SSRS, High Availability (HA), Disaster Recovery, ETL

Greetings from Colan Infotech!!

Designation: SQL DBA
Experience: 7+ years
Job Location: Chennai
Notice Period: Immediate to 15 days

Key Responsibilities
- Manage and maintain high-performance SQL Server databases supporting critical capital markets applications
- Perform backup, recovery, high availability (HA), and disaster recovery (DR) planning and implementation
- Optimize SQL queries, indexes, and database performance for large datasets typical of trading and market data systems
- Ensure data integrity and security in line with regulatory and compliance requirements
- Work closely with application development and infrastructure teams to support database integration and deployment
- Monitor database health, generate reports, and provide proactive solutions to potential issues
- Lead database upgrade and migration projects
- Support real-time data flows and batch processes used in trading, settlements, and market data analysis
- Implement and maintain replication, clustering, log shipping, and Always On availability groups

Required Skills & Qualifications
- 7+ years of experience as an MS SQL DBA
- Strong knowledge of SQL Server 2016/2019/2022, including internals, query tuning, and HA/DR features
- Experience working in capital markets, with an understanding of trading systems, order management, or market data
- Solid understanding of T-SQL, performance tuning, and execution plans
- Familiarity with financial data handling, compliance (e.g., MiFID, FINRA), and low-latency data operations
- Experience with automation and scripting using PowerShell
- Strong troubleshooting and problem-solving skills
- Knowledge of data warehousing, ETL processes, and reporting tools (SSIS, SSRS)
- Ability to work in fast-paced, high-pressure financial environments

Preferred Qualifications
- Experience with cloud-based SQL solutions (Azure SQL, AWS RDS, etc.)
- Exposure to DevOps practices and CI/CD for databases
- Certification in Microsoft SQL Server (e.g., MCSA, MCSE)

Interested candidates, send your updated resume to kumudha.r@colanonine.com
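The backup-and-recovery duty listed above can be sketched in miniature with Python's built-in sqlite3 online-backup API. This is a stand-in only (a SQL Server DBA would use BACKUP DATABASE/RESTORE and the HA features the posting names); the table and data are invented:

```python
import sqlite3

# Hedged sketch of the backup/recovery duty above, using sqlite3's online
# backup API as a stand-in for SQL Server's BACKUP/RESTORE machinery.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
src.execute("INSERT INTO orders (status) VALUES ('FILLED')")
src.commit()

dst = sqlite3.connect(":memory:")
src.backup(dst)  # consistent online copy, analogous to a full backup

# "Restoring" here is simply reading from the copy
restored = dst.execute("SELECT status FROM orders").fetchone()[0]
```

The point of the sketch is the workflow, not the engine: take a consistent copy while the source stays live, then verify the copy is readable, which mirrors the backup-validation habit the role requires.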
Posted 4 days ago
4.0 - 6.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Location: Ahmedabad
Experience: 4-6 years
Title: Data Scientist

Job Description

Primary Responsibilities:
- Analyze large and complex datasets to identify trends, patterns, and insights.
- Develop and implement machine learning models for prediction, classification, and clustering tasks using Python libraries like scikit-learn, TensorFlow, or PyTorch.
- Perform statistical analysis and hypothesis testing to validate findings and draw meaningful conclusions.
- Design and implement data visualization dashboards and reports using Python libraries like Matplotlib, Seaborn, or Plotly to communicate insights effectively.
- Collaborate with cross-functional teams to understand business requirements and translate them into data science solutions.
- Build and deploy scalable data pipelines using Python and related tools.
- Stay up to date with the latest advancements in data science, machine learning, and Python libraries.
- Communicate complex data insights and findings to both technical and non-technical audiences.

What You'll Bring
- Proven experience as a Data Scientist or in a similar role.
- Strong programming skills in Python, with experience in relevant data science libraries (e.g., Pandas, NumPy, scikit-learn).
- Solid understanding of statistical concepts, machine learning algorithms, and data modeling techniques.
- Experience with data visualization using Python libraries (e.g., Matplotlib, Seaborn, Plotly).
- Ability to work with large datasets and perform data cleaning, preprocessing, and feature engineering.
- Strong problem-solving and analytical skills.
- Excellent communication and presentation abilities.
- Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and their data science services.
- Experience with deep learning frameworks (e.g., TensorFlow, PyTorch).
- Knowledge of database technologies (SQL and NoSQL).
- Experience with deploying machine learning models into production.
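As a toy illustration of the clustering task mentioned above, here is a one-dimensional k-means written against the standard library only. Real work would use scikit-learn's KMeans as the posting suggests; the data and function name are invented for the sketch:

```python
import random
from statistics import fmean

# Minimal 1-D k-means sketch (illustration only; production code would use
# scikit-learn's KMeans). Data and names are invented.
def kmeans_1d(points, k=2, iters=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest center
            clusters[min(range(k), key=lambda i: abs(p - centers[i]))].append(p)
        # move each center to the mean of its cluster (keep it if the cluster is empty)
        centers = [fmean(c) if c else centers[i] for i, c in enumerate(clusters)]
    return sorted(centers)

data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]
centers = kmeans_1d(data)  # two centers, one near each group
```

With two well-separated groups, the centers settle near the group means regardless of which points seed them, which is the assign-then-update loop that library implementations generalize to many dimensions.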
Posted 4 days ago
3.0 - 6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Designation: Data Scientist
Location: Baner, Pune

About The Role

At Haber, we are redefining how industries operate by blending automation with intelligent decision-making. As a Data Scientist, you will play a pivotal role in designing machine learning systems that power our suite of innovative products, including Elixa, Kaiznn, Mount Fuji, and upcoming platforms like our Fibre Morphology solution. You'll work extensively with time-series data, sensor feeds, and computer vision pipelines to uncover actionable insights in real time. This is a high-impact, high-ownership role where your work will directly influence operational efficiency and sustainability across complex industrial environments. We value independent thinkers who are not just technical experts but also strategic problem-solvers.

Key Responsibilities
- Lead the full machine learning system development lifecycle: data collection, cleaning, feature engineering, modeling, deployment, and evaluation
- Work with time-series sensor data and computer vision systems to build predictive and anomaly detection models
- Design and develop scalable recommendation platforms for real-time industrial decision-making
- Apply machine learning, data mining, and statistical techniques (e.g., regression, clustering, collaborative filtering, PCA) to a wide range of problems
- Design novel approaches to large-scale data analysis, including semi-structured and unstructured data
- Collaborate with engineering, QA, product, and operations teams to deliver end-to-end solutions
- Contribute to and stay up to date with research in areas such as machine learning, signal processing, and domain-specific optimization
- Make independent technical decisions and guide strategic direction based on data insights
- Mentor junior team members and foster a collaborative learning culture

Qualifications
- Master's/Bachelor's in Engineering, Mathematics, or allied fields
- 3-6 years of experience in Data Science and ML
- Strong programming skills in Python or R
- Proven experience working with time-series data and real-time data pipelines
- Solid foundation in supervised and unsupervised ML methods
- Exposure to sensor data, image processing, or computer vision techniques is a strong plus
- Experience with cloud environments, preferably AWS, for model deployment and monitoring
- Sound understanding of statistics, optimization, and data visualization
- Ability to work independently and drive projects from idea to execution
- Strong communication skills and the ability to collaborate across multidisciplinary teams
- A passion for solving tough, real-world problems using data

What We Offer
- Real-World Impact: Your models will be deployed in operational environments, not just experimental labs
- Ownership & Autonomy: Freedom to explore, decide, and build in a high-trust environment
- Diverse Tech Stack: Work on ML pipelines that combine time-series, sensor, and vision data
- Fast-Track Learning: Be part of a startup where learning never stops and innovation is constant
- Collaborative Culture: A team that supports initiative, experimentation, and personal growth
- Product-Minded Thinking: Your work will be tightly integrated with product and business decisions
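One simple form of the time-series anomaly detection mentioned above is flagging a sensor reading that deviates sharply from its recent history via a rolling z-score. The window size, threshold, and readings below are illustrative assumptions, not Haber's actual pipeline:

```python
from statistics import fmean, pstdev

# Hedged sketch: rolling z-score anomaly detection on sensor readings.
# Window size and threshold are illustrative assumptions.
def anomalies(series, window=5, threshold=3.0):
    flagged = []
    for t in range(window, len(series)):
        ref = series[t - window:t]           # the recent history before time t
        mu, sigma = fmean(ref), pstdev(ref)
        if sigma and abs(series[t] - mu) / sigma > threshold:
            flagged.append(t)                # reading deviates sharply from history
    return flagged

readings = [10.0, 10.1, 9.9, 10.0, 10.2, 10.1, 25.0, 10.0]
spikes = anomalies(readings)  # the spike at index 6 is flagged
```

The design choice worth noting is that the reference window deliberately excludes the current point, so a single spike cannot inflate the baseline it is judged against.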
Posted 4 days ago
10.0 years
0 Lacs
Navi Mumbai, Maharashtra, India
Remote
Skills: Windows, VMware Platform, Patch Management, VMware Infrastructure, Hypervisor, Internet Protocol Suite (TCP/IP)

Job Description

Position: Windows & VMware L2/L3/SME

Job Purpose
Design, solution, troubleshoot, and architect for the Windows/VMware platform.

Technical
- Ensure the 24/7/365 availability of all systems, applications, and infrastructure
- Configure new servers, applications, and updates, and develop processes and procedures that support secure systems
- Create and manage security and distribution groups, including managing group memberships
- Troubleshoot and resolve Active Directory-related issues, including authentication problems, replication issues, and directory service errors
- Implement and maintain Active Directory Certificate Services (AD CS) for issuing and managing digital certificates within the organization
- Design, implement, and maintain Active Directory services, including domain controllers, DNS, Windows Failover Cluster, etc.
- Create and organize OUs to structure the AD hierarchy efficiently
- Create, edit, and link Group Policy Objects (GPOs) to OUs
- Assign permissions to users or groups for specific OUs or objects within AD
- Diagnose and resolve complex AD-related issues
- Manage security group memberships, which can impact resource access
- Extend or modify the AD schema for custom attributes or classes
- Manage complex GPOs, including scripting and software deployment
- Implement auditing policies and ensure AD compliance with security policies
- Plan for AD scalability and high availability to ensure business continuity
- Implement automation scripts and tools for routine tasks and reporting
- Optimize AD performance by monitoring and tuning AD components
- Participate in anticipating, mitigating, identifying, troubleshooting, and resolving hardware and software issues on servers in a timely and accurate fashion
- Coordinate with other staff to prioritize and continue improving existing systems
- Review new technology according to company direction

Requirements
- Advanced Remote Desktop and Terminal Server experience
- VMware Cloud, Virtualization, Server, and Storage vendor certifications preferred
- Ten-plus years of architecture experience across multiple vendors and technologies
- In-depth understanding of Cloud, Server, Storage, and Virtualization architecture design techniques, theories, principles, and practices
- Solid experience designing enterprise Cloud, Virtualization, Server, and Storage systems within a broad spectrum of technologies
- Excellent knowledge of private, public, and hybrid cloud environments
- Excellent knowledge of cloud orchestration platforms
- Excellent knowledge of Windows Server and Linux in HA/DR-type configurations
- Excellent knowledge of server hardware platforms: x86
- Excellent knowledge of VMware ESX, vSphere, vRA
- Excellent knowledge of SAN, NAS, DAS, RAID, SCSI, Fibre Channel, backup to disk, clustering, and encryption technologies from multiple vendors

Accountabilities
- Attend to customer requests and issues with the ability to understand and resolve them
- Knowledge transfer and knowledge-base creation: own internal trainings and workshops in their area
- Create and publish knowledge-base articles and wiki documents
- Strengthen the team's knowledge level in a proactive manner

Outcomes/KPIs
- Solution share at an individual level
- Quality of solution: no follow-up issues
- Contribution to knowledge building, documenting lessons learned, and effective cascading of knowledge
- Quick initial response time

Work Environment And Benefits
- Opportunity to research and resolve problems even if the solution isn't at your fingertips
- Availability for rotating shifts depending upon business needs, including rotating morning, evening, and night shifts

Education And Qualifications
Bachelor's degree in Engineering or equivalent.

Work Experience
Minimum 10+ years of architecture-level experience in Windows & VMware solutioning, along with very good cloud skills.

Interested candidates can share their resume with the below mandatory details:
Full Name (as per Aadhaar card), Contact Number, Email ID, Current Company, Total Experience, Relevant Experience, Current Salary (Fixed + Variable), Expected Salary, CTC of offer in hand (if any), Primary Skills, Notice Period/LWD, Current Location, Preferred Location, Highest Qualification, 10th %, 12th % or Diploma %, Graduation %, Post-Graduation % (if applicable)
Posted 4 days ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Join us as an MSSQL Database Site Reliability Engineer at Barclays, responsible for supporting the successful delivery of Location Strategy projects to plan, budget, and agreed quality and governance standards. You'll spearhead the evolution of our digital landscape, driving innovation and excellence, and harness cutting-edge technology to revolutionise our digital offerings, ensuring unparalleled customer experiences.

To be successful as an MSSQL Database Site Reliability Engineer you should have experience with:
- Willingness to learn and investigate new products
- Setup, configuration, and support of SQL Server 2014, 2016, 2017, 2019, and 2022
- A proven track record of implementing and leading SRE practices across large organizations or complex teams
- Expert-level knowledge of telemetry, monitoring, and platform observability tooling (e.g., ESAAS), with experience in customizing and scaling these solutions
- Implementing modern DevOps and SRE practices in enterprise environments
- Proven expertise in SQL query optimization and database performance tuning at scale
- DevOps automation tools such as code versioning (Git), JIRA, Ansible, containers and Kubernetes, and database CI/CD tools and their implementation
- Hands-on DevOps work with Ansible, Python, and T-SQL coding
- Configuring all components of the MS SQL stack: the core DB engine plus SSRS, SSAS, SSIS, replication topologies, AlwaysOn, Service Broker, log shipping, database snapshots, and Windows Clustering
- Extensive experience as a production support SQL Server DBA, with strong experience in large, high-volume, high-criticality environments
- Database maintenance and troubleshooting, with in-depth expertise in solving database contention problems (deadlocks, blocking, etc.)
- Extensive expertise in performance tuning, including both query and server optimization
- Expert knowledge of backup and recovery
- Good knowledge of PowerShell scripting

The resource will handle major escalated incidents and complex problem records. This includes periodic review of systems, capacity management, onboarding new services, patching coordination, and maintaining the good health of these systems. Take responsibility for completing tasks, focusing on requirements and planning to meet client needs, which the job holder helps to identify; advise and recommend technical solutions based on experience and industry knowledge. Drive the automation agenda by identifying opportunities for automation in the database area and working to deliver them.

Some Other Highly Valued Skills May Include
- Experience with database automation
- Strong skills in designing and delivering platforms to host SQL Server, covering HA and DR, at enterprise scale
- Experience with tape backup tools like TSM/TDP/DDBoost/BoostFS/Rubrik
- Knowledge of the ITIL framework, vocabulary, and best practices
- Understanding of cloud-based computing, particularly RDS/Azure
- Expertise in system configuration management tools such as Chef and Ansible for database server configurations
- Expertise with scripting languages (e.g., PowerShell, Python) for automation/migration tasks

You may be assessed on the key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking, and digital and technology, as well as job-specific technical skills. This role is based in Pune.

Purpose of the role
To apply software engineering techniques, automation, and best practices in incident response, ensuring the reliability, availability, and scalability of systems, platforms, and technology.

Accountabilities
- Availability, performance, and scalability of systems and services through proactive monitoring, maintenance, and capacity planning
- Resolution, analysis, and response to system outages and disruptions, and implementation of measures to prevent similar incidents from recurring
- Development of tools and scripts to automate operational processes, reducing manual workload, increasing efficiency, and improving system resilience
- Monitoring and optimisation of system performance and resource usage, identifying and addressing bottlenecks, and implementing best practices for performance tuning
- Collaboration with development teams to integrate best practices for reliability, scalability, and performance into the software development lifecycle, and close work with other teams to ensure smooth and efficient operations
- Staying informed of industry technology trends and innovations, and actively contributing to the organization's technology communities to foster a culture of technical excellence and growth

Assistant Vice President Expectations
To advise and influence decision making, contribute to policy development, and take responsibility for operational effectiveness. Collaborate closely with other functions and business divisions. Lead a team performing complex tasks, using well-developed professional knowledge and skills to deliver work that impacts the whole business function. Set objectives and coach employees in pursuit of those objectives, appraise performance relative to objectives, and determine reward outcomes. If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L – Listen and be authentic, E – Energise and inspire, A – Align across the enterprise, D – Develop others. For an individual contributor, they will lead collaborative assignments, guide team members through structured assignments, and identify the need to include other areas of specialisation to complete assignments. They will identify new directions for assignments and/or projects, identifying a combination of cross-functional methodologies or practices to meet required outcomes. Consult on complex issues, providing advice to People Leaders to support the resolution of escalated issues. Identify ways to mitigate risk, and develop new policies and procedures in support of the control and governance agenda. Take ownership of managing risk and strengthening controls in relation to the work done. Perform work that is closely related to that of other areas, which requires an understanding of how areas coordinate and contribute to the achievement of the objectives of the organisation's sub-function. Collaborate with other areas of work and business-aligned support areas to keep up to speed with business activity and strategy. Engage in complex analysis of data from multiple internal and external sources (such as procedures and practices in other areas, teams, and companies) to solve problems creatively and effectively. Communicate complex information; 'complex' information could include sensitive information or information that is difficult to communicate because of its content or audience. Influence or convince stakeholders to achieve outcomes.

All colleagues are expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right – and the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave.
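One generic pattern behind the contention-handling expertise listed above is retrying a transaction that fails as a deadlock victim, backing off between attempts. This is an illustrative sketch, not Barclays' tooling; the exception class and transaction function are stand-ins for a real driver's transient error and a real unit of work:

```python
import time

# Illustrative sketch: retry a transaction that hits a transient deadlock,
# with exponential backoff. DeadlockError stands in for a driver's error type.
class DeadlockError(Exception):
    """Stand-in for a transient deadlock/victim error from a database driver."""

def run_with_retry(txn, attempts=4, base_delay=0.01):
    for attempt in range(attempts):
        try:
            return txn()
        except DeadlockError:
            if attempt == attempts - 1:
                raise                                  # give up after the last attempt
            time.sleep(base_delay * (2 ** attempt))    # back off before retrying

calls = {"n": 0}
def flaky_txn():
    # Invented transaction: fails twice as a "deadlock victim", then succeeds
    calls["n"] += 1
    if calls["n"] < 3:
        raise DeadlockError("chosen as deadlock victim")
    return "committed"

result = run_with_retry(flaky_txn)  # succeeds on the third attempt
```

Retry only makes sense for errors the engine designates as transient; persistent blocking, by contrast, calls for the query- and server-level tuning the posting describes.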
Posted 4 days ago
4.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
We are looking for a Senior Data Scientist to join our team and apply your expertise in statistical data analysis, machine learning, and NLP to design and deliver transformative AI solutions. As a Senior Data Scientist, you will actively contribute to complex projects requiring full lifecycle involvement, from data preparation to model deployment, collaborating with cross-functional teams and creating production-ready outcomes.

Responsibilities
- Develop AI solutions such as classification, clustering, anomaly detection, and NLP
- Apply advanced statistical methods and machine learning algorithms to address complex challenges
- Utilize Python and SQL for production-level code and thorough data analysis
- Build model workflows, including MLOps and feature engineering techniques
- Leverage Azure AI Search and similar tools to provide stakeholders with insights and model accessibility
- Work closely with developers and project managers, using tools like GitLab for version control and Jira for project tracking
- Optimize data pipelines and improve model performance for practical applications
- Present technical concepts in an easy-to-understand manner to diverse audiences
- Stay informed about emerging technologies and incorporate them to solve real-world problems
- Follow Agile practices and demonstrate understanding of UNIX command-line tools

Requirements
- 4+ years of relevant experience in Data Science
- Proficiency in statistical data analysis, machine learning, and NLP for practical problem-solving
- Expertise in Python programming and SQL, with demonstrated use of data analysis libraries
- Background in creating AI solutions such as classification, clustering, anomaly detection, or NLP
- Familiarity with MLOps, feature engineering, and managing model development workflows
- Flexibility to apply tools like Azure AI Search for business-ready model deployment
- Competency in software development practices and tools like GitLab
- Knowledge of Agile methodologies and project tracking with tools like Jira
- Understanding of UNIX command-line operations and creative problem-solving with innovative technologies
- B2 level of English or higher, especially in technical communication

Nice to have
- Knowledge of cloud computing, Big Data tools, and containerization technologies
- Skills in data visualization tools to effectively convey insights
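As a toy illustration of the text-classification side of the NLP work mentioned above, here is a minimal Naive Bayes classifier written against the standard library alone. The labels and documents are invented, and production work would use the Python ML stack the posting names:

```python
from collections import Counter
from math import log

# Toy Naive Bayes text classifier (illustration only; the labels and
# documents are invented for the sketch).
def train(docs):
    # docs: list of (label, text) pairs
    labels = Counter(lbl for lbl, _ in docs)
    words = {lbl: Counter() for lbl in labels}
    for lbl, text in docs:
        words[lbl].update(text.lower().split())
    vocab = {w for c in words.values() for w in c}
    return labels, words, vocab

def predict(model, text):
    labels, words, vocab = model
    total = sum(labels.values())
    def score(lbl):
        # log prior + log likelihood with add-one smoothing
        s = log(labels[lbl] / total)
        denom = sum(words[lbl].values()) + len(vocab)
        for w in text.lower().split():
            s += log((words[lbl][w] + 1) / denom)
        return s
    return max(labels, key=score)

model = train([
    ("spam", "win money now"),
    ("spam", "free money offer"),
    ("ham", "meeting agenda attached"),
    ("ham", "project status report"),
])
label = predict(model, "free money")
```

Add-one smoothing keeps unseen words from zeroing out a class score, which is the standard fix that makes this bag-of-words approach usable beyond toy vocabularies.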
Posted 4 days ago