Get alerts for new jobs matching your selected skills, preferred locations, and experience range. Manage Job Alerts
3.0 - 7.0 years
0 Lacs
chennai, tamil nadu
On-site
We are looking for a skilled and analytical Data Analyst with expertise in data modeling, data analysis, and Python programming. As a Data Analyst, you will design data models, conduct in-depth analysis, and build automated solutions that support business decision-making and reporting.

Key Responsibilities:
- Design and implement conceptual, logical, and physical data models to support analytics and reporting.
- Analyze large datasets to uncover trends, patterns, and insights that drive business decisions.
- Develop and maintain Python scripts for data extraction, transformation, and analysis.
- Collaborate with data engineers, business analysts, and stakeholders to understand data requirements.
- Create dashboards, reports, and visualizations to communicate findings effectively.
- Ensure data quality, consistency, and integrity across systems.
- Document data definitions, models, and analysis processes.

Requirements:
- Strong experience in data modeling, including ER diagrams, normalization, and dimensional modeling.
- Proficiency in Python for data analysis (Pandas, NumPy, Matplotlib, etc.).
- Solid understanding of SQL and relational databases.
- Experience with data visualization tools such as Power BI, Tableau, or Matplotlib/Seaborn.
- Ability to translate business requirements into technical solutions.
- Excellent analytical, problem-solving, and communication skills.

Virtusa values teamwork, quality of life, and professional and personal development. Joining Virtusa means becoming part of a global team of 27,000 people dedicated to your growth, with opportunities to work on exciting projects and state-of-the-art technologies throughout your career.
At Virtusa, collaboration and a team-oriented environment are paramount, providing great minds with a dynamic space to cultivate new ideas and promote excellence.
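The Pandas-style grouping and aggregation this role calls for can be illustrated with a minimal standard-library sketch; the sales extract, column names, and numbers below are entirely made up:

```python
import csv
import io
from collections import defaultdict
from statistics import mean

# Hypothetical extract: monthly order values for two regions.
RAW = """month,region,order_value
2024-01,North,120
2024-01,South,80
2024-02,North,150
2024-02,South,90
"""

def monthly_region_means(raw: str) -> dict:
    """Group order values by (month, region) and average them."""
    groups = defaultdict(list)
    for row in csv.DictReader(io.StringIO(raw)):
        groups[(row["month"], row["region"])].append(float(row["order_value"]))
    return {key: mean(vals) for key, vals in groups.items()}

summary = monthly_region_means(RAW)
```

In a real Pandas workflow the same step would be a one-line `groupby(["month", "region"]).mean()`; the point here is only the shape of the extraction-and-aggregation task.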
Posted 1 week ago
6.0 years
12 - 18 Lacs
Delhi, India
Remote
Skills: Data Modeling, Snowflake, Schemas, Star Schema Design, SQL, Data Integration

Job Title: Senior Data Modeler
Experience Required: 6+ Years
Location: Remote
Employment Type: Full-time / Contract (Remote)
Domain: Data Engineering / Analytics / Data Warehousing

Job Summary: We are seeking an experienced and detail-oriented Data Modeler with a strong background in conceptual, logical, and physical data modeling. The ideal candidate will have in-depth knowledge of Snowflake architecture, data modeling best practices (Star/Snowflake schema), and advanced SQL scripting. You will be responsible for designing robust, scalable data models and working closely with data engineers, analysts, and business stakeholders.

Key Responsibilities

Data Modeling:
- Design conceptual, logical, and physical data models.
- Create and maintain Star and Snowflake schemas for analytical reporting.
- Perform normalization and denormalization based on performance and reporting requirements.
- Work closely with business stakeholders to translate requirements into optimized data structures.
- Maintain data model documentation and the data dictionary.

Snowflake Expertise:
- Design and implement Snowflake schemas with optimal partitioning and clustering strategies.
- Perform performance tuning for complex queries and storage optimization.
- Implement Time Travel, Streams, and Tasks for data recovery and pipeline automation.
- Manage and secure data using Secure Views and Materialized Views.
- Optimize usage of Virtual Warehouses and storage costs.

SQL & Scripting:
- Write and maintain advanced SQL queries, including Common Table Expressions (CTEs), window functions, and recursive queries.
- Build automation scripts for data loading, transformation, and validation.
- Troubleshoot and optimize SQL queries for performance and accuracy.
- Support data migration and integration projects.

Required Skills & Qualifications:
- 6+ years of experience in data modeling and data warehouse design.
- Proven experience with the Snowflake platform (min. 2 years).
- Strong hands-on experience in dimensional modeling (Star/Snowflake schemas).
- Expert in SQL and scripting for automation and performance optimization.
- Familiarity with data modeling tools such as Erwin or PowerDesigner.
- Experience working in Agile/Scrum environments.
- Strong analytical and problem-solving skills.
- Excellent communication and stakeholder engagement skills.

Preferred Skills (Nice to Have):
- Experience with ETL/ELT tools like dbt, Informatica, Talend, etc.
- Exposure to cloud platforms like AWS, Azure, or GCP.
- Familiarity with Data Governance and Data Quality frameworks.
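The CTE and window-function skills this posting emphasizes can be sketched as follows; an in-memory SQLite table stands in for a Snowflake fact table, and the schema and data are illustrative only:

```python
import sqlite3

# Illustrative fact table; SQLite 3.25+ supports window functions.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("East", 100), ("East", 300), ("West", 200)])

QUERY = """
WITH region_totals AS (            -- CTE: pre-aggregate per region
    SELECT region, SUM(amount) AS total
    FROM sales
    GROUP BY region
)
SELECT region,
       total,
       RANK() OVER (ORDER BY total DESC) AS rnk   -- window function
FROM region_totals
ORDER BY rnk
"""
rows = conn.execute(QUERY).fetchall()
```

The same query shape (CTE feeding a ranked SELECT) carries over to Snowflake SQL essentially unchanged.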
Posted 1 week ago
7.0 years
0 Lacs
Greater Kolkata Area
On-site
Role: We are looking for a Principal Security Content Developer with deep technical expertise in security event data engineering, parsing, and content development across modern SIEM platforms. In this role, you will lead the design and implementation of scalable, high-fidelity security detections, ensuring normalized and enriched data flows into SIEM tools like Microsoft Sentinel, Splunk, and Google Chronicle. This role is ideal for a hands-on security engineer with strong experience in data modeling, parsing, and log source onboarding.

Responsibilities:
- Architect, develop, and optimize detection content across SIEM platforms such as Microsoft Sentinel, Splunk, and Google Chronicle.
- Normalize and structure diverse log sources using schemas like Splunk CIM, Microsoft Sentinel, OCSF, and Chronicle UDM to ensure consistent detection across the board.
- Collaborate with teams, including Threat Labs and Data Engineering, to improve parsing, data transformation, and use case configurations.
- Perform end-to-end development, customization, and onboarding of supported and custom data sources (EDR, firewall, antivirus, proxies, OS, databases).
- Repair events with missing or incorrect data, create parser extensions, and manage flow logic for log ingestion pipelines.
- Conduct log source analysis and maintain robust documentation of data structures, parsing rules, and detection logic.
- Build and maintain monitoring reports to ensure data pipeline availability and proactively identify performance issues or gaps in data coverage.
- Continuously evaluate and refine detection content and parsing logic for high fidelity and low false-positive rates.

Requirements:
- 7+ years of experience in security engineering, detection content development, or SIEM management.
- Strong hands-on experience with SIEM platforms, particularly Microsoft Sentinel, Splunk, and Chronicle.
- Expertise with multiple data models, including Splunk CIM, Sentinel schemas, Chronicle UDM, and OCSF.
- Experience working with diverse log sources (e.g., EDRs, firewalls, antivirus, proxies, databases, OS logs).
- Skilled in event parsing, field extraction, normalization, and enrichment for log data.
- Familiarity with scripting/query languages such as KQL, SPL, and UDM search syntax.
- Strong understanding of SOC operations, detection engineering workflows, and threat modeling frameworks (MITRE ATT&CK, etc.).

Preferred Qualifications:
- Experience working with cloud-native and hybrid security architectures.
- Familiarity with data transformation tools and stream processing pipelines.
- Previous collaboration with threat research or threat intelligence teams.
- Security certifications such as GCIA, GCTI, or similar are a plus.

(ref:hirist.tech)
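The event parsing, field extraction, and normalization work described above can be sketched like this; the raw log format and the normalized field names are invented for illustration and do not correspond to any specific SIEM data model (Splunk CIM, Chronicle UDM, OCSF, etc.):

```python
import re

# Hypothetical firewall-style log line.
LINE = "2024-05-01T12:00:00Z action=DENY src=10.0.0.5 dst=8.8.8.8 dport=53"

PATTERN = re.compile(
    r"(?P<ts>\S+)\s+action=(?P<action>\w+)\s+src=(?P<src>\S+)\s+"
    r"dst=(?P<dst>\S+)\s+dport=(?P<dport>\d+)"
)

def normalize(line: str) -> dict:
    """Extract fields and rename them into a small common event schema."""
    m = PATTERN.match(line)
    if not m:
        # Unparsed events are tagged rather than dropped, so gaps stay visible.
        return {"event_type": "unparsed", "raw": line}
    return {
        "event_time": m["ts"],
        "event_action": m["action"].lower(),   # normalize case
        "source_ip": m["src"],
        "destination_ip": m["dst"],
        "destination_port": int(m["dport"]),
    }

event = normalize(LINE)
```

Production parsers would additionally validate timestamps and IPs and enrich the record (geo, asset, identity), but the extract-rename-normalize core is the same.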
Posted 1 week ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Position: React & Node.js Developer
Experience: 2-5 years
Budget: 7-8 LPA
Location: Mumbai
Working Mode: On-site
Skills: React.js, Node.js & Express.js, MySQL, SQL ORM, RESTful APIs

Roles and Responsibilities:

1. Frontend – React.js
- Strong hands-on experience in building responsive user interfaces using React.js
- Familiarity with modern React features like hooks, context API, and functional components
- Good understanding of component-based architecture and state management

2. Backend – Node.js & Express.js
- Proven experience in backend development using Node.js and Express.js
- Experience in developing and maintaining RESTful APIs
- Ability to write clean, modular, and scalable code

3. Database – MySQL
- Solid working knowledge of MySQL
- Ability to write efficient and optimized SQL queries
- Understanding of data normalization, indexing, and performance tuning

4. ORM (SQL-based)
- Experience with at least one SQL ORM such as Sequelize, TypeORM, Prisma, or equivalent
- Understanding of database schema relationships and model management through ORM

5. General Requirements
- Experience working on real-world projects or production-level applications
- Ability to collaborate with cross-functional teams and follow best coding practices
Posted 1 week ago
4.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Our Purpose

Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Title and Summary: Analyst, Inclusive Innovation & Analytics, Center for Inclusive Growth

The Center for Inclusive Growth is the social impact hub at Mastercard. The organization seeks to ensure that the benefits of an expanding economy accrue to all segments of society. Through actionable research, impact data science, programmatic grants, stakeholder engagement and global partnerships, the Center advances equitable and sustainable economic growth and financial inclusion around the world. The Center’s work is at the heart of Mastercard’s objective to be a force for good in the world.
Reporting to the Vice President, Inclusive Innovation & Analytics, the Analyst will: 1) create and/or scale data, data science, and AI solutions, methodologies, products, and tools to advance inclusive growth and the field of impact data science; 2) work on the execution and implementation of key priorities to advance external and internal data-for-social strategies; and 3) manage operations to ensure operational excellence across the Inclusive Innovation & Analytics team.

Key Responsibilities

Data Analysis & Insight Generation:
- Design, develop, and scale data science and AI solutions, tools, and methodologies to support inclusive growth and impact data science.
- Analyze structured and unstructured datasets to uncover trends, patterns, and actionable insights related to economic inclusion, public policy, and social equity.
- Translate analytical findings into compelling visualizations and dashboards that inform policy, program design, and strategic decision-making.
- Create dashboards, reports, and visualizations that communicate findings to both technical and non-technical audiences.
- Provide data-driven support for convenings involving philanthropy, government, private sector, and civil society partners.

Data Integration & Operationalization:
- Assist in building and maintaining data pipelines for ingesting and processing diverse data sources (e.g., open data, text, survey data).
- Ensure data quality, consistency, and compliance with privacy and ethical standards.
- Collaborate with data engineers and AI developers to support backend infrastructure and model deployment.

Team Operations:
- Manage team operations, meeting agendas, project management, and strategic follow-ups to ensure alignment with organizational goals.
- Lead internal reporting processes, including the preparation of dashboards, performance metrics, and impact reports.
- Support team budgeting, financial tracking, and process optimization.
- Support grantees and grants management as needed.
- Develop briefs, talking points, and presentation materials for leadership and external engagements.
- Translate strategic objectives into actionable data initiatives and track progress against milestones.
- Coordinate key activities and priorities in the portfolio, working across teams at the Center and the business as applicable to facilitate collaboration and information sharing.
- Support the revamp of the Measurement, Evaluation, and Learning frameworks and workstreams at the Center.
- Provide administrative support as needed.
- Manage ad-hoc projects and event organization.

Qualifications:
- Bachelor’s degree in Data Science, Statistics, Computer Science, Public Policy, or a related field.
- 2–4 years of experience in data analysis, preferably in a mission-driven or interdisciplinary setting.
- Strong proficiency in Python and SQL; experience with data visualization tools (e.g., Tableau, Power BI, Looker, Plotly, Seaborn, D3.js).
- Familiarity with unstructured data processing and robust machine learning concepts.
- Excellent communication skills and ability to work across technical and non-technical teams.
Technical Skills & Tools

Data Wrangling & Processing:
- Data cleaning, transformation, and normalization techniques
- Pandas, NumPy, Dask, Polars
- Regular expressions, JSON/XML parsing, web scraping (e.g., BeautifulSoup, Scrapy)

Machine Learning & Modeling:
- Scikit-learn, XGBoost, LightGBM
- Proficiency in supervised/unsupervised learning, clustering, classification, regression
- Familiarity with LLM workflows and tools like Hugging Face Transformers, LangChain (a plus)

Visualization & Reporting:
- Power BI, Tableau, Looker
- Python libraries: Matplotlib, Seaborn, Plotly, Altair
- Dashboarding tools: Streamlit, Dash
- Storytelling with data and stakeholder-ready reporting

Cloud & Collaboration Tools:
- Google Cloud Platform (BigQuery, Vertex AI), Microsoft Azure
- Git/GitHub, Jupyter Notebooks, VS Code
- Experience with APIs and data integration tools (e.g., Airflow, dbt)

Ideal Candidate: You are a curious and collaborative analyst who believes in the power of data to drive social change. You’re excited to work with cutting-edge tools while staying grounded in the real-world needs of communities and stakeholders.

Corporate Security Responsibility: All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization; therefore, every person working for, or on behalf of, Mastercard is responsible for information security and must:
- Abide by Mastercard’s security policies and practices;
- Ensure the confidentiality and integrity of the information being accessed;
- Report any suspected information security violation or breach; and
- Complete all periodic mandatory security training in accordance with Mastercard’s guidelines.

R-253034
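The cleaning-and-normalization techniques listed above (regular expressions, JSON parsing, text normalization) might look like this in a minimal standard-library sketch; the survey payload and field names are invented:

```python
import json
import re
import unicodedata

# Hypothetical survey export: messy whitespace, a non-breaking space
# (the JSON escape \u00a0), inconsistent casing, and a formatted number.
RAW = '[{"name": "  Ana\\u00a0Maria ", "income": "1,250"}, {"name": "BOB", "income": null}]'

def clean_record(rec: dict) -> dict:
    """Normalize one raw survey record into tidy typed fields."""
    name = unicodedata.normalize("NFKC", rec["name"] or "")  # NBSP -> space
    name = re.sub(r"\s+", " ", name).strip().title()         # collapse + title-case
    income = rec["income"]
    income = float(income.replace(",", "")) if income else None
    return {"name": name, "income": income}

records = [clean_record(r) for r in json.loads(RAW)]
```

In a Pandas pipeline the same logic would live in vectorized `str` operations; the record-at-a-time version just makes each normalization step explicit.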
Posted 1 week ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Our Purpose

Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Title and Summary: Analyst, Inclusive Innovation & Analytics, Center for Inclusive Growth

The Center for Inclusive Growth is the social impact hub at Mastercard. The organization seeks to ensure that the benefits of an expanding economy accrue to all segments of society. Through actionable research, impact data science, programmatic grants, stakeholder engagement and global partnerships, the Center advances equitable and sustainable economic growth and financial inclusion around the world. The Center’s work is at the heart of Mastercard’s objective to be a force for good in the world.
Reporting to the Vice President, Inclusive Innovation & Analytics, the Analyst will: 1) create and/or scale data, data science, and AI solutions, methodologies, products, and tools to advance inclusive growth and the field of impact data science; 2) work on the execution and implementation of key priorities to advance external and internal data-for-social strategies; and 3) manage operations to ensure operational excellence across the Inclusive Innovation & Analytics team.

Key Responsibilities

Data Analysis & Insight Generation:
- Design, develop, and scale data science and AI solutions, tools, and methodologies to support inclusive growth and impact data science.
- Analyze structured and unstructured datasets to uncover trends, patterns, and actionable insights related to economic inclusion, public policy, and social equity.
- Translate analytical findings into compelling visualizations and dashboards that inform policy, program design, and strategic decision-making.
- Create dashboards, reports, and visualizations that communicate findings to both technical and non-technical audiences.
- Provide data-driven support for convenings involving philanthropy, government, private sector, and civil society partners.

Data Integration & Operationalization:
- Assist in building and maintaining data pipelines for ingesting and processing diverse data sources (e.g., open data, text, survey data).
- Ensure data quality, consistency, and compliance with privacy and ethical standards.
- Collaborate with data engineers and AI developers to support backend infrastructure and model deployment.

Team Operations:
- Manage team operations, meeting agendas, project management, and strategic follow-ups to ensure alignment with organizational goals.
- Lead internal reporting processes, including the preparation of dashboards, performance metrics, and impact reports.
- Support team budgeting, financial tracking, and process optimization.
- Support grantees and grants management as needed.
- Develop briefs, talking points, and presentation materials for leadership and external engagements.
- Translate strategic objectives into actionable data initiatives and track progress against milestones.
- Coordinate key activities and priorities in the portfolio, working across teams at the Center and the business as applicable to facilitate collaboration and information sharing.
- Support the revamp of the Measurement, Evaluation, and Learning frameworks and workstreams at the Center.
- Provide administrative support as needed.
- Manage ad-hoc projects and event organization.

Qualifications:
- Bachelor’s degree in Data Science, Statistics, Computer Science, Public Policy, or a related field.
- 2–4 years of experience in data analysis, preferably in a mission-driven or interdisciplinary setting.
- Strong proficiency in Python and SQL; experience with data visualization tools (e.g., Tableau, Power BI, Looker, Plotly, Seaborn, D3.js).
- Familiarity with unstructured data processing and robust machine learning concepts.
- Excellent communication skills and ability to work across technical and non-technical teams.
Technical Skills & Tools

Data Wrangling & Processing:
- Data cleaning, transformation, and normalization techniques
- Pandas, NumPy, Dask, Polars
- Regular expressions, JSON/XML parsing, web scraping (e.g., BeautifulSoup, Scrapy)

Machine Learning & Modeling:
- Scikit-learn, XGBoost, LightGBM
- Proficiency in supervised/unsupervised learning, clustering, classification, regression
- Familiarity with LLM workflows and tools like Hugging Face Transformers, LangChain (a plus)

Visualization & Reporting:
- Power BI, Tableau, Looker
- Python libraries: Matplotlib, Seaborn, Plotly, Altair
- Dashboarding tools: Streamlit, Dash
- Storytelling with data and stakeholder-ready reporting

Cloud & Collaboration Tools:
- Google Cloud Platform (BigQuery, Vertex AI), Microsoft Azure
- Git/GitHub, Jupyter Notebooks, VS Code
- Experience with APIs and data integration tools (e.g., Airflow, dbt)

Ideal Candidate: You are a curious and collaborative analyst who believes in the power of data to drive social change. You’re excited to work with cutting-edge tools while staying grounded in the real-world needs of communities and stakeholders.

Corporate Security Responsibility: All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization; therefore, every person working for, or on behalf of, Mastercard is responsible for information security and must:
- Abide by Mastercard’s security policies and practices;
- Ensure the confidentiality and integrity of the information being accessed;
- Report any suspected information security violation or breach; and
- Complete all periodic mandatory security training in accordance with Mastercard’s guidelines.

R-253034
Posted 1 week ago
2.0 years
0 Lacs
Khambhalia, Gujarat, India
On-site
JOB DESCRIPTION: Field Officer, Operations-CPP
Document Number: NAYA-HR-OPSCPP-JD-17
Version / Revision: 01 / 03
Date: 06/DEC/2024
Prepared by: System Coordinator
Reviewed by: Area Manager
Approved by: HOD
JD reviewed as part of the HR Process Effectiveness Project in December 2022.

AMENDMENT DETAILS
1. 20.08.2018 — Discard: Old Procedure (Rev 00); Insert: Revised Procedure NAYA-HR-OPS-JD-12 (Rev 01.00). Notes: Organizational change requirement.
2. 23.09.2020 — Discard: Revised Procedure NAYA-HR-OPS-JD-12 (Rev 01.00); Insert: NAYA-HR-OPSCPP-JD-17 (Rev 01.01). Notes: Change in document number.
3. 22.06.2022 — Discard: NAYA-HR-OPSCPP-JD-17 (Rev 01.01); Insert: NAYA-HR-OPSCPP-JD-17 (Rev 01.02). Notes: Update in Skills & Knowledge.
4. 06.12.2024 — Discard: NAYA-HR-OPSCPP-JD-17 (Rev 01.02); Insert: NAYA-HR-OPSCPP-JD-17 (Rev 01.03). Notes: Org structure realignment.

JOB PURPOSE
Responsible for operation of the plant from the field, in coordination with the panel officer, during shift operation. Performs isolation and normalization of equipment, assists maintenance functions, reports field abnormalities, etc., with due safety in place. Ensures that equipment is isolated and normalized as per SOP.

ORGANISATIONAL CHART

RESPONSIBILITIES / ACCOUNTABILITIES / ACTIVITIES
1. All equipment availability, especially coal- and oil-fired boilers, STGs & GTGs, > 95%:
- Ensure all equipment is available.
- Ensure hot-standby availability.
- Discuss all field abnormalities with the Shift Incharge.
- Communicate with the Panel Officer and track closure of all abnormalities.
2. Coordination with the Panel Officer and Shift Incharge on all field-related issues:
- Ensure all equipment is available.
- Ensure proper isolation of equipment and systems prior to any maintenance job, and sign the work permit.
- Check all field-related critical parameters and keep a close check that they are within the normal operating range.
3. Emergency handling:
- Ensure emergency handling system healthiness.
- Know the responsibility of emergency coordination and its position during an emergency.
4. Ensure optimum performance of all equipment:
- Ensure that all equipment runs below its design parameters and within the normal operating range.
- Schedule changeover of equipment as demanded by the panel officer.
- Ensure spare and tool availability in night shifts.
- Properly isolate the system, and check it, prior to handover to maintenance.
5. HSEF-related activities for the plant:
- HSE-related compliance: near-miss reporting, first aid box training and updating, PPEs, SOPs, etc.
- 100% compliance with PPEs.
- Ensure that all technicians and contract labour work in a safe environment with proper tools.
- Know hazardous operations and their consequences prior to any job.
- Take part in near-miss / substandard-condition reporting and participate in safety programs.
6. System, policy & procedure:
- Strictly adhere to the identified HSE policy.
- Ensure that all field-related parameters are within the design range during operation.
- Ensure proper housekeeping of the plant and restoration of site condition after any maintenance activity.
- Ensure availability of all operational documents in the field.
7. Provide training:
- Train technicians to operate in shifts.
- Train technicians to operate through local panels.
- Train technicians on hazardous operations.

KEY CHALLENGES
Complete the job within time without bypassing any safety requirement.

KEY DECISIONS
Made by jobholder:
- Isolation and line-up of equipment as scheduled by the panel officer.
- Prioritize jobs affecting operation and react to emergency situations.
- Allot technicians for non-critical jobs during simultaneous critical operations, as per the associated risk.
Recommendations to superior:
- Communication of critical abnormalities.
INTERACTIONS
Internal: Direct communication with the panel officer about field conditions; direct communication with the interdepartmental shift engineer for necessary action.
External: None specified.

DIMENSIONS
Financial: Improve plant reliability and availability.
Other: Develop technicians on operational, health, and safety points; groom technicians to take up responsibilities and work during shift hours; suggest modifications for plant improvement and safety; report near misses and other HSE-related issues, maintaining an excellent safety track record; comply with all safety-related standards.
Team size: Direct reports: nil. Indirect reports: not specified.

SKILLS & KNOWLEDGE
Educational Qualifications & Certifications:
- Diploma Engineer (Mechanical) or NPTI diploma engineer with a minimum of 2 years' experience in a thermal or combined-cycle cogeneration power plant, or
- BE / B.Tech (Mechanical / Power) with a minimum of 1 year of experience in a thermal or combined-cycle cogeneration power plant.
Functional Skills:
- Knowledge of plant operation with good command of all plant process parameters.
- Knowledge of equipment isolation and line-up for operation.
- Knowledge of HSE-related issues in field jobs.
Behavioural Skills:
- Good communication skills.
- Team building and motivation.
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Ciklum is looking for a Data Science Engineer to join our team full-time in India. We are a custom product engineering company that supports both multinational organizations and scaling startups to solve their most complex business challenges. With a global team of over 4,000 highly skilled developers, consultants, analysts and product owners, we engineer technology that redefines industries and shapes the way people live.

About the role: As a Data Science Engineer, you will become part of a cross-functional development team engineering the experiences of tomorrow. The prospective team is responsible for the design, development, and deployment of innovative enterprise technology, tools, and standard processes to support the delivery of tax services. The team focuses on the ability to deliver comprehensive, value-added, and efficient tax services to our clients. It is a dynamic team with professionals of varying backgrounds, from tax technical, technology development, and change management to project management. The team consults and executes on a wide range of initiatives involving process and tool development and implementation, including training development, engagement management, tool design, and implementation.
Responsibilities:
- Collaborate with engineers, data scientists, and business analysts to understand requirements, refine models, and integrate LLMs into AI solutions
- Incorporate RLHF and advanced techniques for tax-specific AI outputs
- Embed generative AI solutions into consolidation, reconciliation, and reporting processes
- Leverage LLMs to interpret unstructured tax documentation
- Develop and implement deep learning algorithms for AI solutions
- Stay updated on recent trends in GenAI and apply the latest research and techniques to projects
- Preprocess raw data, including text normalization, tokenization, and other techniques, to make it suitable for use with NLP models
- Set up and train large language models and other state-of-the-art neural networks
- Conduct thorough testing and validation to ensure accuracy and reliability of model implementations
- Perform statistical analysis of results and optimize model performance for various computational environments, including cloud and edge computing platforms
- Explore and propose innovative AI use cases to enhance tax functions
- Partner with tax, finance, and IT teams to integrate AI workflows
- Collaborate with legal teams to meet regulatory standards for tax data
- Perform model audits to identify and mitigate risks
- Monitor and optimize generative models for performance and scalability

Requirements:
- Solid understanding of object-oriented design patterns, concurrency/multithreading, and scalable AI and GenAI model deployment
- Strong programming skills in Python, PyTorch, TensorFlow, and related libraries
- Proficiency in RegEx, spaCy, NLTK, and NLP techniques for text representation and semantic extraction
- Hands-on experience in developing, training, and fine-tuning LLMs and AI models
- Practical understanding and experience in implementing techniques like CNNs, RNNs, GANs, RAG, LangChain, and Transformers
- Expertise in prompt engineering techniques and various vector databases
- Familiarity with the Azure cloud computing platform
Experience with Docker, Kubernetes, CI/CD pipelines Experience with deep learning, Computer Vision, CNN, RNN, LSTM Experience with Vector Databases (Milvus, Postgres) What's in it for you? Strong community: Work alongside top professionals in a friendly, open-door environment Growth focus: Take on large-scale projects with a global impact and expand your expertise Tailored learning: Boost your skills with internal events (meetups, conferences, workshops), Udemy access, language courses, and company-paid certifications Endless opportunities: Explore diverse domains through internal mobility, finding the best fit to gain hands-on experience with cutting-edge technologies Care: We’ve got you covered with company-paid medical insurance, mental health support, and financial & legal consultations About us: At Ciklum, we are always exploring innovations, empowering each other to achieve more, and engineering solutions that matter. With us, you’ll work with cutting-edge technologies, contribute to impactful projects, and be part of a One Team culture that values collaboration and progress. India is a strategic innovation hub for Ciklum, with growing teams in Chennai and Pune leading advancements in EdgeTech, AR/VR, IoT, and beyond. Join us to collaborate on game-changing solutions and take your career to the next level. Want to learn more about us? Follow us on Instagram, Facebook, LinkedIn. Explore, empower, engineer with Ciklum! Interested already? We would love to get to know you! Submit your application. We can’t wait to see you at Ciklum.
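For context on the preprocessing responsibility named above (text normalization and tokenization before NLP modeling), here is a minimal, dependency-free Python sketch. It is illustrative only, not Ciklum's pipeline: a production version would use the spaCy or NLTK stack the posting lists, and the sample sentence is invented.

```python
import re
import unicodedata

def preprocess(text: str) -> list[str]:
    """Normalize raw text and split it into word tokens.

    A minimal sketch of the normalization + tokenization step;
    real pipelines add stop-word removal, lemmatization, etc.
    """
    # Unicode normalization collapses visually equivalent characters
    text = unicodedata.normalize("NFKC", text)
    # Lowercase for case-insensitive matching
    text = text.lower()
    # Strip everything except letters, digits, and whitespace
    text = re.sub(r"[^a-z0-9\s]", " ", text)
    # Simple whitespace tokenization
    return text.split()

print(preprocess("Tax filing: Form W-2, due 15-Apr!"))
# -> ['tax', 'filing', 'form', 'w', '2', 'due', '15', 'apr']
```

In a spaCy- or NLTK-based pipeline the tokenizer and lemmatizer would replace the regex step, but the overall shape (normalize, then tokenize) stays the same.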
Posted 1 week ago
7.0 - 9.0 years
0 Lacs
Hyderābād
On-site
Project Role : Software Development Lead Project Role Description : Develop and configure software systems either end-to-end or for a specific stage of product lifecycle. Apply knowledge of technologies, applications, methodologies, processes and tools to support a client, project or entity. Must have skills : Automotive ECU Software Good to have skills : NA Minimum 5 year(s) of experience is required Educational Qualification : Bachelor's degree in Computer Science Engineering or a related field (or equivalent experience). Summary: The algorithm and software engineering team is seeking a highly motivated and talented Algorithm Development engineer to join our growing team. You will play a critical role in designing, developing, and implementing cutting-edge algorithms for automotive control systems. Roles & Responsibilities: 1. Develop and implement advanced algorithms using MATLAB, Simulink and Python, focusing on areas such as vehicle dynamics control and ADAS functionalities. 2. Apply knowledge of control theory, including state space analysis, feedback control systems, and optimal control techniques, to algorithm design. 3. Implement Kalman filtering techniques for sensor data fusion and state estimation within control algorithms. 4. Collaborate with engineers and other stakeholders to understand system requirements, define algorithm specifications, and conduct performance evaluations. 5. Participate in code reviews and provide constructive feedback to ensure code quality and adherence to best practices. 6. Document algorithms clearly and concisely, including design rationale, assumptions, and limitations. 7. Stay up-to-date on the latest advancements in control theory, vehicle dynamics, and relevant automotive technologies. Professional & Technical Skills: 1. MS in Engineering (Control Systems, Mechanical Engineering, or related field) 2. 7-9 years of relevant experience. 3. Strong experience in developing algorithms using MATLAB/Simulink and Python. 4. 
Skilled in interpreting signal behavior in time and frequency domains to identify trends, anomalies, and system characteristics. 5. Experience in analytical and numerical vehicle dynamics simulations 6. Good understanding of signal processing and estimation 7. Deep understanding of tire dynamics 8. Experience in pre-processing data, including techniques like data cleaning, normalization, and feature engineering 9. Excellent analytical and problem-solving skills. 10. Strong written and verbal communication skills. 11. Ability to work independently and as part of a team. Additional Information: - The candidate should have minimum 7.5 years of experience in Automotive ECU Software. - This position is based at our Hyderabad office. - A 15 years full time education is required. Bachelor's degree in Computer Science Engineering or a related field (or equivalent experience).
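To illustrate the Kalman filtering responsibility mentioned in this posting (sensor state estimation), here is a scalar Kalman filter in Python/NumPy. It is a textbook sketch with invented noise parameters, not the posting's actual estimator; a vehicle-dynamics application would use the full matrix state-space form with a motion model.

```python
import numpy as np

def kalman_1d(measurements, process_var=1e-5, meas_var=0.1 ** 2):
    """Scalar Kalman filter: estimate a constant signal from noisy samples."""
    x_hat = 0.0   # initial state estimate
    p = 1.0       # initial estimate covariance
    estimates = []
    for z in measurements:
        # Predict: constant model, covariance grows by the process noise
        p = p + process_var
        # Update: blend prediction and measurement via the Kalman gain
        k = p / (p + meas_var)
        x_hat = x_hat + k * (z - x_hat)
        p = (1 - k) * p
        estimates.append(x_hat)
    return estimates

rng = np.random.default_rng(0)
true_value = 1.25  # hypothetical constant signal
noisy = true_value + rng.normal(0, 0.1, size=200)
est = kalman_1d(noisy)
print(f"final estimate: {est[-1]:.3f} (true {true_value})")
```

The same predict/update structure carries over to the multivariate case used for sensor fusion, with matrices replacing the scalars.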
Posted 1 week ago
8.0 years
3 - 8 Lacs
Noida
On-site
Join our Team About this opportunity: We are looking for an experienced ArcSight Solution Architect to lead the design, implementation, and optimization of ArcSight-based security solutions. The ideal candidate will have deep expertise in SIEM (Security Information and Event Management), with hands-on experience in ArcSight architecture, deployment, and integration with various log sources and security tools. The role also includes close collaboration with cloud engineering, security operations, and compliance teams to ensure end-to-end security visibility across the GCP environment. What will you do: Analyse and understand new log source formats (syslog, flat files, APIs, JSON, etc.). Design, develop, and deploy custom ArcSight Flex Connectors for custom log source integration, including support for JSON and non-standard log formats. Lead parser creation and tuning for various log sources and security technologies. Collaborate with the SOC and threat intel teams to build detection use cases and correlation rules aligned with MITRE ATT&CK. Integrate ArcSight with SOAR platforms for automated response, leveraging Python scripting. Conduct feasibility analysis for new integrations and support parser deployment lifecycle. Review parser performance, log quality, EPS optimization, and correlation tuning. Document architecture, parser specifications, playbooks, and integration workflows. Lead implementation projects, including installation, configuration, and tuning of ArcSight ESM, Logger, and Smart Connectors. Work closely with security operations and infrastructure teams to integrate log sources and develop use cases. Perform infrastructure sizing, health checks, and system performance tuning. Develop and maintain documentation including solution design, implementation guides, and SOPs. Provide subject matter expertise during POCs and implementation support. 
The skills you bring: Bachelor's in CS/IT or similar 8+ years of experience in cybersecurity with at least 4+ years in ArcSight solution design and deployment. Familiarity with regular expressions (regex) for parsing custom logs. Experience with log onboarding, parsing, and normalization processes. Log analysis (Analyst) Understanding of cloud environments (GCP), Kubernetes, and Docker technologies Integration of different types of log sources Solid understanding of CEF (Common Event Format), ArcSight Event Schema and Field Mapping, and Device/Product Event Categorization Knowledge of Linux/Unix systems and basic scripting. Experience with ArcSight content development: rules, correlation, dashboards, reports, and familiarity with ArcSight upgrades and migration planning. Strong understanding of log management, threat detection, and SOC workflows. Knowledge of related tools and platforms such as SIEM, SOAR, firewalls, IDS/IPS, endpoint security. Scripting knowledge (e.g., Python, Shell) for automation and data parsing. Excellent communication and stakeholder management skills. Architect and implement end-to-end SIEM solutions using ArcSight 24* (ESM, SmartConnectors, Thub, Recon). Hands-on experience in leading parser development, customization, and tuning for various log sources and third-party security technologies. Integrate ArcSight with SOAR platforms for automated response, leveraging Python scripting. Skilled in performing feasibility analysis and POCs for new log source integrations and managing the complete parser deployment lifecycle. Why join Ericsson? At Ericsson, you'll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what's possible. To build solutions never seen before to some of the world’s toughest problems. You'll be challenged, but you won’t be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next. What happens once you apply? 
Click Here to find all you need to know about what our typical hiring process looks like. Encouraging a diverse and inclusive organization is core to our values at Ericsson, which is why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer. Primary country and city: India (IN) || Req ID: 770473
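The regex-based log parsing this role describes (mapping raw syslog fields before ArcSight normalization) has roughly this shape in Python. The log line and field names below are hypothetical examples, not a real connector's mapping; actual Flex Connector work maps vendor fields onto the ArcSight event schema (CEF).

```python
import re

# Hypothetical firewall syslog line; a real parser's pattern depends on the vendor format.
LOG_PATTERN = re.compile(
    r"(?P<ts>\w{3}\s+\d+\s[\d:]+)\s"      # timestamp, e.g. "Jul 21 10:15:02"
    r"(?P<host>\S+)\s"                     # reporting host
    r"(?P<proc>\w+):\s"                    # process name
    r"src=(?P<src>[\d.]+)\s"               # source IP
    r"dst=(?P<dst>[\d.]+)\s"               # destination IP
    r"action=(?P<action>\w+)"              # firewall verdict
)

def parse_line(line: str) -> dict:
    """Extract named fields from one log line; empty dict if no match."""
    m = LOG_PATTERN.match(line)
    return m.groupdict() if m else {}

line = "Jul 21 10:15:02 fw01 kernel: src=10.0.0.5 dst=8.8.8.8 action=DROP"
event = parse_line(line)
print(event)
```

In practice each named group would then be mapped to a CEF field (e.g. source address, device action) inside the connector's parser definition.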
Posted 1 week ago
0 years
0 Lacs
Noida
On-site
Education Preference- Only B.tech CS/IT Graduate from the batch of 2025/2024 Job Description- Key Responsibilities: Collect and curate large-scale botanical image datasets, including scraping from online sources and organizing local image repositories with proper structure and naming conventions. Manually annotate plant images with precision (leaf, flower, fruit, stem, etc.) using tools like CVAT, LabelImg, or Labelme, while ensuring consistency and reviewing for quality. Assist in the pre-processing of image data (e.g., resizing, filtering, normalization, and augmentation) and monitor automated pipelines for correctness and completeness. Help validate annotations and perform sanity checks through basic machine learning inference (e.g., verifying predictions from classification or segmentation models). Collaborate closely with ML teams to flag edge cases, improve annotation guidelines, and maintain high data hygiene throughout the pipeline. Requirements: Prior experience handling large-scale image datasets (ideally in the terabyte range) and a strong understanding of digital image formats and metadata handling. Knowledge of plant/botanical structures and ability to visually distinguish between different parts (e.g., leaf vs. flower), preferably with academic or project exposure. Proficiency in basic Python scripting — familiarity with os, pandas, opencv, Pillow and ability to automate repetitive data handling tasks. Understanding of image annotation workflows and tools (CVAT, LabelImg, Labelme), plus familiarity with pre-processing and augmentation techniques commonly used in computer vision. Basic exposure to machine learning concepts such as image classification, segmentation, and inference, along with the discipline to carry out repetitive annotation tasks with high accuracy. 
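The dataset-organization task above ("organizing local image repositories with proper structure and naming conventions") can be automated with the basic Python scripting the posting asks for. The class-prefix filename convention below is invented for illustration; a real project would use whatever convention the annotation team defines.

```python
from pathlib import Path
import shutil
import tempfile

def organize(src_dir: Path, dst_dir: Path) -> list[Path]:
    """Copy images into per-class folders named <class>/<class>_<index>.jpg,
    inferring the class from a hypothetical 'class-' filename prefix."""
    moved = []
    counters: dict[str, int] = {}
    for img in sorted(src_dir.glob("*.jpg")):
        cls = img.stem.split("-")[0]          # e.g. "leaf-001.jpg" -> "leaf"
        counters[cls] = counters.get(cls, 0) + 1
        target = dst_dir / cls / f"{cls}_{counters[cls]:04d}.jpg"
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(img, target)             # copy2 preserves file metadata
        moved.append(target)
    return moved

# Demo on a throwaway directory with fake image files
with tempfile.TemporaryDirectory() as tmp:
    src, dst = Path(tmp) / "raw", Path(tmp) / "curated"
    src.mkdir()
    for name in ["leaf-9.jpg", "flower-1.jpg", "leaf-2.jpg"]:
        (src / name).write_bytes(b"fake image bytes")
    result = organize(src, dst)
    print([p.relative_to(dst).as_posix() for p in result])
```

The same skeleton extends naturally to logging a manifest (e.g. with pandas) so every curated file can be traced back to its source.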
Job Types: Full-time, Internship Contract length: 3 months Pay: ₹7,000.00 per month Benefits: Paid sick time Paid time off Schedule: Day shift Fixed shift Monday to Friday Location: Noida, Uttar Pradesh (Required) Work Location: In person Application Deadline: 26/07/2025
Posted 1 week ago
1.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
While technology is the heart of our business, a global and diverse culture is the heart of our success. We love our people and we take pride in nurturing a culture built on transparency, diversity, integrity, learning and growth. If working in an environment that encourages you to innovate and excel, not just in professional but personal life, interests you, you would enjoy your career with Quantiphi! As a Google SecOps SIEM Engineer, you will be responsible for strategic delivery helping our customers securely adopt Google SecOps. You will provide best practices on secure build of Google SecOps platform, foundational cloud implementation for Google SecOps, tackle difficult problems that businesses are facing when building Google SecOps, and more. You will provide prescriptive guidance in ensuring customers receive the best of what Google SecOps can offer and you will ensure that customers have the best experience in migrating, building, modernising, and maintaining Google SecOps. Additionally, you will work closely with Product Management and Product Engineering to drive excellence of Google SecOps and features. Lead the design and implementation of Google SecOps data ingestion from diverse sources, various mechanisms for integration and normalization of logs. Extension of pre-built UDMs in Google SecOps and creation of custom parsers where required for log sources. Integration of Google SecOps SIEM with other security capabilities and tools such as SOAR, EDR, NDR, threat intelligence platform, and ticketing systems. Write custom actions, scripts and/or integrations to extend SIEM platform functionality. Monitor performance and perform timely actions to scale SIEM deployment, especially in a very high-volume security environment. Creation of SIEM assets such as: detection rules using YARA-L, dashboards, parsers etc. 
Migration of existing assets from existing customer’s SIEM/SOAR to SecOps and assisting in implementing the SIEM/SOAR phase-out, phase-in approach. Testing and deployment of newly created and migrated assets such as rules, playbooks, alerts, dashboards, etc. Design and implement solutions to handle alert fatigue encountered in SIEM correlation. Creation of custom SIEM dashboards to meet customer requirements. Guide on building or maturing cloud security programs and the implementation of tools and approaches used for improving cloud security. Debug and solve customer issues in ingestion, parsing, normalization of data etc. Develop SOAR playbooks to provide case handling and Incident response as per triage needs. Minimum Qualification Bachelor’s degree in Computer Science, Engineering or related technical field or equivalent practical experience. Google SecOps SIEM experience in the areas of responsibility for at least 1 year. Implementation experience of YARA-L 2.0 and at least one more general purpose language. Experience managing customer projects to completion, working with engineering teams, sales and partners. Experience architecting, developing, or maintaining SIEM and SOAR platforms & secure Cloud solutions. Strong verbal and written communication skills and the ability to develop high-quality technical documentation. 8+ years of experience in leading projects and delivering technical solutions related to security. Demonstrated experience on consulting or ownership of Security during high-speed environment migration for large-scale businesses with regulatory requirements. Strong verbal and written communication skills (English), and the ability to develop high-quality technical documentation and presentation materials. 
Ability to be located in Mumbai, India for at least 1 year Good To Have Experience in Prevention, Detection and response to cyber threats Google SecOps SOAR experience of 1 year in creation of playbooks, testing and validation of playbooks, integration with custom actions using bespoke scripts, or other SOAR platforms Knowledge and experience in SIEM platforms Knowledge in GCP, including Google Cloud Professional Certifications (Security, Architect) and other industry certifications (CISSP, CCSP, etc.) Experience in security governance, security risk management, security operations, security architecture, and/or cyber incident response programs for cloud. Experience working with cloud architecture across a broad set of enterprise use cases and creating end-to-end solution architectures. Excellent organizational, problem-solving, articulating and influencing skills. Experience with industry compliance frameworks (e.g., PCI-DSS, ISO 27017/27018, GDPR, SOC). If you like wild growth and working with happy, enthusiastic over-achievers, you'll enjoy your career with us!
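One simple tactic behind the "handle alert fatigue" responsibility above is duplicate-alert suppression, sketched here in plain Python. This is an illustrative toy, not Google SecOps behavior: the alert tuples and the 30-minute window are invented, and real SIEM deduplication also aggregates counts per key.

```python
from datetime import datetime, timedelta

def suppress_duplicates(alerts, window_minutes=30):
    """Drop repeat alerts with the same (rule, host) key inside a
    suppression window. A steady stream of repeats keeps extending the
    window (throttling), so noise stays quiet until it stops."""
    last_seen = {}
    kept = []
    window = timedelta(minutes=window_minutes)
    for ts, rule, host in sorted(alerts):
        key = (rule, host)
        if key not in last_seen or ts - last_seen[key] >= window:
            kept.append((ts, rule, host))
        last_seen[key] = ts
    return kept

t0 = datetime(2024, 1, 1, 9, 0)
alerts = [
    (t0, "brute_force", "web01"),
    (t0 + timedelta(minutes=5), "brute_force", "web01"),   # suppressed repeat
    (t0 + timedelta(minutes=40), "brute_force", "web01"),  # 35 min since last: kept
    (t0 + timedelta(minutes=10), "brute_force", "db01"),   # different host: kept
]
print(len(suppress_duplicates(alerts)))
```

In a production pipeline the kept alerts would feed case creation in the SOAR layer, with the suppressed count attached for context.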
Posted 1 week ago
1.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
GOOGLE SecOps SIEM Engineer (INDIA, Mumbai) Job Title SecOps SIEM Engineer, Google Cloud Professional Services Role Description (About The Job) As a Google SecOps SIEM Engineer, you will be responsible for strategic delivery helping our customers securely adopt Google SecOps. You will provide best practices on secure build of Google SecOps platform, foundational cloud implementation for Google SecOps, tackle difficult problems that businesses are facing when building Google SecOps, and more. You will provide prescriptive guidance in ensuring customers receive the best of what Google SecOps can offer and you will ensure that customers have the best experience in migrating, building, modernising, and maintaining Google SecOps. Additionally, you will work closely with Product Management and Product Engineering to drive excellence of Google SecOps and features. Responsibilities Lead the design and implementation of Google SecOps data ingestion from diverse sources, various mechanisms for integration and normalization of logs. Extension of pre-built UDMs in Google SecOps and creation of custom parsers where required for log sources. Integration of Google SecOps SIEM with other security capabilities and tools such as SOAR, EDR, NDR, threat intelligence platform, and ticketing systems. Write custom actions, scripts and/or integrations to extend SIEM platform functionality. Monitor performance and perform timely actions to scale SIEM deployment, especially in a very high-volume security environment. Creation of SIEM assets such as: detection rules using YARA-L, dashboards, parsers etc. Migration of existing assets from existing customers' SIEM/SOAR to SecOps and assisting in implementing the SIEM/SOAR phase-out, phase-in approach. Testing and deployment of newly created and migrated assets such as rules, playbooks, alerts, dashboards, etc. Design and implement solutions to handle alert fatigue encountered in SIEM correlation. 
Creation of custom SIEM dashboards to meet customer requirements. Guide on building or maturing cloud security programs and the implementation of tools and approaches used for improving cloud security. Debug and solve customer issues in ingestion, parsing, normalization of data, etc. Develop SOAR playbooks to provide case handling and Incident response as per triage needs. Minimum Qualifications (MQs) Bachelor's degree in Computer Science, Engineering or related technical field or equivalent practical experience. Google SecOps SIEM experience in the areas of responsibility for at least 1 year. Implementation experience of YARA-L 2.0 and at least one more general purpose language. Experience managing customer projects to completion, working with engineering teams, sales and partners. Experience architecting, developing, or maintaining SIEM and SOAR platforms & secure Cloud solutions. Strong verbal and written communication skills and the ability to develop high-quality technical documentation. 8+ years of experience in leading projects and delivering technical solutions related to security. Demonstrated experience on consulting or ownership of Security during high-speed environment migration for large-scale businesses with regulatory requirements. Strong verbal and written communication skills (English), and the ability to develop high-quality technical documentation and presentation materials. Ability to be located in Mumbai, India for at least 1 year
Posted 1 week ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Knowledge: Looking for 10 years of experience. - Business Intelligence Concepts: Data warehousing, ETL processes, star/snowflake schemas. - Data Modeling: Basic understanding of relational databases, normalization, dimensional modeling. Skills: - Experience in End-to-End BI Solutions: From data ingestion to report publication, including visualizations. - DAX (Data Analysis Expressions): Syntax, functions, measures, calculated columns/tables, etc. - SQL: Writing queries, joins, subqueries. Python is a plus. - Data Cleaning & Transformation: Use Power Query and SQL for shaping data. - Interactive Dashboards: Create slicers, bookmarks, drill-downs, and tooltips. Abilities: - Analytical Thinking: Break down complex data problems and extract insights. - Attention to Detail: Ensure data accuracy and dashboard clarity. - Problem Solving: Resolve data quality, model design, or performance issues. - Adaptability: Quickly learn new tools, data sources, or business domains. - Project Management: Handle multiple BI projects and deadlines simultaneously. Need someone with Power BI (PBI) tool experience and SQL experience. Regards, Radhika Nadukula – Talent Acquisition Manager Email: rnadukula@wallstreetcs.com Mobile: +91 8179584659 Wall Street Consulting Services Website: www.wallstreetcs.com
Posted 1 week ago
5.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Project Role : AI / ML Engineer Project Role Description : Develops applications and systems that utilize AI tools, Cloud AI services, with proper cloud or on-prem application pipeline with production ready quality. Be able to apply GenAI models as part of the solution. Could also include but not limited to deep learning, neural networks, chatbots, image processing. Must have skills : Large Language Models Good to have skills : NA Minimum 5 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As an AI/ML Engineer, you will develop applications and systems utilizing AI tools, Cloud AI services, and GenAI models. Your role involves implementing deep learning, neural networks, chatbots, and image processing in production-ready quality solutions. Roles & Responsibilities: - Expected to be an SME, collaborate and manage the team to perform. - Responsible for team decisions. - Engage with multiple teams and contribute to key decisions. - Provide solutions to problems for their immediate team and across multiple teams. - Lead the implementation of large language models in AI applications. - Research and apply cutting-edge AI techniques like deep learning and neural networks. - Design and develop AI solutions using cloud AI services. - Collaborate with cross-functional teams to integrate AI technologies into various applications. Professional & Technical Skills: - Must To Have Skills: Proficiency in Large Language Models. - Strong understanding of statistical analysis and machine learning algorithms. - Experience with data visualization tools such as Tableau or Power BI. - Hands-on experience implementing various machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering algorithms. - Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity. 
Additional Information: - The candidate should have a minimum of 5 years of experience in Large Language Models. - This position is based at our Bengaluru office. - A 15 years full time education is required.
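The data munging skills this posting lists (cleaning, transformation, normalization) can be sketched with a small NumPy example. The array values are invented for illustration; the technique shown is median imputation followed by per-column z-score normalization.

```python
import numpy as np

def clean_and_normalize(x: np.ndarray) -> np.ndarray:
    """Impute missing values with the column median, then z-score
    normalize each column (mean 0, unit variance)."""
    x = x.astype(float).copy()
    for col in range(x.shape[1]):
        v = x[:, col]
        median = np.nanmedian(v)
        v[np.isnan(v)] = median                  # cleaning: impute missing values
        std = v.std()
        x[:, col] = (v - v.mean()) / (std if std else 1.0)  # normalization
    return x

raw = np.array([[1.0, 200.0],
                [2.0, np.nan],
                [3.0, 400.0]])
out = clean_and_normalize(raw)
print(out.round(3))
```

The same two steps are what scikit-learn's imputer and scaler perform, so this sketch is mainly useful for understanding what those components do under the hood.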
Posted 1 week ago
5.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Project Role : AI / ML Engineer Project Role Description : Develops applications and systems that utilize AI tools, Cloud AI services, with proper cloud or on-prem application pipeline with production ready quality. Be able to apply GenAI models as part of the solution. Could also include but not limited to deep learning, neural networks, chatbots, image processing. Must have skills : Large Language Models Good to have skills : NA Minimum 5 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As an AI/ML Engineer, you will develop applications and systems utilizing AI tools, Cloud AI services, and GenAI models. Your role involves implementing deep learning, neural networks, chatbots, and image processing in production-ready quality solutions. Roles & Responsibilities: - Expected to perform independently and become an SME. - Required active participation/contribution in team discussions. - Contribute to providing solutions to work-related problems. - Develop applications and systems using AI tools and Cloud AI services. - Implement deep learning and neural networks in solutions. - Create chatbots and work on image processing tasks. - Collaborate with team members to provide innovative solutions. - Stay updated with the latest AI/ML trends and technologies. Professional & Technical Skills: - Must To Have Skills: Proficiency in Large Language Models. - Strong understanding of statistical analysis and machine learning algorithms. - Experience with data visualization tools such as Tableau or Power BI. - Hands-on experience implementing various machine learning algorithms like linear regression, logistic regression, decision trees, and clustering algorithms. - Solid grasp of data munging techniques including data cleaning, transformation, and normalization. Additional Information: - The candidate should have a minimum of 3 years of experience in Large Language Models. - This position is based at our Bengaluru office. 
- A 15 years full-time education is required.
Posted 1 week ago
0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
Job Description The ideal candidate must possess strong communication skills, with an ability to listen and comprehend information and share it with all the key stakeholders, highlighting opportunities for improvement and concerns, if any. He/she must be able to work collaboratively with teams to execute tasks within defined timeframes while maintaining high-quality standards and superior service levels. The ability to take proactive actions and willingness to take up responsibility beyond the assigned work area is a plus. Roles and responsibilities: Data enrichment/gap fill, standardization, normalization, and categorization of online and offline product data via research through different sources like internet, specific websites, database, etc. Data quality check and correction Data profiling and reporting (basic) Email communication with the client on request acknowledgment, project status and response on queries Help customers in enhancing their product data quality (electrical, mechanical, electronics) from the technical specification and description perspective Provide technical consulting to the customer category managers around the industry best practices of product data enhancement
Posted 1 week ago
0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
Job Description The candidate will also be working on consultancy projects on redesigning e-commerce customer’s website taxonomy and navigation. The ideal candidate must possess strong communication skills, with an ability to listen and comprehend information and share it with all the key stakeholders, highlighting opportunities for improvement and concerns, if any. He/she must be able to work collaboratively with teams to execute tasks within defined timeframes while maintaining high-quality standards and superior service levels. The ability to take proactive actions and willingness to take up responsibility beyond the assigned work area is a plus. Apprentice_Analyst Roles and responsibilities: Data enrichment/gap fill, standardization, normalization, and categorization of online and offline product data via research through different sources like internet, specific websites, database, etc. Data quality check and correction Data profiling and reporting (basic) Email communication with the client on request acknowledgment, project status and response on queries Help customers in enhancing their product data quality (electrical, mechanical, electronics) from the technical specification and description perspective Provide technical consulting to the customer category managers around the industry best practices of product data enhancement Technical and Functional Skills: Bachelor’s Degree in Engineering from Electrical, Mechanical OR Electronics stream Excellent technical knowledge of engineering products (Pumps, motors, HVAC, Plumbing, etc.) and technical specifications Intermediate knowledge of MS Office/Internet.
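The standardization and normalization work described above (cleaning supplier-provided attribute values into a consistent format) often comes down to small rule-based scripts. The sketch below normalizes hypothetical voltage strings to a single "<number> V" form; the raw values and the rule itself are invented examples of the general technique.

```python
import re

# Hypothetical raw supplier values for a "voltage" attribute
RAW = ["230V", "230 v", "230 Volts", "0.23 kV"]

def normalize_voltage(value: str) -> str:
    """Standardize voltage strings to the form '<number> V'."""
    m = re.match(r"\s*([\d.]+)\s*(k?)\s*v(olts?)?\s*$", value, re.IGNORECASE)
    if not m:
        return value  # leave unrecognized values for manual review
    # Convert kilovolts to volts where the 'k' prefix is present
    number = float(m.group(1)) * (1000 if m.group(2).lower() == "k" else 1)
    # %g rendering drops a trailing .0 for whole numbers
    return f"{number:g} V"

print([normalize_voltage(v) for v in RAW])
# -> ['230 V', '230 V', '230 V', '230 V']
```

A real normalization pass would keep a per-attribute rule table like this (one function or pattern per unit family) and route anything unmatched to an analyst queue, which mirrors the manual review loop this role describes.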
Posted 1 week ago
2.0 - 3.0 years
5 - 5 Lacs
Thiruvananthapuram
Work from Office
Overview: We are looking for a skilled SIEM Administrator to manage and maintain Security Information and Event Management (SIEM) solutions such as Innspark, LogRhythm, or similar tools. This role is critical to ensuring effective security monitoring, log management, and event analysis across our systems. Key Responsibilities: Design, deploy, and manage SIEM tools (e.g., Innspark, LogRhythm, Splunk). Develop and maintain correlation rules, alerts, dashboards, and reports. Integrate logs from servers, network devices, cloud services, and applications. Troubleshoot log collection, parsing, normalization, and event correlation issues. Work with security teams to improve detection and response capabilities. Ensure SIEM configurations align with compliance and audit requirements. Perform routine SIEM maintenance (e.g., patching, upgrades, health checks). Create and maintain documentation for implementation, architecture, and operations. Participate in evaluating and testing new SIEM tools and features. Support incident response by providing relevant event data and insights. Required Qualifications: Bachelor's degree in Computer Science, Information Security, or related field. 3+ years of hands-on experience with SIEM tools. Experience with Innspark, LogRhythm, or other SIEM platforms (e.g., Splunk, QRadar, ArcSight). Strong knowledge of log management and event normalization. Good understanding of cybersecurity concepts and incident response. Familiarity with Windows/Linux OS and network protocols. Scripting knowledge (e.g., Python, PowerShell) is a plus. Strong troubleshooting, analytical, and communication skills. Industry certifications (CEH, Security+, SSCP, or vendor-specific) are a plus. Key Skills: SIEM Tools (Innspark, LogRhythm, Splunk), Troubleshooting, Log Management & Analysis, Scripting (optional), Security Monitoring. Required Skills: SIEM, Splunk, Troubleshooting
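The correlation-rule work this role involves follows a common shape: count related events per entity inside a sliding time window and alert past a threshold. The Python sketch below illustrates that shape with invented event tuples and thresholds; in a SIEM platform the same logic would be expressed in the product's rule language rather than code.

```python
from collections import defaultdict, deque
from datetime import datetime, timedelta

def detect_bruteforce(events, threshold=5, window_minutes=10):
    """Flag source IPs with >= threshold failed logins inside a sliding window."""
    recent = defaultdict(deque)   # src_ip -> failure timestamps still in window
    window = timedelta(minutes=window_minutes)
    offenders = set()
    for ts, src_ip, outcome in sorted(events):
        if outcome != "FAILED_LOGIN":
            continue
        q = recent[src_ip]
        q.append(ts)
        # Evict timestamps that have aged out of the window
        while q and ts - q[0] > window:
            q.popleft()
        if len(q) >= threshold:
            offenders.add(src_ip)
    return offenders

t0 = datetime(2024, 1, 1, 12, 0)
# Six failures in six minutes from one source, one stray failure from another
events = [(t0 + timedelta(minutes=i), "10.0.0.9", "FAILED_LOGIN") for i in range(6)]
events.append((t0, "10.0.0.7", "FAILED_LOGIN"))
print(detect_bruteforce(events))
# -> {'10.0.0.9'}
```

Tuning the threshold and window against real traffic is exactly the "develop and maintain correlation rules" cycle the responsibilities describe.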
Posted 1 week ago
0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
Job Description: The ideal candidate must possess strong communication skills, with an ability to listen to and comprehend information and share it with all key stakeholders, highlighting opportunities for improvement and any concerns. He/she must be able to work collaboratively with teams to execute tasks within defined timeframes while maintaining high-quality standards and superior service levels. The ability to take proactive action and a willingness to take up responsibility beyond the assigned work area is a plus.

Roles and Responsibilities:
- Data enrichment/gap fill, standardization, normalization, and categorization of online and offline product data via research through different sources such as the internet, specific websites, databases, etc.
- Data quality checks and correction
- Data profiling and reporting (basic)
- Email communication with the client on request acknowledgment, project status, and responses to queries
- Help customers enhance their product data quality (electrical, mechanical, electronics) from the technical specification and description perspective
- Provide technical consulting to the customer's category managers on industry best practices for product data enhancement
Posted 1 week ago
0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
Job Description: The candidate will also work on consultancy projects redesigning e-commerce customers' website taxonomy and navigation. The ideal candidate must possess strong communication skills, with an ability to listen to and comprehend information and share it with all key stakeholders, highlighting opportunities for improvement and any concerns. He/she must be able to work collaboratively with teams to execute tasks within defined timeframes while maintaining high-quality standards and superior service levels. The ability to take proactive action and a willingness to take up responsibility beyond the assigned work area is a plus.

Apprentice Analyst - Roles and Responsibilities:
- Data enrichment/gap fill, standardization, normalization, and categorization of online and offline product data via research through different sources such as the internet, specific websites, databases, etc.
- Data quality checks and correction
- Data profiling and reporting (basic)
- Email communication with the client on request acknowledgment, project status, and responses to queries
- Help customers enhance their product data quality (electrical, mechanical, electronics) from the technical specification and description perspective
- Provide technical consulting to the customer's category managers on industry best practices for product data enhancement

Technical and Functional Skills:
- Bachelor's Degree in Engineering (Electrical, Mechanical, or Electronics stream)
- Excellent technical knowledge of engineering products (pumps, motors, HVAC, plumbing, etc.) and technical specifications
- Intermediate knowledge of MS Office/Internet
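The standardization/normalization work described above can be sketched in Python. The voltage attribute, unit table, and conversion factors below are hypothetical examples chosen purely for illustration:

```python
import re

# Hypothetical unit table for normalizing a voltage spec to base units (volts).
# Attribute choice and factors are illustrative assumptions, not a real schema.
UNIT_FACTORS = {"v": 1.0, "kv": 1000.0, "mv": 0.001}

def normalize_voltage(raw: str):
    """Standardize free-text voltage values like '230 V', '0.4kV', '230v'."""
    m = re.match(r"^\s*([\d.]+)\s*(k?v|mv)\s*$", raw.strip(), re.IGNORECASE)
    if not m:
        return None  # flag for manual data-quality review
    value, unit = float(m.group(1)), m.group(2).lower()
    return value * UNIT_FACTORS[unit]

for raw in ["230 V", "0.4kV", "230v", "n/a"]:
    print(raw, "->", normalize_voltage(raw))
```

The same pattern (regex extraction plus a lookup table) extends to other spec attributes such as power ratings or pipe diameters, with unparseable values routed to the manual quality-check queue.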
Posted 1 week ago
10.0 years
5 - 10 Lacs
Hyderābād
On-site
Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data, and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits, and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities:

Leadership & Strategy
- Lead and mentor cross-functional teams focused on AI/ML incubation and engineering enablement
- Define and drive the strategic roadmap for innovation initiatives and technical enablement
- Foster a culture of experimentation, rapid iteration, and continuous learning

Incubation & Innovation
- Identify, evaluate, and incubate high-impact AI/ML ideas aligned with business goals
- Oversee the development of proof-of-concepts (PoCs) and prototypes to validate new technologies and approaches
- Collaborate with product, research, and business stakeholders to prioritize and refine ideas

Engineering Enablement
- Build and scale internal platforms, tools, and frameworks to accelerate AI/ML development
- Establish best practices, coding standards, and reusable components for AI/ML engineering
- Provide technical guidance and support to engineering teams adopting AI/ML technologies

Collaboration & Communication
- Act as a bridge between research, engineering, and product teams to ensure alignment and knowledge transfer
- Present technical concepts and project outcomes to executive leadership and stakeholders
- Promote knowledge sharing through documentation, workshops, and internal communities of practice
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary, or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- 10+ years of experience in software engineering, with 3+ years in management
- Experience with React, JavaScript, TypeScript, Node.js, React Native, Java, Python, and the Spring Boot framework
- Experience in data management with RDBMS and NoSQL
- Experience building and managing CI/CD pipelines with Jenkins, GitHub Actions, etc.
- Proven experience in AI/ML technologies, including model development, deployment, and MLOps
- Solid understanding of cloud platforms (e.g., Azure, GCP) with hands-on experience on AWS and Terraform
- Proven excellent communication, collaboration, and organizational skills

Preferred Qualifications:
- Experience with LLMs, generative AI, or reinforcement learning
- Experience with agile methodologies and startup-like environments
- Familiarity with open-source AI/ML tools and frameworks (e.g., TensorFlow, PyTorch, MLflow)
- Knowledge of machine learning algorithms: supervised, unsupervised, reinforcement learning
- Knowledge of model evaluation: precision, recall, F1-score, ROC-AUC
- Knowledge of data preprocessing: feature engineering, normalization, handling missing data
- Knowledge of responsible AI: fairness, explainability (XAI), bias mitigation
- Knowledge of AutoML: hyperparameter tuning, model selection automation
- Knowledge of federated learning and edge AI for privacy-preserving and low-latency applications
- Deep learning: CNNs, RNNs, Transformers, GANs
- Background in innovation labs, R&D, or technical incubation environments

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
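The model-evaluation metrics named in the preferred qualifications (precision, recall, F1-score) reduce to simple counts over predictions; a minimal, dependency-free sketch for the binary case:

```python
def classification_metrics(y_true, y_pred):
    """Compute precision, recall, and F1 for a binary classifier (labels 0/1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0   # of predicted positives, how many were right
    recall = tp / (tp + fn) if tp + fn else 0.0      # of actual positives, how many were found
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# 4 true positives, 1 false positive, 1 false negative
y_true = [1, 1, 1, 1, 1, 0, 0, 0]
y_pred = [1, 1, 1, 1, 0, 1, 0, 0]
p, r, f1 = classification_metrics(y_true, y_pred)
print(round(p, 3), round(r, 3), round(f1, 3))
```

In practice these come from a library such as scikit-learn, but the counting above is exactly what those library calls compute.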
Posted 1 week ago
0 years
0 Lacs
Telangana
On-site
Responsibilities:
- Design, develop, and maintain SQL databases and database objects such as tables, views, indexes, stored procedures, and functions.
- Write complex SQL queries to extract, manipulate, and analyze data.
- Optimize database performance by analyzing query execution plans and making necessary adjustments.
- Ensure data integrity and security by implementing appropriate measures and protocols.
- Collaborate with software developers, analysts, and other stakeholders to understand data requirements and provide solutions.
- Perform data migrations and transformations as needed.
- Monitor database performance and troubleshoot issues as they arise.
- Create and maintain documentation related to database design, configuration, and processes.
- Participate in code reviews and provide feedback to team members.
- Stay updated with the latest developments in SQL and database technologies.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Proven experience as a SQL Developer, Database Administrator, or similar role.
- Strong proficiency in SQL and experience with database management systems such as MySQL, SQL Server, Oracle, or PostgreSQL.
- Familiarity with data warehousing concepts and tools.
- Experience with ETL (Extract, Transform, Load) processes and tools.
- Knowledge of programming languages such as Python, Java, or C# is a plus.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills.

Skills:
- Advanced SQL querying and database design.
- Performance tuning and optimization.
- Data modeling and normalization.
- Understanding of database security and backup/recovery processes.
- Ability to work independently and as part of a team.
- Analytical mindset with the ability to interpret complex data sets.
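The database-design and normalization skills above can be illustrated with Python's built-in sqlite3 module. The customers/orders schema below is a hypothetical example of a normalized two-table design with an index on the foreign key:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Two normalized tables: customer data is stored once and referenced from
# orders by foreign key, rather than repeating customer columns per order.
cur.executescript("""
    CREATE TABLE customers (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL
    );
    CREATE TABLE orders (
        id          INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customers(id),
        amount      REAL NOT NULL
    );
    CREATE INDEX idx_orders_customer ON orders(customer_id);
""")
cur.executemany("INSERT INTO customers VALUES (?, ?)", [(1, "Asha"), (2, "Ravi")])
cur.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(1, 1, 250.0), (2, 1, 100.0), (3, 2, 75.0)])

# Aggregate per customer via a join; the index supports the customer lookup.
cur.execute("""
    SELECT c.name, SUM(o.amount)
    FROM customers c JOIN orders o ON o.customer_id = c.id
    GROUP BY c.id ORDER BY c.id
""")
rows = cur.fetchall()
print(rows)
```

On server databases such as MySQL or PostgreSQL, `EXPLAIN` on the join above is the standard way to confirm that the planner actually uses the index, which is the execution-plan analysis the posting refers to.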
Posted 1 week ago