Job Title : Network Security and Infrastructure Engineer
Location : Remote

Job Summary
We are seeking a skilled and detail-oriented Network Security and Infrastructure Engineer to join our IT security team. The ideal candidate will be responsible for designing, implementing, and managing secure network infrastructure solutions. Proficiency with AlgoSec for firewall and security policy management, and experience using HP Lighthouse for infrastructure monitoring and reporting, are essential.

Key Responsibilities
- Design, implement, and manage secure network infrastructure to support business operations.
- Use AlgoSec to manage firewall policies, analyze rule changes, and ensure regulatory compliance.
- Perform security policy audits and recommend rule optimization and risk mitigation strategies.
- Leverage HP Lighthouse for system and network performance monitoring, capacity planning, and incident reporting.
- Maintain detailed network diagrams and documentation.
- Lead vulnerability assessments and implement remediation strategies.
- Work closely with DevOps, compliance, and application teams to secure cloud and on-premises environments.
- Evaluate and deploy security products and technologies as needed.
- Provide support during security incidents, ensuring proper documentation and post-incident analysis.

Required Skills & Qualifications
- Bachelor's degree in Computer Science, Information Security, or a related field (or equivalent experience).
- 5+ years of experience in network security, infrastructure management, or related roles.
- Hands-on experience with the AlgoSec Security Management Suite.
- Experience with or familiarity with HP Lighthouse (or similar HP tools such as HP Operations Manager or HP IMC).
- Strong knowledge of firewalls (Cisco, Palo Alto, Fortinet), IDS/IPS, VPNs, and network segmentation.
- Understanding of cloud platforms (AWS, Azure, GCP) and hybrid network security.
- Familiarity with regulatory compliance frameworks (e.g., PCI-DSS, HIPAA, ISO 27001).
- Strong analytical and troubleshooting skills.
- Excellent communication and documentation abilities.

Preferred Qualifications
- Certifications : CCNP Security, CISSP, CISM, or AlgoSec Certified Professional.
- Experience with SIEM tools (e.g., Splunk, QRadar).
- Scripting knowledge in Python.

(ref:hirist.tech)
Job Description : Data Engineer
Location : Bengaluru
Experience : 3 - 5 years

About The Role
We are seeking a highly skilled Lead Data Engineer to join our dynamic team. The ideal candidate will have a strong background in data engineering technologies, especially SQL, Databricks, and Azure services, along with client-facing experience.

Key Responsibilities
- Hands-on experience with SQL, Databricks, PySpark, Python, Azure Cloud, and Power BI.
- Design, develop, and optimize PySpark workloads.
- Write scalable, modular, and reusable code in SQL, Python, and PySpark.
- Communicate clearly with client stakeholders and collaborate with cross-functional teams.
- Gather requirements and translate business requirements into technical specifications.

Stakeholder Management
- Engage with stakeholders to gather and analyze requirements.
- Provide regular updates and reports on project status and progress.
- Ensure alignment of data solutions with business objectives.

Shift Requirements
- Ability to work during US shift hours to coordinate with global teams.

Qualifications
- Proficiency in SQL, Databricks, PySpark, Python, Azure Cloud, and Power BI.
- Strong communication skills, both written and verbal.
- Proven ability to work effectively with global stakeholders.
- Strong problem-solving skills and attention to detail.
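This posting asks for scalable, modular, reusable transformation code. As a minimal, dependency-free illustration of that pattern (the function names and sample data are hypothetical, not from the posting), each transformation is a small, independently testable function, and the pipeline simply chains them. The same shape carries to PySpark, where each step becomes a DataFrame transformation applied via `.transform(...)`:

```python
# Sketch of modular, reusable transformations (hypothetical names and data).
# In PySpark, each step would become a DataFrame-to-DataFrame function
# chained with df.transform(step).

def normalize_name(record: dict) -> dict:
    """Trim whitespace and title-case the customer name."""
    out = dict(record)
    out["name"] = record["name"].strip().title()
    return out

def add_revenue(record: dict) -> dict:
    """Derive revenue from quantity and unit price."""
    out = dict(record)
    out["revenue"] = record["qty"] * record["unit_price"]
    return out

def pipeline(records, *steps):
    """Apply each transformation step to every record, in order."""
    for step in steps:
        records = [step(r) for r in records]
    return records

rows = [{"name": "  alice ", "qty": 3, "unit_price": 10.0}]
result = pipeline(rows, normalize_name, add_revenue)
```

Keeping each step pure and side-effect free is what makes the code reusable across jobs and easy to unit test, which is the intent behind the "modular and reusable" requirement.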
About Akira
AKIRA is a bootstrapped Data & Analytics services company driven by technology excellence, consultative practices, and a focus on building technology-agnostic customer solutions that help organizations realize business value through data. We have grown more than threefold over the past two years across revenue, headcount, and the number of clients we serve since we sharpened our focus on growth. AKIRA is a growth engine where we give all our team members great opportunities to extract as much learning and satisfaction as possible from the work they deliver. We aim to double in size over the coming 6-9 months and to be a great place to work.

Job Title : AI/ML Engineer (Data Scientist)
Location : Bangalore

Job Description
We are an innovative technology company committed to driving business transformation through AI and Machine Learning solutions. We specialize in solving complex business challenges by leveraging cutting-edge AI/ML technologies, delivering scalable, reliable, and maintainable solutions for our clients. We are looking for a skilled AI/ML Engineer to join our dynamic team and play a crucial role in building production-grade AI/ML pipelines and solutions.

Responsibilities
As an AI/ML Engineer, you will be responsible for designing, developing, and deploying AI and ML models to address complex business challenges. You will collaborate with cross-functional teams, including Data Engineers and Data Scientists, to build scalable, production-grade AI/ML pipelines.
Your work will ensure the deployed models are reliable, maintainable, and optimized for performance in real-world environments:
- Design, Develop, and Deploy AI/ML Models : Address complex business challenges through AI and ML solutions, ensuring production-grade deployments.
- Build Scalable AI/ML Pipelines : Work on data preprocessing, feature engineering, model training, validation, deployment, and ongoing monitoring of AI/ML models.
- Collaborate with Cross-functional Teams : Work with Data Engineers, Data Scientists, and other stakeholders to implement end-to-end AI/ML solutions.
- Optimize AI/ML Models in Production : Ensure that deployed models are reliable, scalable, and continuously optimized for high performance in production environments.
- Develop APIs and Microservices : Build APIs or microservices to integrate AI/ML models into business solutions, making them usable across different platforms.
- Implement MLOps Practices : Optimize workflows using CI/CD pipelines, automated model retraining, version control, and monitoring.
- Apply State-of-the-art AI/ML Techniques : Utilize deep learning, NLP, computer vision, and time-series analysis to develop innovative solutions.
- Ensure Ethical AI Practices : Adhere to data privacy, security, and ethical AI standards throughout the entire lifecycle of AI/ML models.
- Stay Up-to-date with Industry Trends : Continuously explore and implement the latest AI/ML techniques, tools, and best practices.
- Assist in Documentation & Best Practices : Help create and maintain standards, workflows, and documentation for data technology and AI/ML projects.

Required Skills & Qualifications
- Strong Expertise in AI/ML Frameworks : Proficiency with TensorFlow, PyTorch, or Scikit-learn for building and deploying models.
- Proficiency in Programming Languages : Strong command of Python or R, including libraries/tools for data science and machine learning.
- Experience with Cloud-based AI/ML Services : Hands-on experience with Microsoft Azure and Databricks.
- MLOps Experience : Expertise in implementing automated pipelines, model versioning, monitoring, and lifecycle management.
- Domain-Specific AI Expertise : Deep knowledge of NLP, computer vision, predictive analytics, or other specialized AI techniques.
- Excellent Problem-Solving Skills : Ability to identify and address technical challenges with innovative and efficient solutions.
- Effective Communication Skills : Strong ability to convey complex technical concepts to both technical and non-technical audiences.
- Attention to Detail : Ability to handle and execute highly specialized projects with a focus on quality.

Desired Skills & Qualifications
- Familiarity with Distributed Computing : Knowledge of Apache Spark and scalable training techniques for large datasets.
- Experience with GenAI : Hands-on experience in developing enterprise-level GenAI applications and models for business use cases.
- Knowledge of Data Privacy & Security Standards : Understanding of ethical AI principles and ensuring compliance with relevant privacy and security standards.
- Interpersonal Skills : Excellent collaboration and relationship-building skills with colleagues, stakeholders, and external partners.
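The MLOps requirement above mentions model versioning and lifecycle management. As a dependency-free sketch of the idea (the function names and layout are hypothetical; real stacks would use a registry such as MLflow or the Azure ML model registry), a minimal file-based registry might store each model artifact with an auto-incremented version, a content hash, and its evaluation metrics:

```python
import hashlib
import json
from pathlib import Path

# Minimal file-based model-version registry (illustrative sketch only).
# Each version gets a .bin artifact and a .json metadata sidecar.

def register_model(registry_dir: str, name: str, artifact: bytes, metrics: dict) -> dict:
    """Store a model artifact under the next version number with metadata."""
    root = Path(registry_dir) / name
    root.mkdir(parents=True, exist_ok=True)
    existing = [int(p.stem.split("-")[1]) for p in root.glob("v-*.json")]
    version = max(existing, default=0) + 1
    entry = {
        "name": name,
        "version": version,
        "sha256": hashlib.sha256(artifact).hexdigest(),  # integrity check
        "metrics": metrics,
    }
    (root / f"v-{version}.bin").write_bytes(artifact)
    (root / f"v-{version}.json").write_text(json.dumps(entry))
    return entry

def latest_version(registry_dir: str, name: str) -> dict:
    """Return metadata for the newest registered version of a model."""
    root = Path(registry_dir) / name
    versions = sorted(int(p.stem.split("-")[1]) for p in root.glob("v-*.json"))
    return json.loads((root / f"v-{versions[-1]}.json").read_text())
```

The hash makes a deployed artifact traceable back to the exact registered version, which is the core of the "monitoring and lifecycle management" asked for here.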
Job Title : Sr Analyst Power BI
Experience : 3+ Years
Location : Bengaluru

Job Description
We are seeking a skilled Power BI Developer with 3+ years of experience to join our team. The ideal candidate will develop and manage BI solutions, create insightful data visualizations, and enable business stakeholders to make data-driven decisions.

Key Responsibilities
- Develop, publish, and schedule Power BI reports and dashboards.
- Collaborate with business stakeholders to gather requirements and deliver data insights.
- Optimize existing Power BI reports and troubleshoot performance issues.
- Integrate Power BI with various data sources.

Key Requirements
- 3+ years of experience with Power BI and related tools.
- Proficiency in DAX, Power Query, and SQL.
- Strong understanding of data modeling and report optimization.
- Experience working with large datasets and integrating multiple data sources.
- Excellent analytical and communication skills.

About Akira Insights
Akira Insights is a boutique pure-play Data & Analytics consulting firm providing services to multinational corporations, large enterprises, and Fortune 500s across various sectors, helping businesses unlock valuable insights from their data.

What We Offer
- Competitive salary and performance-based incentives
- Opportunities for professional growth and development
Akira Insights is a boutique pure-play Data & Analytics consulting firm located in Bangalore, offering services to multinational corporations, large enterprises, and Fortune 500s across various sectors. At Akira Insights, we provide opportunities for professional growth and development in a dynamic and supportive work environment, along with a competitive salary and performance-based incentives.

We are currently looking for a skilled Microsoft Power Platform Developer to join our team. The ideal candidate should have a strong background in data analytics, application development, and business process automation, with a focus on utilizing Power BI (60%), PowerApps (30%), and Power Automate (10%) to develop comprehensive business solutions. In this role, you will collaborate closely with stakeholders to understand their business needs and translate them into insightful and interactive reports, applications, and workflows.

**Responsibilities:**

**Power BI**
- Develop, design, and maintain Power BI reports and dashboards.
- Conduct data analysis to ensure data quality and integrity.
- Optimize Power BI solutions for performance and scalability.
- Create and maintain documentation, including technical specifications and user guides.
- Integrate various data sources (e.g., SQL, Azure, Snowflake) into Power BI.
- Configure and manage data gateways to ensure seamless data connectivity and refresh.

**PowerApps**
- Develop custom applications using PowerApps to meet business requirements.
- Integrate PowerApps with various data sources and services.
- Provide training and support to end-users on PowerApps tools and applications.

**Power Automate**
- Design and implement workflows using Power Automate to automate business processes.
- Monitor and troubleshoot Power Automate workflows to ensure reliability and efficiency.

**Requirements:**
- Any Bachelor's degree.
- 5-7 years of proven experience as a Power Platform Developer or in a similar role.
- Strong proficiency in Power BI, including DAX, Power Query, and the M language.
- Experience with PowerApps and Power Automate.
- Proficiency in SQL and relational databases.
- Familiarity with data warehousing concepts and ETL processes.
- Experience with various data sources and integrating them into Power BI, such as SQL, Azure, and Snowflake.
- Excellent analytical and problem-solving skills.
- Strong communication and interpersonal skills.
- Ability to work independently and as part of a team.

**Preferred Qualifications:**
- Microsoft Power Platform certification (e.g., Microsoft Certified: Power Platform App Maker Associate).
- Knowledge of cloud platforms (e.g., Azure, AWS) and data services.
- Experience working in consulting services is a plus.
Job Summary
We are seeking a skilled and experienced SSIS (SQL Server Integration Services) and SSAS (SQL Server Analysis Services) Developer to join our data engineering and business intelligence team. The ideal candidate will be responsible for developing, maintaining, and optimizing ETL packages and OLAP/Tabular models to support data warehousing and reporting.

Key Responsibilities
- Design, develop, deploy, and maintain SSIS packages for data extraction, transformation, and loading (ETL).
- Develop and manage SSAS cubes (Multidimensional and/or Tabular models) to support advanced analytics and reporting.
- Collaborate with business analysts, data architects, and stakeholders to understand data requirements.
- Optimize existing ETL processes for performance, scalability, and reliability.
- Create and maintain technical documentation, including data flow diagrams and data dictionaries.
- Monitor ETL workflows, troubleshoot issues, and implement data quality checks.
- Perform data validation and unit testing to ensure accuracy of ETL and cube outputs.
- Integrate SSIS/SSAS with Power BI, Excel, and other reporting tools.
- Participate in code reviews, sprint planning, and agile development.

Qualifications
- Bachelor's degree in Computer Science, Information Systems, or a related field.
- 3+ years of hands-on experience with SSIS and SSAS.
- Strong experience with SQL Server and T-SQL.
- Experience building both Multidimensional and Tabular SSAS models.
- Deep understanding of data warehousing concepts, star/snowflake schema, and ETL best practices.
- Familiarity with performance tuning in SSIS and SSAS.
- Proficient with data visualization tools like Power BI or Excel (PivotTables).
- Knowledge of version control systems such as Git.

Preferred Skills
- Experience with Azure Data Factory, Synapse, or other cloud-based data services.
- Exposure to DevOps CI/CD pipelines for SSIS/SSAS deployments.
- Familiarity with MDX and DAX query languages.
- Certification in the Microsoft SQL Server BI Stack is a plus.

Soft Skills
- Strong analytical and problem-solving skills.
- Effective communication and collaboration skills.
- Ability to work independently and manage multiple tasks.
You will join our data engineering and business intelligence team as an SSIS (SQL Server Integration Services) and SSAS (SQL Server Analysis Services) Developer. Your primary responsibilities will include designing, developing, deploying, and maintaining SSIS packages for ETL processes and managing SSAS cubes for advanced analytics and reporting. Collaboration with business analysts, data architects, and stakeholders to grasp data requirements will be essential. You will need to optimize existing ETL processes for improved performance, scalability, and reliability. Additionally, creating and maintaining technical documentation, monitoring ETL workflows, troubleshooting issues, implementing data quality checks, and performing data validation and unit testing are crucial tasks. Integration of SSIS/SSAS with reporting tools like Power BI and Excel, and participation in code reviews, sprint planning, and agile development, are also part of your responsibilities.

A Bachelor's degree in Computer Science, Information Systems, or a related field, along with at least 3 years of hands-on experience with SSIS and SSAS, is required. Strong proficiency in SQL Server and T-SQL, and in building both Multidimensional and Tabular SSAS models, is necessary. A deep understanding of data warehousing concepts, star/snowflake schema, ETL best practices, and performance tuning in SSIS and SSAS is expected. Proficiency in data visualization tools such as Power BI or Excel (PivotTables) is preferred.

Experience with Azure Data Factory, Synapse, or other cloud-based data services, exposure to DevOps CI/CD pipelines for SSIS/SSAS deployments, familiarity with MDX and DAX query languages, and certification in the Microsoft SQL Server BI Stack will be advantageous. Strong analytical and problem-solving skills, effective communication and collaboration abilities, and the capacity to work independently while managing multiple tasks are qualities we are looking for in the ideal candidate.
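Both SSIS postings above call for implementing data quality checks and data validation in ETL workflows. As a dependency-free illustration of the idea (the rule names and sample rows are hypothetical; SSIS itself would express these as data-flow components or conditional splits), a validation step can be written as a set of named row-level checks that report which rows fail which rule:

```python
# Illustrative row-level data-quality checks for an ETL validation step.
# Each rule is a named predicate; the report maps rule name -> failing row indices.

def check_rows(rows, rules):
    """Run each named rule against every row; return failures per rule."""
    failures = {name: [] for name in rules}
    for i, row in enumerate(rows):
        for name, predicate in rules.items():
            if not predicate(row):
                failures[name].append(i)
    # Keep only rules that actually failed somewhere.
    return {name: idxs for name, idxs in failures.items() if idxs}

rules = {
    "amount_non_negative": lambda r: r["amount"] >= 0,
    "customer_id_present": lambda r: bool(r.get("customer_id")),
}
rows = [
    {"customer_id": "C1", "amount": 120.0},
    {"customer_id": "", "amount": -5.0},
]
report = check_rows(rows, rules)
```

Reporting failures by rule and row index, rather than just rejecting a batch, is what makes the check useful for the troubleshooting and monitoring duties these postings describe.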
You will be joining our dynamic team as a highly skilled Senior Data Engineer/DE Architect with 7-10 years of experience. Your expertise in data engineering technologies, particularly SQL, Databricks, Azure services, and client interaction, will be crucial for this role.

Your responsibilities will include:
- Hands-on experience with SQL, Databricks, PySpark, Python, Azure Cloud, and Power BI.
- Designing, developing, and optimizing PySpark workloads.
- Writing scalable, modular, and reusable code in SQL, Python, and PySpark.
- Collaborating with client stakeholders and cross-functional teams.
- Gathering and analyzing requirements, translating business needs into technical solutions.
- Providing regular project updates and reports on progress.
- Ensuring alignment of data solutions with business requirements.
- Working in US shift hours to coordinate with global teams.

We expect you to have:
- 8-10 years of experience in data engineering or related fields.
- Proficiency in SQL, Databricks, PySpark, Python, Azure Cloud, and Power BI.
- Strong written and verbal communication skills.
- Proven ability to collaborate effectively with global stakeholders.
- Strong problem-solving skills and attention to detail.

Apply now and be part of our innovative team!
As the Head of Delivery Management in our organization, you will play a crucial role in leading our delivery operations with a focus on Data Engineering and Data Analytics. Your primary responsibility will be to oversee the end-to-end execution of projects related to data pipelines, analytics platforms, and data-driven solutions. Your expertise in managing projects, optimizing delivery processes, and fostering continuous improvement will be essential in working collaboratively with cross-functional teams comprising data scientists, analysts, and engineers.

Your key responsibilities will include leading and overseeing delivery teams, developing strategies for data-centric project delivery, ensuring successful delivery of data solutions, monitoring delivery performance, and collaborating with teams to address challenges in data architecture, integration, and scalability. You will be required to drive continuous improvement in processes, methodologies, and tools tailored to data projects, maintain strong client and stakeholder relationships, and ensure adherence to best practices in data security, privacy, and compliance. Effective resource management and fostering a culture of innovation, collaboration, and accountability within the delivery team will also be important aspects of your role.

To be successful in this position, you should have a minimum of 15 years of experience in delivery management, with at least 5 years specifically in Data Engineering or Data Analytics domains. Your proven track record in delivering large-scale data projects involving ETL processes, cloud platforms, or data warehouses, along with a strong understanding of data architecture, big data technologies, and analytics frameworks, will be highly valuable. Exceptional leadership and team management skills, excellent project management abilities with exposure to agile methodologies, and familiarity with tools like Tableau, Power BI, Snowflake, Hadoop, or similar platforms are essential requirements. Moreover, your strong analytical and problem-solving skills, experience with financial planning and resource management in data projects, deep understanding of industry trends in data and analytics, and proven ability to drive stakeholder alignment and ensure delivery excellence will set you up for success in this role.

If you are passionate about leading teams and delivering excellence in data-driven initiatives, we welcome you to bring your expertise to our team and contribute to our mission of driving innovation and success in the data engineering and analytics space.
About The Role
We are seeking a highly skilled Web Scraping & Python API Developer to build and maintain scalable data extraction systems for various websites and APIs. The ideal candidate has hands-on experience with web scraping frameworks, RESTful API development, and data integration techniques.

Responsibilities
- Design and develop robust, scalable web scraping scripts using Python (e.g., Scrapy, BeautifulSoup, Selenium).
- Build and maintain RESTful APIs to serve scraped data to internal systems or clients.
- Handle anti-bot mechanisms like CAPTCHAs, JavaScript rendering, and IP rotation.
- Optimize scraping processes for speed, reliability, and data integrity.
- Parse and normalize structured and unstructured data (HTML, JSON, XML).
- Monitor and maintain scraping pipelines; handle failures and site structure changes.
- Implement logging, error handling, and reporting mechanisms.
- Collaborate with product managers and data analysts to define data requirements.
- Ensure compliance with website terms of service and data use regulations.

Requirements
- 3+ years of experience with Python, especially in data extraction and web automation.
- Strong knowledge of web scraping libraries (Scrapy, BeautifulSoup, Requests, Selenium).
- Experience with REST API development (FastAPI, Flask, or Django REST Framework).
- Proficient with data handling libraries (Pandas, JSON, Regex).
- Experience working with proxies, headless browsers, and CAPTCHA-solving tools.
- Familiarity with containerization (Docker) and deployment on cloud platforms (AWS, GCP, Azure).
- Strong understanding of HTML, CSS, and JavaScript (from a scraping perspective).
- Experience with version control (Git) and agile development methodologies.

Nice To Have
- Experience with GraphQL scraping.
- Familiarity with CI/CD pipelines and DevOps tools.
- Knowledge of data storage solutions (PostgreSQL, MongoDB, Elasticsearch).
- Prior experience with large-scale web crawling infrastructure.

Benefits
- Competitive salary and performance bonuses.
- Flexible work hours and remote work option.
- Opportunity to work on high-impact, data-driven products.
- Learning budget for conferences, books, and courses.
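The core of the scraping role above is parsing HTML and normalizing the extracted values. As a minimal sketch using only Python's standard-library `html.parser` (the posting names Scrapy, BeautifulSoup, and Selenium; the HTML below is a made-up sample, and a real scraper would also handle fetching, retries, and proxies), extracting and normalizing price elements might look like:

```python
from html.parser import HTMLParser

# Minimal stdlib extraction sketch; production scrapers would typically
# use Scrapy or BeautifulSoup on top of fetched pages.

class PriceParser(HTMLParser):
    """Collect the text content of <span class="price"> elements."""

    def __init__(self):
        super().__init__()
        self.in_price = False
        self.prices = []

    def handle_starttag(self, tag, attrs):
        if tag == "span" and ("class", "price") in attrs:
            self.in_price = True

    def handle_data(self, data):
        if self.in_price:
            self.prices.append(data.strip())

    def handle_endtag(self, tag):
        if tag == "span":
            self.in_price = False

html = '<div><span class="price">$19.99</span><span class="price">$5.00</span></div>'
parser = PriceParser()
parser.feed(html)
# Normalize: strip the currency symbol and convert to float.
normalized = [float(p.lstrip("$")) for p in parser.prices]
```

Separating extraction (the parser) from normalization (the float conversion) mirrors the posting's split between scraping scripts and the data-handling layer that serves clean values through an API.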
As a Lead Data Engineer at our organization, you will be responsible for utilizing your expertise in data engineering technologies, particularly SQL, Databricks, Azure services, and client interaction. Your role will involve designing, developing, and optimizing PySpark workloads while ensuring the scalability, modularity, and reusability of code in SQL, Python, and PySpark. A key aspect of this role is your ability to communicate effectively with client stakeholders and collaborate with diverse teams to deliver successful outcomes.

Engaging with stakeholders to gather and analyze requirements, as well as providing regular updates and reports on project status and progress, will be crucial components of your responsibilities. You will play a vital role in aligning data solutions with business objectives to drive value for the organization.

In addition to your technical skills, your ability to work during US shift hours to coordinate with global teams is essential for this role. You should possess proficiency in SQL, Databricks, PySpark, Python, Azure Cloud, and Power BI, along with strong communication skills, both written and verbal. Your proven track record of working effectively with global stakeholders, strong problem-solving abilities, and attention to detail will be valuable assets in this role.