Job Title: Data Scientist
Location: Chandigarh (Work from Office)
Experience Required: 3+ Years
Company: SparkBrains Private Limited

Job Description:
We are looking for a Data Scientist with 3+ years of hands-on experience in Machine Learning, Deep Learning, and Large Language Models (LLMs). The ideal candidate should possess strong analytical skills, expertise in data modeling, and the ability to develop and deploy AI-driven solutions that add value to our business and clients.

Key Responsibilities:
1) Data Collection & Preprocessing: Gather, clean, and prepare structured and unstructured data for model training and evaluation.
2) Model Development: Design, develop, and optimize machine learning and deep learning models to solve real-world business problems.
3) LLM Integration: Build, fine-tune, and deploy Large Language Models (LLMs) for various NLP tasks, including text generation, summarization, and sentiment analysis.
4) Feature Engineering: Identify relevant features and implement feature extraction techniques to improve model accuracy.
5) Model Evaluation & Optimization: Conduct rigorous evaluation and fine-tuning to enhance model performance and ensure scalability.
6) Data Visualization & Insights: Create dashboards and reports to communicate findings and insights effectively to stakeholders.
7) API Development & Deployment: Develop APIs and integrate AI/ML models into production systems.
8) Collaboration & Documentation: Collaborate with cross-functional teams to understand business requirements and document all processes and models effectively.

Required Skills & Qualifications:
Education: Bachelor’s or Master’s degree in Computer Science, Data Science, AI, Machine Learning, or a related field.
Experience: Minimum of 3 years of proven experience as a Data Scientist/AI Engineer.
Technical Skills:
1) Proficiency in Python and relevant ML/AI libraries such as TensorFlow, PyTorch, and Scikit-Learn.
2) Hands-on experience with LLMs (e.g., OpenAI, Hugging Face) and fine-tuning models.
3) Strong understanding of Natural Language Processing (NLP), neural networks, and deep learning architectures.
4) Knowledge of data wrangling, data visualization, and feature engineering techniques.
5) Experience with APIs, cloud platforms (AWS, Azure, or GCP), and deployment of AI models in production.
Analytical & Problem-Solving Skills: Strong analytical mindset with the ability to interpret data and derive actionable insights.
Communication Skills: Excellent verbal and written communication skills to collaborate effectively with technical and non-technical teams.

Why Join Us?
- Opportunity to work on cutting-edge AI/ML projects.
- Collaborative work environment with a focus on continuous learning.
- Exposure to diverse industries and domains.
- Competitive salary and growth opportunities.
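To make the "Model Evaluation & Optimization" duty above concrete, here is a minimal, illustrative sketch of the kind of evaluation a candidate would be expected to run on a binary sentiment classifier. The labels are hypothetical examples, not company data; in practice a library such as Scikit-Learn would provide these metrics.

```python
def evaluate(y_true, y_pred):
    """Return (precision, recall, f1) for binary labels, where 1 = positive sentiment."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical ground-truth vs. predicted sentiment labels.
precision, recall, f1 = evaluate([1, 0, 1, 1, 0, 1], [1, 0, 0, 1, 1, 1])
```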
Job Title: Business Development Manager (BDM)
Experience: 4+ Years
Location: IT Park, Chandigarh
Employment Type: Full-Time

About SparkBrains:
SparkBrains is a fast-growing technology company specialising in innovative digital solutions, AI, and software development for clients worldwide. We’re looking for a proactive and results-driven Business Development Manager to join our team and help us expand our market presence.

Key Responsibilities:
- Develop and execute a growth strategy focused on both financial gain and customer satisfaction.
- Conduct research to identify new markets, customer needs, and potential business opportunities.
- Bid on platforms such as Upwork and Freelancer to acquire new projects and clients.
- Arrange and conduct business meetings with prospective clients.
- Promote SparkBrains’ products/services by addressing or anticipating clients’ objectives.
- Prepare sales proposals and contracts, ensuring adherence to legal and company guidelines.
- Maintain accurate records of sales, revenue, invoices, and client communications.
- Provide trustworthy feedback and after-sales support.
- Build long-term relationships with new and existing customers.
- Collaborate with internal teams to ensure client requirements are met effectively.

Requirements:
- Proven working experience as a Business Development Executive/Manager, Sales Executive, or similar role.
- Minimum of 4 years of experience in Upwork/Freelancer bidding.
- Proficiency in MS Office and CRM software (e.g., Zoho, HubSpot) is a plus.
- Excellent communication, negotiation, and presentation skills.
- Ability to build rapport and maintain professional relationships.
- Goal-oriented, self-motivated, and able to work independently.

Benefits:
- Competitive salary with performance-based incentives.
- Opportunities for career growth and skill development.
- Friendly and collaborative work environment.
SparkBrains Private Limited is looking for a dynamic and results-driven Business Development Manager to join our growing team. If you are passionate about building client relationships, identifying new business opportunities, and driving growth in the IT/AI services industry, we’d love to hear from you!

Key Responsibilities:
- Identify and generate new business opportunities in domestic and international markets.
- Build and maintain strong client relationships through proactive communication.
- Develop strategies to expand company reach and drive revenue growth.
- Work closely with the sales and marketing team to achieve business targets.
- Prepare proposals, pitch presentations, and negotiate contracts.

Requirements:
- 3+ years of proven experience in business development or sales (preferably in IT, software services, or AI solutions).
- Strong communication, negotiation, and presentation skills.
- Experience with LinkedIn Sales Navigator, lead generation, and client outreach.
- Ability to work independently and as part of a team.
- Goal-oriented with a track record of achieving or exceeding targets.

Location: Chandigarh (On-site)
Employment Type: Full-time
As a Data Engineer, your role involves managing existing data pipelines and creating new pipelines following best practices for data ingestion. You will continuously monitor data ingestion through Change Data Capture (CDC) for incremental loads, and analyze and fix any failed batch job schedules to capture data effectively.

Key Responsibilities:
- Manage existing data pipelines for data ingestion
- Create and manage new data pipelines following best practices
- Continuously monitor data ingestion through Change Data Capture
- Analyze and fix any failed batch job schedules
- Maintain and update technical documentation of ingested data
- Maintain the centralized data dictionary with the necessary data classifications
- Extract data from source systems, clean it, and ingest it into the big data platform
- Define automation for data cleaning before ingestion
- Handle missing data, remove outliers, and resolve inconsistencies
- Perform data quality checks for accuracy, completeness, consistency, timeliness, believability, and interpretability
- Expose data views/models to reporting and source systems using tools such as Hive or Impala
- Provide cleansed data to the AI team for building data science models
- Implement and configure Informatica Enterprise Data Catalog (EDC) to discover and catalog data assets
- Develop and maintain custom metadata scanners, resource configurations, and lineage extraction processes
- Integrate EDC with other Informatica tools for data quality, master data management, and data governance
- Define and implement data classification, profiling, and quality rules for improved data visibility and trustworthiness
- Collaborate with data stewards, data owners, and governance teams to maintain business glossaries, data dictionaries, and data lineage information
- Establish and maintain data governance policies, standards, and procedures within the EDC environment
- Monitor and troubleshoot EDC performance issues to ensure optimal data availability
- Train and support end-users in utilizing the data catalog for data discovery and analysis
- Stay updated with industry best practices and trends to improve the data catalog implementation
- Collaborate with cross-functional teams to drive data catalog adoption and ensure data governance compliance

Qualifications Required:
- Certified Big Data Engineer (Cloudera/AWS/Azure)
- Expertise in big data products such as the Cloudera stack
- Proficiency in big data querying tools (Hive, HBase, Impala)
- Strong experience with Spark using Python/Scala
- Knowledge of messaging systems such as Kafka or RabbitMQ
- Hands-on experience managing Hadoop clusters and ETL processes
- Ability to design solutions independently based on high-level architecture
- Proficiency in NoSQL databases such as HBase
- Strong knowledge of data management, governance, and metadata concepts
- Proficiency in SQL and a variety of databases/data formats
- Experience with data integration, ETL/ELT processes, and Informatica Data Integration
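As an illustration of the "handle missing data, remove outliers" duty listed above, here is a minimal pre-ingestion cleaning pass using only the Python standard library. The record values are hypothetical; a production pipeline would express the same rules in Spark or an ETL tool.

```python
import statistics

def clean(values, k=1.5):
    """Drop missing entries (None), then drop outliers outside Tukey fences (q1 - k*IQR, q3 + k*IQR)."""
    present = [v for v in values if v is not None]  # handle missing data
    if len(present) < 4:
        return present  # too few points to estimate quartiles reliably
    q1, _, q3 = statistics.quantiles(present, n=4, method="inclusive")
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in present if lo <= v <= hi]  # remove outliers

# Hypothetical sensor readings: one missing value, one obvious outlier.
cleaned = clean([10, 12, None, 11, 13, 1000])
```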