Quick Start

This document applies to the AI Cloud Platform beta release (v0.1.0).

A best practices and troubleshooting guide to help you get started quickly with the Chipltech AI Computing Cloud Platform.

01 Registration

Before using the platform, you need to register an account (beta accounts are obtained through manual application). After obtaining an account, you can log in on the login page.

02 Create Instance

After logging in, you can create your first computing instance:

  • Choose appropriate computing configuration
  • Select runtime image
  • Configure storage, mount network disk
  • Start instance

Note: The system disk is mounted at the container's root directory (/). Data on it is preserved across stop and restart operations, but is deleted when the container is released. Network disk expansion is only supported on the network disk management page.

03 Use Instance

After creating an instance, you can use it from the "Instance Management" page in the console:

  • Connect to the instance via SSH, Jupyter Notebook, etc. (see the command-line sketch after this list)
  • Upload your data and code
  • Start your AI training tasks
  • Monitor instance status and training progress
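
For command-line workflows, the sketch below shows one typical session. It assumes a Linux instance reachable over SSH; the hostname, port, and file names are placeholders, so substitute the connection details shown on your instance's detail page in the console.

# All values below are placeholders; copy the real host, port, and user from the instance detail page.
ssh -p 2222 root@instance-xxxx.chipltech.com                                # connect to the instance
scp -P 2222 -r ./my_project root@instance-xxxx.chipltech.com:/root/        # upload code and data
ssh -p 2222 root@instance-xxxx.chipltech.com \
  "cd /root/my_project && nohup python train.py > train.log 2>&1 &"        # launch training in the background
ssh -p 2222 root@instance-xxxx.chipltech.com "tail -f /root/my_project/train.log"   # follow training progress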

04 Network Disk Management

The platform provides convenient network disk functionality to help you manage data and models:

  • The network disk ("cloud storage") provides persistent storage for container instances with a lifecycle independent of any instance, featuring high capacity, high reliability, scalability, and low cost. Users can dynamically expand a cloud disk's storage space after creating it without losing existing data.
  • Creation and management: The platform creates a default network disk for each user (which cannot be deleted). Users can create and manage other disks in the console, including expansion and deletion operations.
  • Mounting: Users can choose to mount a network disk when creating an instance. Only one network disk can be mounted, and it will be automatically mounted to the /net directory of the instance.
  • Multiple mounting: The same network disk can be mounted to multiple container instances, allowing them to share data on the network disk (see the sketch after this list).
  • Expansion: Users can expand network disks in the console, automatically increasing the disk capacity.
  • Deletion: Users can delete network disks in the console. After deletion, the data on the network disk will be cleared and the network disk will be released.
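
As a sketch of the points above (paths and directory names are illustrative): anything written under /net lives on the network disk, so it survives instance release and, when the same disk is mounted by several instances, is visible to all of them; data kept elsewhere on the system disk is lost when the instance is released.

# Run inside an instance that has a network disk mounted; paths are illustrative.
df -h /net                                # confirm the network disk is mounted and check free space
mkdir -p /net/datasets /net/checkpoints   # shared layout seen by every instance mounting this disk
cp -r /root/my_dataset /net/datasets/     # keep data that must outlive the instance on /net
ls /net/datasets                          # another instance with the same disk mounted sees the same files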

05 Image Management

The platform supports various image management functions for convenient development environment configuration:

  • Choose from preset official images (PyTorch, Llama, IREE, etc.)
  • Save custom images as private or public images. Public images can be used by other users.
  • Share images with team members

07 Instance Operations

| Status     | Operation | Button     |
|------------|-----------|------------|
| Terminated | Enabled   | Start      |
| Running    | Enabled   | Stop       |
| Waiting    | Enabled   | Restart    |
| Failed     | Enabled   | Save Image |
|            | Enabled   | Release    |

08 DeepSeek Chatbot Service

The platform integrates a large language model service based on DeepSeek-R1, providing powerful natural language processing and reasoning capabilities:

  • Thought process visualization - View the model's reasoning steps to help you understand how the model reaches conclusions
  • Strong mathematical and logical capabilities - Solve complex problems and perform multi-step reasoning
  • Code generation and analysis - Write, explain, and debug code in various programming languages
  • Support for continuous dialogue - Maintain context understanding for in-depth communication

How to Use:

  1. Click "Try Now" on the "DeepSeek Chatbot" application in the "AI Application Market" to enter the chatbot dialogue page
  2. Enter your question or instruction in the input box
  3. Send the message and view the AI reply, including the thought process
  4. Save useful information using the copy button on the interface

[Screenshot: DeepSeek Chatbot interface]

09 DeepSeek API Service

The platform provides API services for the DeepSeek-R1 model, making it convenient for developers to integrate large language model capabilities into their applications:

  • OpenAI compatible format - Uses the same API request format as OpenAI, facilitating migration of existing applications
  • Streaming response - Supports streaming output, implementing typewriter effect to enhance user experience
  • System prompt customization - Control model behavior and output style through system prompts
  • Low latency on the internal network - Access from within the internal network keeps response times fast

API Endpoint

https://deepseek.chipltech.com/v1/chat/completions

Note: This API is only available in the internal network environment.

[Diagram: DeepSeek API service architecture]

Example Call (cURL)

curl https://deepseek.chipltech.com/v1/chat/completions \
-X POST \
-H "Authorization: Bearer sk-daxg7wrsh3no43mo" \
-H "Content-Type: application/json" \
-d '{
    "model": "/mnt/jfs/models/DeepSeek-R1-Distill-Qwen-7B",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "9.8和9.11谁大?"}
    ],
    "stream": true,
    "temperature": 0.01
  }'
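
Handling the Streaming Response (cURL)

Building on the request above, this is one way to consume the streaming (SSE) response from the shell and print only the generated text. It is a sketch, not the only option: it assumes jq is installed and reuses the same endpoint, key, and model path. The field name choices[0].delta.content follows the OpenAI-compatible chat completions format; depending on the server, the model's thought process may arrive in a separate field of the delta.

curl -sN https://deepseek.chipltech.com/v1/chat/completions \
  -X POST \
  -H "Authorization: Bearer sk-daxg7wrsh3no43mo" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "/mnt/jfs/models/DeepSeek-R1-Distill-Qwen-7B",
    "messages": [{"role": "user", "content": "Which is larger, 9.8 or 9.11?"}],
    "stream": true,
    "temperature": 0.01
  }' |
while IFS= read -r line; do
  # Streamed output arrives as Server-Sent Events: lines of the form "data: {json}".
  case "$line" in
    "data: [DONE]") break ;;
    "data: "*) printf '%s' "$(printf '%s' "${line#data: }" | jq -r '.choices[0].delta.content // empty')" ;;
  esac
done
echo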

For more detailed information and usage examples, please check the Model Management page.

Key Features

DeepSeek-R1 Large Language Model

The platform integrates the DeepSeek-R1 large language model, providing powerful natural language processing and reasoning capabilities. Two usage methods cover the needs of different scenarios:

  • Interactive Chatbot interface - Directly converse with the model in the browser and view detailed thinking processes
  • RESTful API - Integrate into your applications through standard APIs, supporting streaming responses
  • OpenAI compatible format - Use request formats compatible with OpenAI to facilitate migration of existing applications